Sample records for band image processing

  1. Automatic DNA Diagnosis for 1D Gel Electrophoresis Images using Bio-image Processing Technique.

    PubMed

    Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Shaw, Philip J; Ukosakit, Kittipat; Tragoonrung, Somvong; Tongsima, Sissades

    2015-01-01

    DNA gel electrophoresis is a molecular biology technique for separating different sizes of DNA fragments. Applications of DNA gel electrophoresis include DNA fingerprinting (genetic diagnosis), size estimation of DNA, and DNA separation for Southern blotting. Accurate interpretation of DNA banding patterns from electrophoretic images can be laborious and error-prone when a large number of bands are interrogated manually. Although many bio-imaging techniques have been proposed, none of them can fully automate the typing of DNA owing to the complexities of migration patterns typically obtained. We developed an image-processing tool that automatically calls genotypes from DNA gel electrophoresis images. The image processing workflow comprises three main steps: 1) lane segmentation, 2) extraction of DNA bands, and 3) band genotyping classification. The tool was originally intended to facilitate large-scale genotyping analysis of sugarcane cultivars. We tested the proposed tool on 10 gel images (433 cultivars) obtained from polyacrylamide gel electrophoresis (PAGE) of PCR amplicons for detecting intron length polymorphisms (ILP) at one sugarcane locus. These gel images demonstrated many challenges for automated lane/band segmentation, including lane distortion, band deformity, a high degree of background noise, and bands that are very close together (doublets). Using the proposed bio-imaging workflow, lanes and the DNA bands contained within them are properly segmented, even for adjacent bands with aberrant migration that cannot be separated by conventional techniques. The software, called GELect, automatically performs genotype calling on each lane by comparing it with an all-banding reference, which was created by clustering the existing bands into a non-redundant set of reference bands. The automated genotype calling results were verified by independent manual typing by molecular biologists. This work presents an automated genotyping tool for DNA gel electrophoresis images, called GELect, which was written in Java and made available through the ImageJ framework. With a novel automated image processing workflow, the tool can accurately segment lanes from a gel matrix and intelligently extract distorted and even doublet bands that are difficult to identify with existing image processing tools. Consequently, genotyping from DNA gel electrophoresis can be performed automatically, allowing users to efficiently conduct large-scale DNA fingerprinting via DNA gel electrophoresis. The software is freely available from http://www.biotec.or.th/gi/tools/gelect.
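
    As a rough illustration of the lane-segmentation step described above (and not GELect's actual Java/ImageJ implementation), the sketch below locates lane boundaries from the column-intensity profile of a grayscale gel image; the function name, the mean-intensity threshold rule, and the min_gap parameter are assumptions made for the example.

```python
import numpy as np

def segment_lanes(gel, min_gap=5):
    """Locate lane boundaries in a grayscale gel image (2-D array, dark bands
    on a light background) from its column-intensity profile."""
    profile = gel.mean(axis=0)                 # average intensity per column
    threshold = profile.mean()                 # columns darker than average carry lanes
    in_lane = profile < threshold
    # Collapse consecutive lane columns into (start, stop) intervals.
    edges = np.flatnonzero(np.diff(in_lane.astype(int)))
    bounds = np.r_[0, edges + 1, in_lane.size]
    lanes = [(a, b) for a, b in zip(bounds[:-1], bounds[1:])
             if in_lane[a] and (b - a) >= min_gap]
    return lanes

# Example on a synthetic gel: two dark lanes on a bright background.
gel = np.full((100, 60), 200.0)
gel[:, 10:20] = 80.0
gel[:, 35:45] = 90.0
print(segment_lanes(gel))   # -> [(10, 20), (35, 45)]
```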

  2. Comparison of Photoluminescence Imaging on Starting Multi-Crystalline Silicon Wafers to Finished Cell Performance: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, S.; Yan, F.; Dorn, D.

    2012-06-01

    Photoluminescence (PL) imaging techniques can be applied to multicrystalline silicon wafers throughout the manufacturing process. Both band-to-band PL and defect-band emissions, which are longer-wavelength emissions from sub-bandgap transitions, are used to characterize wafer quality and defect content on starting multicrystalline silicon wafers and neighboring wafers processed at each step through completion of finished cells. Both PL imaging techniques spatially highlight defect regions that represent dislocations and defect clusters. The relative intensities of these imaged defect regions change with processing. Band-to-band PL on wafers in the later steps of processing shows good correlation to cell quality and performance. The defect band images show regions that change relative intensity through processing, and better correlation to cell efficiency and reverse-bias breakdown is more evident at the starting wafer stage as opposed to later process steps. We show that thermal processing in the 200 degrees - 400 degrees C range causes impurities to diffuse to different defect regions, changing their relative defect band emissions.

  3. LANDSAT 4 band 6 data evaluation

    NASA Technical Reports Server (NTRS)

    1984-01-01

    A series of images of a portion of a TM frame of Lake Ontario are presented. The top left frame is the TM Band 6 image, and the top right image is a conventional contrast-stretched image. The bottom left image is a Band 5 to Band 3 ratio image. This image is used to generate a primitive land cover classification. Each land cover (Water, Urban, Forest, Agriculture) is assigned a Band 6 emissivity value. The ratio image is then combined with the Band 6 image and atmospheric propagation data to generate the bottom right image. This image represents a display of data whose digital count can be directly related to estimated surface temperature. The resolution appears higher because the process cell is the size of the TM shortwave pixels.
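
    The following sketch illustrates the ratio-classification idea described above: threshold a Band 5 / Band 3 ratio image into coarse land-cover classes and assign each class an emissivity for use with the Band 6 data. The thresholds, class ordering, and emissivity values are placeholders, not those used in the TM study.

```python
import numpy as np

def emissivity_from_ratio(band5, band3,
                          thresholds=(0.6, 1.2, 2.0),
                          emissivities=(0.98, 0.95, 0.96, 0.97)):
    """Classify a Band 5 / Band 3 ratio image into coarse land-cover classes
    and look up an assumed Band 6 emissivity for each class."""
    ratio = band5.astype(float) / np.maximum(band3.astype(float), 1e-6)
    classes = np.digitize(ratio, thresholds)   # placeholder order: 0=Water, 1=Urban, 2=Agriculture, 3=Forest
    return np.take(emissivities, classes), classes

band5 = np.array([[10.0, 80.0], [120.0, 30.0]])
band3 = np.array([[40.0, 90.0], [50.0, 10.0]])
emis, cls = emissivity_from_ratio(band5, band3)
print(cls)    # coarse class index per pixel
print(emis)   # assumed emissivity per pixel, ready to combine with Band 6 data
```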

  4. Stereo Imaging Miniature Endoscope with Single Imaging Chip and Conjugated Multi-Bandpass Filters

    NASA Technical Reports Server (NTRS)

    Shahinian, Hrayr Karnig (Inventor); Bae, Youngsam (Inventor); White, Victor E. (Inventor); Shcheglov, Kirill V. (Inventor); Manohara, Harish M. (Inventor); Kowalczyk, Robert S. (Inventor)

    2018-01-01

    A dual objective endoscope for insertion into a cavity of a body for providing a stereoscopic image of a region of interest inside of the body, including an imaging device at the distal end for obtaining optical images of the region of interest (ROI) and processing the optical images to form video signals for wired and/or wireless transmission and display of 3D images on a rendering device. The imaging device includes a focal plane detector array (FPA) for obtaining the optical images of the ROI, and processing circuits behind the FPA. The processing circuits convert the optical images into the video signals. The imaging device includes right and left pupils for receiving right and left images through right and left conjugated multi-bandpass filters. Illuminators illuminate the ROI through a multi-bandpass filter having three right and three left pass bands that are matched to the right and left conjugated multi-bandpass filters. A full color image is collected after three or six sequential illuminations with red, green, and blue light.

  5. Alpha-band rhythm modulation under the condition of subliminal face presentation: MEG study.

    PubMed

    Sakuraba, Satoshi; Kobayashi, Hana; Sakai, Shinya; Yokosawa, Koichi

    2013-01-01

    The human brain has two streams to process visual information: a dorsal stream and a ventral stream. The negative potential N170, or its magnetic counterpart M170, is known as the face-specific signal originating from the ventral stream. It is possible to present a visual image unconsciously by using continuous flash suppression (CFS), a visual masking technique adopting binocular rivalry. In this work, magnetoencephalograms were recorded during presentation of three invisible images: face images, which are processed by the ventral stream; tool images, which could be processed by the dorsal stream; and a blank image. Alpha-band activities detected by sensors that are sensitive to M170 were compared. The alpha-band rhythm was suppressed more during presentation of face images than during presentation of the blank image (p=.028). The suppression remained for about 1 s after the presentations ended. However, no significant difference was observed between tool and other images. These results suggest that the alpha-band rhythm can also be modulated by unconscious visual images.

  6. An application of computer image-processing and filmy replica technique to the copper electroplating method of stress analysis

    NASA Astrophysics Data System (ADS)

    Sugiura, M.; Seika, M.

    1994-02-01

    In this study, a new technique to measure the density of slip-bands automatically is developed: a TV image of the slip-bands observed through a microscope is directly processed by an image-processing system using a personal computer, and an accurate value of the density of slip-bands is measured quickly. In the case of measuring the local stresses in machine parts of large size with the copper plating foil, direct observation of slip-bands through an optical microscope is difficult. In this study, to provide a technique close to the direct microscopic observation of slip-bands in the foil attached to a large-sized specimen, the replica method using a plastic film of acetyl cellulose is applied to replicate the slip-bands in the attached foil.

  7. Advanced Land Imager Assessment System

    NASA Technical Reports Server (NTRS)

    Chander, Gyanesh; Choate, Mike; Christopherson, Jon; Hollaren, Doug; Morfitt, Ron; Nelson, Jim; Nelson, Shar; Storey, James; Helder, Dennis; Ruggles, Tim

    2008-01-01

    The Advanced Land Imager Assessment System (ALIAS) supports radiometric and geometric image processing for the Advanced Land Imager (ALI) instrument onboard NASA's Earth Observing-1 (EO-1) satellite. ALIAS consists of two processing subsystems for radiometric and geometric processing of the ALI's multispectral imagery. The radiometric processing subsystem characterizes and corrects, where possible, radiometric qualities including coherent, impulse, and random noise; signal-to-noise ratios (SNRs); detector operability; gain; bias; saturation levels; striping and banding; and the stability of detector performance. The geometric processing subsystem and analysis capabilities support sensor alignment calibrations, sensor chip assembly (SCA)-to-SCA alignments, and band-to-band alignment; and perform geodetic accuracy assessments, modulation transfer function (MTF) characterizations, and image-to-image characterizations. ALIAS also characterizes and corrects band-to-band registration, and performs systematic precision and terrain correction of ALI images. This system can geometrically correct, and automatically mosaic, the SCA image strips into a seamless, map-projected image. This system provides a large database, which enables bulk trending for all ALI image data and significant instrument telemetry. Bulk trending consists of two functions: Housekeeping Processing and Bulk Radiometric Processing. The Housekeeping function pulls telemetry and temperature information from the instrument housekeeping files and writes this information to a database for trending. The Bulk Radiometric Processing function writes statistical information from the dark data acquired before and after the Earth imagery and the lamp data to the database for trending. This allows for multi-scene statistical analyses.

  8. Use of ERTS data for a multidisciplinary analysis of Michigan resources. [forests, agriculture, soils, and landforms]

    NASA Technical Reports Server (NTRS)

    Andersen, A. L.; Myers, W. L.; Safir, G.; Whiteside, E. P. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. The results of this investigation of ratioing simulated ERTS spectral bands and several non-ERTS bands (all collected by an airborne multispectral scanner) indicate that significant terrain information is available from band-ratio images. Ratio images, which are based on the relative spectral changes which occur from one band to another, are useful for enhancing differences and aiding the image interpreter in identifying and mapping the distribution of such terrain elements as seedling crops, all bare soil, organic soil, mineral soil, forest and woodlots, and marsh areas. In addition, the ratio technique may be useful for computer processing to obtain recognition images of large areas at lower costs than with statistical decision rules. The results of this study of ratio processing of aircraft MSS data will be useful for future processing and evaluation of ERTS-1 data for soil and landform studies. Additionally, the results of ratioing spectral bands other than those currently collected by ERTS-1 suggest that some other bands (particularly a thermal band) would be useful in future satellites.

  9. Retrieval of land cover information under thin fog in Landsat TM image

    NASA Astrophysics Data System (ADS)

    Wei, Yuchun

    2008-04-01

    Thin fog, which often appears in remote sensing images of subtropical climate regions, results in low image quality and poor image mapping. Therefore, it is necessary to develop an image processing method to retrieve land cover information under thin fog. In this paper, a Landsat TM image near Taihu Lake, in the subtropical climate zone of China, was used as an example, and a workflow and method for retrieving land cover information under thin fog were built based on ENVI software and a single TM image. The basic workflow covers three parts: 1) isolating the thin fog area in the image according to the spectral differences between bands; 2) retrieving the visible-band information of different land cover types under thin fog from the near-infrared bands, according to the relationships between near-infrared and visible bands of the different land cover types in the fog-free area; and 3) image post-processing. The results showed that the method is simple and suitable, and can be used to improve the quality of TM image mapping effectively.
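
    A minimal sketch of the retrieval idea in step 2, assuming a single global regression rather than the per-land-cover relationships used in the paper: fit visible ~ a*NIR + b on fog-free pixels and apply it inside the fog mask. The function and variable names are illustrative only.

```python
import numpy as np

def restore_visible(vis, nir, fog_mask):
    """Estimate visible-band values under thin fog from a near-infrared band.
    A least-squares line vis ~ a*nir + b is fitted on fog-free pixels and
    applied inside the fog mask."""
    clear = ~fog_mask
    a, b = np.polyfit(nir[clear], vis[clear], deg=1)
    restored = vis.astype(float).copy()
    restored[fog_mask] = a * nir[fog_mask] + b
    return restored

# Synthetic example: the visible band is roughly half the NIR band plus noise,
# and one corner of the scene is flagged as foggy.
rng = np.random.default_rng(0)
nir = rng.uniform(50, 200, size=(64, 64))
vis = 0.5 * nir + rng.normal(0, 2, size=nir.shape)
fog = np.zeros_like(vis, dtype=bool)
fog[:16, :16] = True
vis_restored = restore_visible(vis, nir, fog)
```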

  10. Microscopy mineral image enhancement based on improved adaptive threshold in nonsubsampled shearlet transform domain

    NASA Astrophysics Data System (ADS)

    Li, Liangliang; Si, Yujuan; Jia, Zhenhong

    2018-03-01

    In this paper, a novel microscopy mineral image enhancement method based on an adaptive threshold in the non-subsampled shearlet transform (NSST) domain is proposed. First, the image is decomposed into one low-frequency sub-band and several high-frequency sub-bands. Second, gamma correction is applied to process the low-frequency sub-band coefficients, and the improved adaptive threshold is adopted to suppress the noise of the high-frequency sub-band coefficients. Third, the processed coefficients are reconstructed with the inverse NSST. Finally, an unsharp filter is used to enhance the details of the reconstructed image. Experimental results on various microscopy mineral images demonstrated that the proposed approach has a better enhancement effect in terms of both objective and subjective metrics.
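
    The sketch below mimics this pipeline with a single-level discrete wavelet transform (PyWavelets) standing in for the NSST, which has no standard Python implementation: gamma correction on the approximation sub-band, soft thresholding on the detail sub-bands, reconstruction, then a simple unsharp mask. All parameter values are assumptions.

```python
import numpy as np
import pywt

def enhance(img, gamma=0.8, k=3.0, amount=0.5):
    """Wavelet-domain stand-in for the NSST enhancement pipeline
    (assumes a non-negative image with even dimensions)."""
    img = img.astype(float)
    cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')
    # Gamma correction of the non-negative, rescaled approximation sub-band.
    cA_max = cA.max() if cA.max() > 0 else 1.0
    cA = cA_max * (cA / cA_max) ** gamma
    # Soft-threshold detail coefficients; threshold from a robust noise estimate.
    def soft(c):
        t = k * np.median(np.abs(c)) / 0.6745
        return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)
    rec = pywt.idwt2((cA, (soft(cH), soft(cV), soft(cD))), 'haar')
    # Unsharp masking on the reconstructed image (3x3 box blur).
    pad = np.pad(rec, 1, mode='edge')
    blur = sum(pad[i:i + rec.shape[0], j:j + rec.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return rec + amount * (rec - blur)

out = enhance(np.random.default_rng(1).uniform(0, 255, size=(128, 128)))
print(out.shape)
```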

  11. The design and application of a multi-band IR imager

    NASA Astrophysics Data System (ADS)

    Li, Lijuan

    2018-02-01

    Multi-band IR imaging systems have many applications in security, national defense, the petroleum and gas industry, etc. So the relevant technologies have been getting more and more attention in recent years. As we know, when used in missile warning and missile seeker systems, multi-band IR imaging technology has the advantage of high target recognition capability and low false alarm rate if suitable spectral bands are selected. Compared with a traditional single-band IR imager, a multi-band IR imager can make use of spectral features in addition to space and time domain features to discriminate targets from background clutter and decoys. So one of the key tasks is to select the right spectral bands in which the feature difference between targets and false targets is evident and can be well utilized. A multi-band IR imager is a useful instrument for collecting multi-band IR images of targets, backgrounds, and decoys for spectral band selection studies at low cost and with adjustable parameters and properties compared with a commercial imaging spectrometer. In this paper, a multi-band IR imaging system is developed that collects images in 4 spectral bands of various scenes in each acquisition and can be extended to other short-wave and mid-wave IR spectral band combinations by changing filter groups. The multi-band IR imaging system consists of a broad-band optical system, a cryogenic InSb large array detector, a spinning filter wheel, and an electronic processing system. The multi-band IR imaging system's performance is tested in real data collection experiments.

  12. Progressive Band Selection

    NASA Technical Reports Server (NTRS)

    Fisher, Kevin; Chang, Chein-I

    2009-01-01

    Progressive band selection (PBS) reduces spectral redundancy without significant loss of information, thereby reducing hyperspectral image data volume and processing time. Used onboard a spacecraft, it can also reduce image downlink time. PBS prioritizes an image's spectral bands according to priority scores that measure their significance to a specific application. Then it uses one of three methods to select an appropriate number of the most useful bands. Key challenges for PBS include selecting an appropriate criterion to generate band priority scores, and determining how many bands should be retained in the reduced image. The image's Virtual Dimensionality (VD), once computed, is a reasonable estimate of the latter. We describe the major design details of PBS and test PBS in a land classification experiment.
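
    A minimal sketch of the band-prioritization and retention step, using per-band variance as a placeholder priority score (PBS supports other criteria) and taking the number of retained bands as a given rather than estimating the virtual dimensionality.

```python
import numpy as np

def progressive_band_selection(cube, n_keep):
    """Rank spectral bands of a hyperspectral cube (bands, rows, cols) by a
    priority score and keep the top n_keep, preserving original band order."""
    scores = cube.reshape(cube.shape[0], -1).var(axis=1)   # placeholder score
    order = np.argsort(scores)[::-1]                       # highest priority first
    keep = np.sort(order[:n_keep])
    return cube[keep], keep

cube = np.random.default_rng(1).normal(size=(50, 32, 32))
reduced, kept = progressive_band_selection(cube, n_keep=10)
print(kept.shape, reduced.shape)     # (10,) (10, 32, 32)
```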

  13. Optimal processing for gel electrophoresis images: Applying Monte Carlo Tree Search in GelApp.

    PubMed

    Nguyen, Phi-Vu; Ghezal, Ali; Hsueh, Ya-Chih; Boudier, Thomas; Gan, Samuel Ken-En; Lee, Hwee Kuan

    2016-08-01

    In biomedical research, gel band size estimation in electrophoresis analysis is a routine process. To facilitate and automate this process, numerous software tools have been released, notably the GelApp mobile app. However, the band detection accuracy is limited due to a band detection algorithm that cannot adapt to the variations in input images. To address this, we used the Monte Carlo Tree Search with Upper Confidence Bound (MCTS-UCB) method to efficiently search for optimal image processing pipelines for the band detection task, thereby improving the segmentation algorithm. Incorporating this into GelApp, we report a significant enhancement of gel band detection accuracy by 55.9 ± 2.0% for protein polyacrylamide gels, and 35.9 ± 2.5% for DNA SYBR green agarose gels. This implementation is a proof-of-concept in demonstrating MCTS-UCB as a strategy to optimize general image segmentation. The improved version of GelApp, GelApp 2.0, is freely available on both the Google Play Store (for the Android platform) and the Apple App Store (for the iOS platform). © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
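
    The core selection rule of MCTS-UCB is the UCB1 score, sketched below for a flat (single-level) choice among candidate band-detection filters; the filter names and reward values are made up, and a full MCTS would expand a tree of such decisions rather than a single bandit.

```python
import math
import random

def ucb1_choose(stats, total_plays, c=math.sqrt(2)):
    """Pick the child (candidate processing step) with the highest UCB score:
    mean reward + c * sqrt(ln(N) / n). Unvisited children are tried first.
    `stats` maps child -> [plays, total_reward]."""
    best, best_score = None, -float('inf')
    for child, (plays, reward) in stats.items():
        if plays == 0:
            return child
        score = reward / plays + c * math.sqrt(math.log(total_plays) / plays)
        if score > best_score:
            best, best_score = child, score
    return best

# Toy use: three candidate band-detection filters scored by a made-up
# segmentation-accuracy reward.
stats = {'median': [0, 0.0], 'gaussian': [0, 0.0], 'bilateral': [0, 0.0]}
for t in range(1, 200):
    pick = ucb1_choose(stats, t)
    reward = {'median': 0.6, 'gaussian': 0.7, 'bilateral': 0.8}[pick] + random.gauss(0, 0.05)
    stats[pick][0] += 1
    stats[pick][1] += reward
print(max(stats, key=lambda k: stats[k][1] / max(stats[k][0], 1)))  # likely 'bilateral'
```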

  14. Band co-registration modeling of LAPAN-A3/IPB multispectral imager based on satellite attitude

    NASA Astrophysics Data System (ADS)

    Hakim, P. R.; Syafrudin, A. H.; Utama, S.; Jayani, A. P. S.

    2018-05-01

    One significant geometric distortion in images from the LAPAN-A3/IPB multispectral imager is the co-registration error between the color channel detectors. Band co-registration distortion can usually be corrected using several approaches: a manual method, an image matching algorithm, or a sensor modeling and calibration approach. This paper develops another approach to minimize band co-registration distortion in LAPAN-A3/IPB multispectral images by using supervised modeling of image matching with respect to satellite attitude. Modeling results show that band co-registration error in the across-track axis is strongly influenced by the yaw angle, while error in the along-track axis is fairly influenced by both the pitch and roll angles. The accuracy of the models obtained is good, with errors between 1 and 3 pixels for each axis of each band co-registration pair. This means that the model can be used to correct the distorted images without the need for a slower image matching algorithm or the laborious effort required by the manual and sensor-calibration approaches. Since the calculation can be executed in the order of seconds, this approach can be used in real-time quick-look image processing in a ground station or even in on-board image processing on the satellite.
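
    A minimal sketch of the supervised modeling step, assuming a plain least-squares linear model from attitude angles to one axis of band-to-band offset; the coefficients and training data below are synthetic, not LAPAN-A3/IPB values.

```python
import numpy as np

def fit_coregistration_model(attitude, offsets):
    """Fit a linear model offset ~ w . [roll, pitch, yaw, 1] for one axis of
    band-to-band misregistration."""
    A = np.column_stack([attitude, np.ones(len(attitude))])
    w, *_ = np.linalg.lstsq(A, offsets, rcond=None)
    return w

# Synthetic training data: across-track offset driven mostly by yaw.
rng = np.random.default_rng(2)
att = rng.normal(0, 1.0, size=(200, 3))               # roll, pitch, yaw (degrees)
across = 2.5 * att[:, 2] + 0.1 * att[:, 0] + rng.normal(0, 0.3, 200)
w = fit_coregistration_model(att, across)
print(np.round(w, 2))                                 # ~[0.1, 0.0, 2.5, 0.0]
predicted = np.column_stack([att, np.ones(len(att))]) @ w
```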

  15. LANDSAT-4 image data quality analysis for energy related applications. [nuclear power plant sites]

    NASA Technical Reports Server (NTRS)

    Wukelic, G. E. (Principal Investigator)

    1983-01-01

    No usable LANDSAT 4 TM data were obtained for the Hanford site in the Columbia Plateau region, but TM simulator data for a Virginia Electric Company nuclear power plant were used to test image processing algorithms. Principal component analyses of this data set clearly indicated that thermal plumes in surface waters used for reactor cooling would be discernible. Image processing and analysis programs were successfully tested using the 7-band Arkansas test scene, and preliminary analysis of TM data for the Savannah River Plant shows that current interactive image enhancement, analysis, and integration techniques can be effectively used for LANDSAT 4 data. Thermal band data appear adequate for gross estimates of thermal changes occurring near operating nuclear facilities, especially in surface water bodies being used for reactor cooling purposes. Additional image processing software was written and tested which provides for more rapid and effective analysis of the 7-band TM data.

  16. A comparison of performance of automatic cloud coverage assessment algorithm for Formosat-2 image using clustering-based and spatial thresholding methods

    NASA Astrophysics Data System (ADS)

    Hsu, Kuo-Hsien

    2012-11-01

    Formosat-2 imagery is a kind of high-spatial-resolution (2 meters GSD) remote sensing satellite data, which includes one panchromatic band and four multispectral bands (blue, green, red, near-infrared). An essential step in the daily processing of received Formosat-2 images is to estimate the cloud statistics of an image using the Automatic Cloud Coverage Assessment (ACCA) algorithm. The cloud statistics of an image are subsequently recorded as important metadata for the image product catalog. In this paper, we propose an ACCA method with two consecutive stages: pre-processing and post-processing analysis. For pre-processing analysis, unsupervised K-means classification, Sobel's method, a thresholding method, non-cloudy pixel reexamination, and a cross-band filter method are implemented in sequence for cloud statistic determination. For post-processing analysis, the Box-Counting fractal method is implemented. In other words, the cloud statistics are first determined via pre-processing analysis, and the correctness of the cloud statistics for the different spectral bands is then cross-examined qualitatively and quantitatively via post-processing analysis. The selection of an appropriate thresholding method is critical to the result of the ACCA method. Therefore, in this work, we first conduct a series of experiments on clustering-based and spatial thresholding methods, including the Otsu, Local Entropy (LE), Joint Entropy (JE), Global Entropy (GE), and Global Relative Entropy (GRE) methods, for performance comparison. The results show that the Otsu and GE methods both perform better than the others for Formosat-2 images. Additionally, our proposed ACCA method, with Otsu's method selected as the thresholding method, has successfully extracted the cloudy pixels of Formosat-2 images for accurate cloud statistic estimation.
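
    Since thresholding is the critical step, the sketch below implements Otsu's threshold directly in NumPy and applies it to a single band as a simplified cloud mask; the full ACCA pipeline described above (K-means, Sobel, reexamination, cross-band filtering, box-counting) is not reproduced, and the test data are synthetic.

```python
import numpy as np

def otsu_threshold(band, nbins=256):
    """Return the Otsu threshold of one image band by maximizing
    between-class variance over its histogram."""
    hist, edges = np.histogram(band.ravel(), bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist.astype(float) / hist.sum()
    omega = np.cumsum(w)                       # class-0 probability
    mu = np.cumsum(w * centers)                # cumulative class-0 mean * prob
    mu_t = mu[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return centers[np.argmax(sigma_b)]

# Simplified cloud-masking sketch: bright pixels above the Otsu threshold
# are flagged as cloud.
band = np.random.default_rng(3).normal(0.2, 0.05, (128, 128))
band[40:80, 40:80] += 0.6                      # synthetic bright "cloud"
cloud_mask = band > otsu_threshold(band)
print(cloud_mask.mean())                       # fraction flagged as cloud
```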

  17. Demosaicking for full motion video 9-band SWIR sensor

    NASA Astrophysics Data System (ADS)

    Kanaev, Andrey V.; Rawhouser, Marjorie; Kutteruf, Mary R.; Yetzbacher, Michael K.; DePrenger, Michael J.; Novak, Kyle M.; Miller, Corey A.; Miller, Christopher W.

    2014-05-01

    Short wave infrared (SWIR) spectral imaging systems are vital for Intelligence, Surveillance, and Reconnaissance (ISR) applications because of their abilities to autonomously detect targets and classify materials. Typically the spectral imagers are incapable of providing Full Motion Video (FMV) because of their reliance on line scanning. We enable FMV capability for a SWIR multi-spectral camera by creating a repeating pattern of 3x3 spectral filters on a staring focal plane array (FPA). In this paper we present the imagery from an FMV SWIR camera with nine discrete bands and discuss the image processing algorithms necessary for its operation. The main task of image processing in this case is demosaicking of the spectral bands, i.e., reconstructing full spectral images with the original FPA resolution from the spatially subsampled and incomplete spectral data acquired with the chosen filter array pattern. To the best of the authors' knowledge, demosaicking algorithms for nine or more equally sampled bands have not been reported before. Moreover, all existing algorithms developed for demosaicking visible color filter arrays with fewer than nine colors assume either certain relationships between the visible colors, which are not valid for SWIR imaging, or the presence of one color band with a higher sampling rate than the rest of the bands, which does not conform to our spectral filter pattern. We will discuss and present results for two novel approaches to demosaicking: interpolation using multi-band edge information and application of multi-frame super-resolution to single-frame resolution enhancement of multi-spectral spatially multiplexed images.
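
    As a baseline for the demosaicking problem described above (not one of the paper's two approaches), the sketch below performs nearest-neighbour reconstruction of a 9-band cube from a repeating 3x3 spectral filter pattern; the band-to-offset assignment is an assumption.

```python
import numpy as np

def demosaic_3x3_nearest(mosaic):
    """Nearest-neighbour demosaicking of a 9-band image multiplexed on a
    repeating 3x3 spectral filter pattern: band k is assumed to be sampled
    at pattern offset (k // 3, k % 3). Assumes H and W are divisible by 3."""
    H, W = mosaic.shape
    cube = np.empty((9, H, W), dtype=float)
    for k in range(9):
        di, dj = divmod(k, 3)
        sub = mosaic[di::3, dj::3]                    # sparse samples of band k
        full = np.kron(sub, np.ones((3, 3)))          # replicate each sample over a 3x3 block
        cube[k] = full[:H, :W]
    return cube

mosaic = np.random.default_rng(4).uniform(size=(120, 90))
cube = demosaic_3x3_nearest(mosaic)
print(cube.shape)        # (9, 120, 90)
```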

  18. Wide-band gas leak imaging detection system using UFPA

    NASA Astrophysics Data System (ADS)

    Jin, Wei-qi; Li, Jia-kun; Dun, Xiong; Jin, Minglei; Wang, Xia

    2014-11-01

    The leakage of toxic or hazardous gases not only pollutes the environment but also threatens people's lives and property. Many countries attach great importance to rapid and effective gas leak detection technology and instrument development. However, existing gas leak imaging detection systems are generally limited to a narrow band in the Medium Wavelength Infrared (MWIR) or Long Wavelength Infrared (LWIR) with cooled focal plane imaging, which makes it difficult to detect the common kinds of leaking gases. Furthermore, because a costly cooled focal plane array is used, wider adoption is severely limited. To address this issue, a wide-band gas leak IR imaging detection system using an Uncooled Focal Plane Array (UFPA) detector is proposed, which is composed of a wide-band IR optical lens, sub-band filters and a switching device, a wide-band UFPA detector, and video processing and system control circuits. A wide-band (3µm~12µm) UFPA detector is obtained by replacing the protection window and optimizing the structural parameters of the detector. A large relative aperture (F#=0.75) wide-band (3μm~12μm) multispectral IR lens is developed using a focus compensation method that accounts for the thickness of the narrow-band filters. The gas leak IR image quality and the detection sensitivity are improved by using IR image Non-Uniformity Correction (NUC) and Digital Detail Enhancement (DDE) technology. The wide-band gas leak IR imaging detection system using the UFPA detector takes full advantage of the wide-band (MWIR and LWIR) response characteristic of the UFPA detector and digital image processing to provide gas leak video that is easy for human eyes to observe. Many kinds of gases that are not visible to the naked eye can be sensitively detected and visualized. The designed system has many commendable advantages, such as scanning a wide range simultaneously, locating the leaking source quickly, and visualizing the gas plume intuitively. Simulation experiments show that gas IR imaging detection has great advantages and wide potential for adoption compared with traditional techniques such as point-contact or line-contactless detection.

  19. Use of feature extraction techniques for the texture and context information in ERTS imagery: Spectral and textural processing of ERTS imagery. [classification of Kansas land use]

    NASA Technical Reports Server (NTRS)

    Haralick, R. H. (Principal Investigator); Bosley, R. J.

    1974-01-01

    The author has identified the following significant results. A procedure was developed to extract cross-band textural features from ERTS MSS imagery. Evolving from a single image texture extraction procedure which uses spatial dependence matrices to measure relative co-occurrence of nearest neighbor grey tones, the cross-band texture procedure uses the distribution of neighboring grey tone N-tuple differences to measure the spatial interrelationships, or co-occurrences, of the grey tone N-tuples present in a texture pattern. In both procedures, texture is characterized in such a way as to be invariant under linear grey tone transformations. However, the cross-band procedure complements the single image procedure by extracting texture information and spectral information contained in ERTS multi-images. Classification experiments show that when used alone, without spectral processing, the cross-band texture procedure extracts more information than the single image texture analysis. Results show an improvement in average correct classification from 86.2% to 88.8% for ERTS image no. 1021-16333 with the cross-band texture procedure. However, when used together with spectral features, the single image texture plus spectral features perform better than the cross-band texture plus spectral features, with an average correct classification of 93.8% and 91.6%, respectively.
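
    For reference, the single-image building block mentioned above, a grey-tone spatial-dependence (co-occurrence) matrix for one neighbour offset, can be sketched as follows; the quantization level and the contrast/energy features computed from it are illustrative choices, not the cross-band N-tuple procedure itself.

```python
import numpy as np

def cooccurrence(img, levels=8, offset=(0, 1)):
    """Grey-tone spatial-dependence matrix for one non-negative offset
    (e.g. right or down neighbour), after quantizing the image to a small
    number of grey levels. Returned matrix is symmetric and normalized."""
    q = np.floor(img.astype(float) / (img.max() + 1e-9) * levels).astype(int)
    q = np.clip(q, 0, levels - 1)
    di, dj = offset
    a = q[:q.shape[0] - di, :q.shape[1] - dj]
    b = q[di:, dj:]
    P = np.zeros((levels, levels), dtype=float)
    np.add.at(P, (a.ravel(), b.ravel()), 1.0)
    P += P.T                                   # symmetric, as in Haralick's formulation
    return P / P.sum()

img = np.random.default_rng(5).integers(0, 255, size=(64, 64))
P = cooccurrence(img, levels=8, offset=(0, 1))
i, j = np.indices(P.shape)
contrast = ((i - j) ** 2 * P).sum()            # example texture features
energy = (P ** 2).sum()
print(round(contrast, 3), round(energy, 3))
```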

  20. Evaluation of photographic enhancements of Landsat imagery

    NASA Technical Reports Server (NTRS)

    Dean, K. G.; Spencer, J. P.

    1982-01-01

    The photographic processing of color Landsat imagery was evaluated to determine the optimal enhancement techniques. Twenty-six images were examined to explore the effects of gamma values upon image interpretation in a subarctic environment. Gamma values were varied on the images of bands 4 through 7 prior to the creation of the color composites. This yielded color-composited images with various color balances. These images were evaluated in terms of visible geological features (drainage, lineaments, landforms, etc.) and landcover features (exposed rock and ground, coniferous forest, etc.). The results indicate that the most informative images are created by using gamma values of 2.0 for band 4, 1.0 for band 5, and 2.0 for band 6 or 7. Other photographic enhancements tend to enhance some features at the expense of others.

  1. A natural-color mapping for single-band night-time image based on FPGA

    NASA Astrophysics Data System (ADS)

    Wang, Yilun; Qian, Yunsheng

    2018-01-01

    A natural-color mapping method for single-band night-time images based on FPGA can transfer the colors of a reference image to a single-band night-time image, producing results consistent with human visual habits that can help observers identify targets. This paper introduces the processing of the natural-color mapping algorithm based on FPGA. First, the image is transformed based on histogram equalization, and the intensity features and standard deviation features of the reference image are stored in SRAM. Then, the intensity features and standard deviation features of the real-time digital images are calculated by the FPGA. Finally, the FPGA completes the color mapping by matching pixels between images using the features in the luminance channel.
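
    A minimal sketch of the statistics-matching idea in the luminance channel, assuming a simple mean/standard-deviation transfer from the reference image to the night-time image; the FPGA pipeline, histogram equalization, and per-channel colour assignment are omitted, and all names and data are illustrative.

```python
import numpy as np

def match_statistics(night, reference_luma):
    """Transfer first-order luminance statistics (mean and standard deviation)
    from a reference image to a single-band night-time image."""
    n_mean, n_std = night.mean(), night.std() + 1e-9
    r_mean, r_std = reference_luma.mean(), reference_luma.std()
    return (night - n_mean) / n_std * r_std + r_mean

rng = np.random.default_rng(6)
night = rng.normal(30, 5, size=(240, 320))            # dim, low-contrast sensor image
ref_luma = rng.normal(120, 40, size=(240, 320))       # luminance of a daytime reference
mapped = match_statistics(night, ref_luma)
print(round(mapped.mean(), 1), round(mapped.std(), 1))    # ~120.0, ~40.0
```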

  2. Development of cataclastic foliation in deformation bands in feldspar-rich conglomerates of the Rio do Peixe Basin, NE Brazil

    NASA Astrophysics Data System (ADS)

    Nicchio, Matheus A.; Nogueira, Francisco C. C.; Balsamo, Fabrizio; Souza, Jorge A. B.; Carvalho, Bruno R. B. M.; Bezerra, Francisco H. R.

    2018-02-01

    In this work we describe the deformation mechanisms and processes that occurred during the evolution of cataclastic deformation bands developed in the feldspar-rich conglomerates of the Rio do Peixe Basin, NE Brazil. We studied bands with different deformation intensities, ranging from single cm-thick tabular bands to more evolved clustering zones. The chemical identification of cataclastic material within deformation bands was performed using compositional mapping in SEM images, EDX and XRD analyses. Deformation processes were identified by microstructural analysis and by the quantification of comminution intensity, performed using digital image processing. The deformation bands are internally non-homogeneous and developed during five evolutionary stages: (1) moderate grain size reduction, grain rotation and grain border comminution; (2) intense grain size reduction with preferential feldspar fragmentation; (3) formation of subparallel C-type slip zones; (4) formation of S-type structures, generating S-C-like fabric; and (5) formation of C′-type slip zones, generating well-developed foliation that resembles S-C-C′-type structures in a ductile environment. Such deformation fabric is mostly imparted by the preferential alignment of intensely comminuted feldspar fragments along thin slip zones developed within deformation bands. These processes were purely mechanical (i.e., grain crushing and reorientation). No clays or fluids were involved in such processes.

  3. Synthesis of Multispectral Bands from Hyperspectral Data: Validation Based on Images Acquired by AVIRIS, Hyperion, ALI, and ETM+

    NASA Technical Reports Server (NTRS)

    Blonski, Slawomir; Glasser, Gerald; Russell, Jeffrey; Ryan, Robert; Terrie, Greg; Zanoni, Vicki

    2003-01-01

    Spectral band synthesis is a key step in the process of creating a simulated multispectral image from hyperspectral data. In this step, narrow hyperspectral bands are combined into broader multispectral bands. Such an approach has been used quite often, but to the best of our knowledge accuracy of the band synthesis simulations has not been evaluated thus far. Therefore, the main goal of this paper is to provide validation of the spectral band synthesis algorithm used in the ART software. The next section contains a description of the algorithm and an example of its application. Using spectral responses of AVIRIS, Hyperion, ALI, and ETM+, the following section shows how the synthesized spectral bands compare with actual bands, and it presents an evaluation of the simulation accuracy based on results of MODTRAN modeling. In the final sections of the paper, simulated images are compared with data acquired by actual satellite sensors. First, a Landsat 7 ETM+ image is simulated using an AVIRIS hyperspectral data cube. Then, two datasets collected with the Hyperion instrument from the EO-1 satellite are used to simulate multispectral images from the ALI and ETM+ sensors.
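
    A simplified sketch of the band-synthesis step, assuming each broad multispectral band is a weighted average of the narrow hyperspectral bands, with weights taken from the broad band's relative spectral response (RSR) sampled at the narrow-band centres; the wavelengths and RSR values below are invented for the example, and the approach is not claimed to match the ART software's algorithm exactly.

```python
import numpy as np

def synthesize_band(hyper_cube, hyper_centers, rsr_centers, rsr_values):
    """Synthesize one broad multispectral band from a hyperspectral cube
    (bands, rows, cols) by RSR-weighted averaging of the narrow bands."""
    weights = np.interp(hyper_centers, rsr_centers, rsr_values, left=0.0, right=0.0)
    weights = weights / weights.sum()
    return np.tensordot(weights, hyper_cube, axes=(0, 0))

# Toy example: 20 narrow bands between 600 and 700 nm combined into one broad
# "red" band with a triangular RSR centred at 650 nm (all values assumed).
centers = np.linspace(600, 700, 20)
cube = np.random.default_rng(7).uniform(size=(20, 16, 16))
rsr_wl = np.array([620.0, 650.0, 680.0])
rsr = np.array([0.0, 1.0, 0.0])
red = synthesize_band(cube, centers, rsr_wl, rsr)
print(red.shape)     # (16, 16)
```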

  4. GTG banding pattern on human metaphase chromosomes revealed by high resolution atomic-force microscopy.

    PubMed

    Thalhammer, S; Koehler, U; Stark, R W; Heckl, W M

    2001-06-01

    Surface topography of human metaphase chromosomes following GTG banding was examined using high resolution atomic force microscopy (AFM). Although using a completely different imaging mechanism, which is based on the mechanical interaction of a probe tip with the chromosome, the observed banding pattern is comparable to results from light microscopy and a karyotype of the AFM imaged metaphase spread can be generated. The AFM imaging process was performed on a normal 2n = 46, XX karyotype and on a 2n = 46, XY, t(2;15)(q23;q15) karyotype as an example of a translocation of chromosomal bands.

  5. Classification of visible and infrared hyperspectral images based on image segmentation and edge-preserving filtering

    NASA Astrophysics Data System (ADS)

    Cui, Binge; Ma, Xiudan; Xie, Xiaoyun; Ren, Guangbo; Ma, Yi

    2017-03-01

    The classification of hyperspectral images with a few labeled samples is a major challenge which is difficult to meet unless some spatial characteristics can be exploited. In this study, we proposed a novel spectral-spatial hyperspectral image classification method that exploited spatial autocorrelation of hyperspectral images. First, image segmentation is performed on the hyperspectral image to assign each pixel to a homogeneous region. Second, the visible and infrared bands of hyperspectral image are partitioned into multiple subsets of adjacent bands, and each subset is merged into one band. Recursive edge-preserving filtering is performed on each merged band which utilizes the spectral information of neighborhood pixels. Third, the resulting spectral and spatial feature band set is classified using the SVM classifier. Finally, bilateral filtering is performed to remove "salt-and-pepper" noise in the classification result. To preserve the spatial structure of hyperspectral image, edge-preserving filtering is applied independently before and after the classification process. Experimental results on different hyperspectral images prove that the proposed spectral-spatial classification approach is robust and offers more classification accuracy than state-of-the-art methods when the number of labeled samples is small.

  6. Characterization of radiometric calibration of LANDSAT-4 TM reflective bands

    NASA Technical Reports Server (NTRS)

    Barker, J. L.; Abrams, R. B.; Ball, D. L.; Leung, K. C.

    1984-01-01

    Prelaunch and postlaunch internal calibrator, image, and background data were used to characterize the radiometric performance of the LANDSAT-4 TM and to recommend improved procedures for radiometric calibration. All but two channels (band 2, channel 4; band 5, channel 3) behave normally. Gain changes relative to a postlaunch reference for channels within a band vary within 0.5 percent as a group. Instrument gain for channels in the cold focal plane oscillates. Noise in background and image data ranges from 0.5 to 1.7 counts. Average differences in forward and reverse image data indicate a need for separate calibration processing of forward and reverse scans. Precision is improved by increasing the pulse integration width from 31 to 41 minor frames, depending on the band.

  7. Infrared spectroscopic imaging for noninvasive detection of latent fingerprints.

    PubMed

    Crane, Nicole J; Bartick, Edward G; Perlman, Rebecca Schwartz; Huffman, Scott

    2007-01-01

    The capability of Fourier transform infrared (FTIR) spectroscopic imaging to provide detailed images of unprocessed latent fingerprints while also preserving important trace evidence is demonstrated. Unprocessed fingerprints were developed on various porous and nonporous substrates. Data-processing methods used to extract the latent fingerprint ridge pattern from the background material included basic infrared spectroscopic band intensities, addition and subtraction of band intensity measurements, principal components analysis (PCA) and calculation of second derivative band intensities, as well as combinations of these various techniques. Additionally, trace evidence within the fingerprints was recovered and identified.

  8. Selective principal component regression analysis of fluorescence hyperspectral image to assess aflatoxin contamination in corn

    USDA-ARS?s Scientific Manuscript database

    Selective principal component regression analysis (SPCR) uses a subset of the original image bands for principal component transformation and regression. For optimal band selection before the transformation, this paper used genetic algorithms (GA). In this case, the GA process used the regression co...

  9. Documentation of procedures for textural/spatial pattern recognition techniques

    NASA Technical Reports Server (NTRS)

    Haralick, R. M.; Bryant, W. F.

    1976-01-01

    A C-130 aircraft was flown over the Sam Houston National Forest on March 21, 1973 at 10,000 feet altitude to collect multispectral scanner (MSS) data. Existing textural and spatial automatic processing techniques were used to classify the MSS imagery into specified timber categories. Several classification experiments were performed on this data using features selected from the spectral bands and a textural transform band. The results indicate that (1) spatial post-processing a classified image can cut the classification error to 1/2 or 1/3 of its initial value, (2) spatial post-processing the classified image using combined spectral and textural features produces a resulting image with less error than post-processing a classified image using only spectral features, and (3) classification without spatial post-processing using the combined spectral and textural features tends to produce about the same error rate as classification without spatial post-processing using only spectral features.

  10. An integrated approach to the use of Landsat TM data for gold exploration in west central Nevada

    NASA Technical Reports Server (NTRS)

    Mouat, D. A.; Myers, J. S.; Miller, N. L.

    1987-01-01

    This paper represents an integration of several Landsat TM image processing techniques with other data to discriminate the lithologies and associated areas of hydrothermal alteration in the vicinity of the Paradise Peak gold mine in west central Nevada. A microprocessor-based image processing system and an IDIMS system were used to analyze data from a 512 X 512 window of a Landsat-5 TM scene collected on June 30, 1984. Image processing techniques included simple band composites, band ratio composites, principal components composites, and baseline-based composites. These techniques were chosen based on their ability to discriminate the spectral characteristics of the products of hydrothermal alteration as well as of the associated regional lithologies. The simple band composite, ratio composite, two principal components composites, and the baseline-based composites separately can define the principal areas of alteration. Combined, they provide a very powerful exploration tool.

  11. Real-time hyperspectral imaging for food safety applications

    USDA-ARS?s Scientific Manuscript database

    Multispectral imaging systems with selected bands can commonly be used for real-time applications of food processing. Recent research has demonstrated several image processing methods including binning, noise removal filter, and appropriate morphological analysis in real-time mode can remove most fa...

  12. A novel multi-band SAR data technique for fully automatic oil spill detection in the ocean

    NASA Astrophysics Data System (ADS)

    Del Frate, Fabio; Latini, Daniele; Taravat, Alireza; Jones, Cathleen E.

    2013-10-01

    With the launch of COSMO-SkyMed, the Italian constellation of small satellites for Mediterranean basin observation, and the German TerraSAR-X mission, the delivery of very high-resolution SAR data for observing the Earth day or night has increased remarkably. In particular, also taking into account other ongoing missions such as Radarsat, and those no longer operating such as ALOS PALSAR, ERS-SAR and ENVISAT, the amount of information at different bands available to users interested in oil spill analysis has become massive. Moreover, future SAR missions such as Sentinel-1 are scheduled for launch in the coming years, while additional support can be provided by Uninhabited Aerial Vehicle (UAV) SAR systems. Considering the opportunity represented by all these missions, the challenge is to find suitable and adequate multi-band image processing procedures able to fully exploit the huge amount of data available. In this paper we present a new fast, robust and effective automated approach for oil-spill monitoring starting from data collected at different bands, polarizations and spatial resolutions. A combination of Weibull Multiplicative Model (WMM), Pulse Coupled Neural Network (PCNN) and Multi-Layer Perceptron (MLP) techniques is proposed for achieving the aforementioned goals. One of the most innovative ideas is to separate the dark spot detection process into two main steps, WMM enhancement and PCNN segmentation. The complete processing chain has been applied to a data set containing C-band (ERS-SAR, ENVISAT ASAR), X-band (Cosmo-SkyMed and TerraSAR-X), and L-band (UAVSAR) images, for an overall total of more than 200 images.

  13. The Dark Energy Survey Image Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Morganson, E.; Gruendl, R. A.; Menanteau, F.; Carrasco Kind, M.; Chen, Y.-C.; Daues, G.; Drlica-Wagner, A.; Friedel, D. N.; Gower, M.; Johnson, M. W. G.; Johnson, M. D.; Kessler, R.; Paz-Chinchón, F.; Petravick, D.; Pond, C.; Yanny, B.; Allam, S.; Armstrong, R.; Barkhouse, W.; Bechtol, K.; Benoit-Lévy, A.; Bernstein, G. M.; Bertin, E.; Buckley-Geer, E.; Covarrubias, R.; Desai, S.; Diehl, H. T.; Goldstein, D. A.; Gruen, D.; Li, T. S.; Lin, H.; Marriner, J.; Mohr, J. J.; Neilsen, E.; Ngeow, C.-C.; Paech, K.; Rykoff, E. S.; Sako, M.; Sevilla-Noarbe, I.; Sheldon, E.; Sobreira, F.; Tucker, D. L.; Wester, W.; DES Collaboration

    2018-07-01

    The Dark Energy Survey (DES) is a five-year optical imaging campaign with the goal of understanding the origin of cosmic acceleration. DES performs a ∼5000 deg2 survey of the southern sky in five optical bands (g, r, i, z, Y) to a depth of ∼24th magnitude. Contemporaneously, DES performs a deep, time-domain survey in four optical bands (g, r, i, z) over ∼27 deg2. DES exposures are processed nightly with an evolving data reduction pipeline and evaluated for image quality to determine if they need to be retaken. Difference imaging and transient source detection are also performed in the time domain component nightly. On a bi-annual basis, DES exposures are reprocessed with a refined pipeline and coadded to maximize imaging depth. Here we describe the DES image processing pipeline in support of DES science, as a reference for users of archival DES data, and as a guide for future astronomical surveys.

  14. Computer-based image analysis of one-dimensional electrophoretic gels used for the separation of DNA restriction fragments.

    PubMed Central

    Gray, A J; Beecher, D E; Olson, M V

    1984-01-01

    A stand-alone, interactive computer system has been developed that automates the analysis of ethidium bromide-stained agarose and acrylamide gels on which DNA restriction fragments have been separated by size. High-resolution digital images of the gels are obtained using a camera that contains a one-dimensional, 2048-pixel photodiode array that is mechanically translated through 2048 discrete steps in a direction perpendicular to the gel lanes. An automatic band-detection algorithm is used to establish the positions of the gel bands. A color-video graphics system, on which both the gel image and a variety of operator-controlled overlays are displayed, allows the operator to visualize and interact with critical stages of the analysis. The principal interactive steps involve defining the regions of the image that are to be analyzed and editing the results of the band-detection process. The system produces a machine-readable output file that contains the positions, intensities, and descriptive classifications of all the bands, as well as documentary information about the experiment. This file is normally further processed on a larger computer to obtain fragment-size assignments. PMID: 6320097

  15. Multispectral image sharpening using a shift-invariant wavelet transform and adaptive processing of multiresolution edges

    USGS Publications Warehouse

    Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.

    2002-01-01

    Enhanced false color images from mid-IR, near-IR (NIR), and visible bands of the Landsat thematic mapper (TM) are commonly used for visually interpreting land cover type. Described here is a technique for sharpening or fusion of NIR with higher resolution panchromatic (Pan) that uses a shift-invariant implementation of the discrete wavelet transform (SIDWT) and a reported pixel-based selection rule to combine coefficients. There can be contrast reversals (e.g., at soil-vegetation boundaries between NIR and visible band images) and consequently degraded sharpening and edge artifacts. To improve performance for these conditions, I used a local area-based correlation technique originally reported for comparing image-pyramid-derived edges for the adaptive processing of wavelet-derived edge data. Also, using the redundant data of the SIDWT improves edge data generation. There is additional improvement because sharpened subband imagery is used with the edge-correlation process. A reported technique for sharpening three-band spectral imagery used forward and inverse intensity, hue, and saturation transforms and wavelet-based sharpening of intensity. This technique had limitations with opposite contrast data, and in this study sharpening was applied to single-band multispectral-Pan image pairs. Sharpening used simulated 30-m NIR imagery produced by degrading the spatial resolution of a higher resolution reference. Performance, evaluated by comparison between sharpened and reference image, was improved when sharpened subband data were used with the edge correlation.

  16. Space Radar Image of Kilauea, Hawaii - interferometry 1

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This X-band image of the volcano Kilauea was taken on October 4, 1994, by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar. The area shown is about 9 kilometers by 13 kilometers (5.5 miles by 8 miles) and is centered at about 19.58 degrees north latitude and 155.55 degrees west longitude. This image and a similar image taken during the first flight of the radar instrument on April 13, 1994 were combined to produce the topographic information by means of an interferometric process. This is a process by which radar data acquired on different passes of the space shuttle is overlaid to obtain elevation information. Three additional images are provided showing an overlay of radar data with interferometric fringes; a three-dimensional image based on altitude lines; and, finally, a topographic view of the region. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V.(DLR), the major partner in science, operations and data processing of X-SAR. The Instituto Ricerca Elettromagnetismo Componenti Elettronici (IRECE) at the University of Naples was a partner in interferometry analysis.
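
    A bare-bones sketch of the interferometric overlay described above: the phase of the conjugate product of two co-registered complex SAR images gives the wrapped interferometric phase, and its normalized magnitude gives coherence. Real processing also removes the flat-earth phase and unwraps the result; the data below are synthetic.

```python
import numpy as np

def interferogram(slc1, slc2):
    """Form an interferogram from two co-registered single-look complex SAR
    images and estimate a (global) coherence value."""
    product = slc1 * np.conj(slc2)
    phase = np.angle(product)                  # wrapped interferometric phase
    coherence = np.abs(product.mean()) / np.sqrt(
        (np.abs(slc1) ** 2).mean() * (np.abs(slc2) ** 2).mean())
    return phase, coherence

rng = np.random.default_rng(8)
slc1 = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
slc2 = slc1 * np.exp(-1j * 0.5)                # second pass shifted by a constant phase
phase, coh = interferogram(slc1, slc2)
print(round(float(phase.mean()), 2), round(float(coh), 2))   # ~0.5, ~1.0
```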

  17. SAR studies in the Yuma Desert, Arizona: Sand penetration, geology, and the detection of military ordnance debris

    USGS Publications Warehouse

    Schaber, G.G.

    1999-01-01

    Synthetic Aperture Radar (SAR) images acquired over part of the Yuma Desert in southwestern Arizona demonstrate the ability of C-band (5.7-cm wavelength), L-band (24.5 cm), and P-band (68 cm) AIRSAR signals to backscatter from increasingly greater depths reaching several meters in blow sand and sandy alluvium. AIRSAR images obtained within the Barry M. Goldwater Bombing and Gunnery Range near Yuma, Arizona, show a total reversal of C- and P-band backscatter contrast (image tone) for three distinct geologic units. This phenomenon results from an increasingly greater depth of radar imaging with increasing radar wavelength. In the case of sandy- and small pebble-alluvium surfaces mantled by up to several meters of blow sand, backscatter increases directly with SAR wavelength as a result of volume scattering from a calcic soil horizon at shallow depth and by volume scattering from the root mounds of healthy desert vegetation that locally stabilize blow sand. AIRSAR images obtained within the military range are also shown to be useful for detecting metallic military ordnance debris that is located either at the surface or covered by tens of centimeters to several meters of blow sand. The degree of detectability of this ordnance increases with SAR wavelength and is clearly maximized on P-band images that are processed in the cross-polarized mode (HV). This effect is attributed to maximum signal penetration at P-band and the enhanced PHV image contrast between the radar-bright ordnance debris and the radar-dark sandy desert. This article focuses on the interpretation of high resolution AIRSAR images but also compares these airborne SAR images with those acquired from spacecraft sensors such as ERS-SAR and Space Radar Laboratory (SIR-C/X-SAR).

  18. Landsat Thematic Mapper Image Mosaic of Colorado

    USGS Publications Warehouse

    Cole, Christopher J.; Noble, Suzanne M.; Blauer, Steven L.; Friesen, Beverly A.; Bauer, Mark A.

    2010-01-01

    The U.S. Geological Survey (USGS) Rocky Mountain Geographic Science Center (RMGSC) produced a seamless, cloud-minimized remotely-sensed image spanning the State of Colorado. Multiple orthorectified Landsat 5 Thematic Mapper (TM) scenes collected during 2006-2008 were spectrally normalized via reflectance transformation and linear regression based upon pseudo-invariant features (PIFS) following the removal of clouds. Individual Landsat scenes were then mosaicked to form a six-band image composite spanning the visible to shortwave infrared spectrum. This image mosaic, presented here, will also be used to create a conifer health classification for Colorado in Scientific Investigations Map 3103. An archive of past and current Landsat imagery exists and is available to the scientific community (http://glovis.usgs.gov/), but significant pre-processing was required to produce a statewide mosaic from this information. Much of the data contained perennial cloud cover that complicated analysis and classification efforts. Existing Landsat mosaic products, typically three band image composites, did not include the full suite of multispectral information necessary to produce this assessment, and were derived using data collected in 2001 or earlier. A six-band image mosaic covering Colorado was produced. This mosaic includes blue (band 1), green (band 2), red (band 3), near infrared (band 4), and shortwave infrared information (bands 5 and 7). The image composite shown here displays three of the Landsat bands (7, 4, and 2), which are sensitive to the shortwave infrared, near infrared, and green ranges of the electromagnetic spectrum. Vegetation appears green in this image, while water looks black, and unforested areas appear pink. The lines that may be visible in the on-screen version of the PDF are an artifact of the export methods used to create this file. The file should be viewed at 150 percent zoom or greater for optimum viewing.

  19. Decorrelation of L-band and C-band interferometry to volcanic risk prevention

    NASA Astrophysics Data System (ADS)

    Malinverni, E. S.; Sandwell, D.; Tassetti, A. N.; Cappelletti, L.

    2013-10-01

    SAR has several strong key features: fine spatial resolution/precision and high temporal pass frequency. Moreover, the InSAR technique allows the accurate detection of ground deformations. This high-potential technology can be invaluable for studying volcanoes: it provides important information on pre-eruption surface deformation, improving the understanding of volcanic processes and the ability to predict eruptions. As a downside, SAR measurements are influenced by artifacts such as atmospheric effects or bad topographic data. Correlation gives a measure of these interferences, quantifying the similarity of the phase of two SAR images. Different approaches exist to reduce these errors, but the main concern remains the possibility of correlating images with different acquisition times: snow-covered or heavily vegetated areas produce seasonal changes on the surface. Minimizing the time between passes partly limits decorrelation. However, images with a short temporal baseline are not always available, and some artifacts affecting correlation are time-independent. This work studies the correlation of pairs of SAR images, focusing on the influence of surface and climate conditions, especially snow coverage and temperature. Furthermore, the effects of the acquisition band on correlation are taken into account by comparing L-band and C-band images. All the chosen images cover most of the Yellowstone caldera (USA) over a span of 4 years, sampling all the seasons. Interferograms and correlation maps are generated. To isolate temporal decorrelation, pairs of images with the shortest baseline are chosen. Correlation maps are analyzed in relation to snow depth and temperature. Results obtained with the ENVISAT and ERS satellites (C-band) are compared with those from ALOS (L-band). Results show good performance during winter and poor performance over wet snow (spring and fall). During summer, both L-band and C-band maintain good coherence, with L-band performing better over vegetation.
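
    As a rough illustration of the correlation measure discussed above, the sketch below estimates interferometric coherence between two co-registered single-look complex SAR images with a boxcar averaging window; the window size and variable names are assumptions, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(s1, s2, win=5):
    """Estimate |gamma| for two co-registered complex SLC images of equal shape."""
    num = s1 * np.conj(s2)
    # Average real and imaginary parts separately (uniform_filter expects real input).
    num_avg = uniform_filter(num.real, win) + 1j * uniform_filter(num.imag, win)
    p1 = uniform_filter(np.abs(s1) ** 2, win)
    p2 = uniform_filter(np.abs(s2) ** 2, win)
    return np.abs(num_avg) / np.sqrt(p1 * p2 + 1e-12)
```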

  20. White-Light Optical Information Processing and Holography.

    DTIC Science & Technology

    1982-05-03

    artifact noise. However, the deblurring spatial filters that we used were for a narrow spectral band centered at 5154 Å green light. To compensate for the scaling... Keywords: White-Light Processing, White-Light Holography, Image Processing, Optical Signal Processing, Image Subtraction, Image Deblurring. ...optical processing technique, we had shown that the incoherent source technique provides better image quality and very low coherent artifact noise

  1. Principal components colour display of ERTS imagery

    NASA Technical Reports Server (NTRS)

    Taylor, M. M.

    1974-01-01

    In the technique presented, colours are not derived from single bands, but rather from independent linear combinations of the bands. Using a simple model of the processing done by the visual system, three informationally independent linear combinations of the four ERTS bands are mapped onto the three visual colour dimensions of brightness, redness-greenness and blueness-yellowness. The technique permits user-specific transformations which enhance particular features, but this is not usually needed, since a single transformation provides a picture which conveys much of the information implicit in the ERTS data. Examples of experimental vector images with matched individual band images are shown.
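
    A minimal sketch of the band-combination idea, assuming NumPy and a direct mapping of the first three principal components to display channels (the paper maps its components onto the opponent dimensions of brightness, redness-greenness, and blueness-yellowness instead):

```python
import numpy as np

def pc_color_composite(bands):
    """bands: array of shape (4, rows, cols); returns a (rows, cols, 3) display image in [0, 1]."""
    n, r, c = bands.shape
    x = bands.reshape(n, -1).astype(float)
    x -= x.mean(axis=1, keepdims=True)
    vals, vecs = np.linalg.eigh(np.cov(x))     # eigenvalues in ascending order
    pcs = vecs[:, ::-1].T @ x                  # principal components, largest variance first
    rgb = pcs[:3].reshape(3, r, c).transpose(1, 2, 0)
    lo, hi = rgb.min(axis=(0, 1)), rgb.max(axis=(0, 1))
    return (rgb - lo) / (hi - lo + 1e-12)      # stretch each component for display
```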

  2. VizieR Online Data Catalog: Palomar Transient Factory photometric observations (Arcavi+, 2014)

    NASA Astrophysics Data System (ADS)

    Arcavi, I.; Gal-Yam, A.; Sullivan, M.; Pan, Y.-C.; Cenko, S. B.; Horesh, A.; Ofek, E. O.; De Cia, A.; Yan, L.; Yang, C.-W.; Howell, D. A.; Tal, D.; Kulkarni, S. R.; Tendulkar, S. P.; Tang, S.; Xu, D.; Sternberg, A.; Cohen, J. G.; Bloom, J. S.; Nugent, P. E.; Kasliwal, M. M.; Perley, D. A.; Quimby, R. M.; Miller, A. A.; Theissen, C. A.; Laher, R. R.

    2017-04-01

    All the events from our archival search were discovered by the Palomar 48 inch Oschin Schmidt Telescope (P48) as part of the PTF survey using the Mould R-band filter. We obtained photometric observations in the R and g bands using P48, and in g, r, and i bands with the Palomar 60 inch telescope (P60; Cenko et al. 2006PASP..118.1396C). Initial processing of the P48 images was conducted by the Infrared Processing and Analysis Center (IPAC; Laher et al. 2014PASP..126..674L). Photometry was extracted using a custom PSF fitting routine (e.g., Sullivan et al. 2006AJ....131..960S), which measures the transient flux after image subtraction (using template images taken before the outburst or long after it faded). (1 data file).

  3. Detection of dual-band infrared small target based on joint dynamic sparse representation

    NASA Astrophysics Data System (ADS)

    Zhou, Jinwei; Li, Jicheng; Shi, Zhiguang; Lu, Xiaowei; Ren, Dongwei

    2015-10-01

    Infrared small target detection is a crucial yet difficult issue in aeronautic and astronautic applications. Sparse representation is an important mathematical tool and has been used extensively in image processing in recent years. In this paper, joint sparse representation is applied to dual-band infrared dim target detection. Firstly, according to the characteristics of dim targets in dual-band infrared images, a two-dimensional Gaussian intensity model is used to construct the target dictionary; the dictionary is then divided into sub-classes according to the position of the Gaussian function's center point in the image block. Exploiting the fact that dual-band small target detection can use the same dictionary, and that the sparsity lies at the sub-class level rather than the atom level, the detection of targets in dual-band infrared images is converted into a joint dynamic sparse representation problem, with dynamic active sets used to describe the sparsity constraint on the coefficients. Two modified sparsity concentration index (SCI) criteria are proposed to evaluate whether targets exist in the images. Experiments show that the proposed algorithm achieves better detection performance, and that dual-band detection is much more robust to noise than single-band detection. Moreover, the proposed method can be extended to multi-spectrum small target detection.

  4. Robust and adaptive band-to-band image transform of UAS miniature multi-lens multispectral camera

    NASA Astrophysics Data System (ADS)

    Jhan, Jyun-Ping; Rau, Jiann-Yeou; Haala, Norbert

    2018-03-01

    Utilizing miniature multispectral (MS) or hyperspectral (HS) cameras by mounting them on an Unmanned Aerial System (UAS) has the benefits of convenience and flexibility to collect remote sensing imagery for precision agriculture, vegetation monitoring, and environmental investigation applications. Most miniature MS cameras adopt a multi-lens structure to record discrete MS bands of visible and invisible information. The differences in lens distortion, mounting positions, and viewing angles among lenses mean that the acquired original MS images have significant band misregistration errors. We have developed a Robust and Adaptive Band-to-Band Image Transform (RABBIT) method for dealing with the band co-registration of various types of miniature multi-lens multispectral cameras (Mini-MSCs) to obtain band co-registered MS imagery for remote sensing applications. The RABBIT utilizes modified projective transformation (MPT) to transfer the multiple image geometry of a multi-lens imaging system to one sensor geometry, and combines this with a robust and adaptive correction (RAC) procedure to correct several systematic errors and to obtain sub-pixel accuracy. This study applies three state-of-the-art Mini-MSCs to evaluate the RABBIT method's performance, specifically the Tetracam Miniature Multiple Camera Array (MiniMCA), Micasense RedEdge, and Parrot Sequoia. Six MS datasets acquired at different target distances, dates, and locations are also used to prove its reliability and applicability. Results prove that RABBIT is feasible for different types of Mini-MSCs, with accurate, robust, and rapid image processing.

  5. Plant Chlorophyll Content Imager with Reference Detection Signals

    NASA Technical Reports Server (NTRS)

    Spiering, Bruce A. (Inventor); Carter, Gregory A. (Inventor)

    2000-01-01

    A portable plant chlorophyll imaging system is described which collects light reflected from a target plant and separates the collected light into two different wavelength bands. These wavelength bands, or channels, are described as having center wavelengths of 700 nm and 840 nm. The light collected in these two channels is processed using synchronized video cameras. A controller provided in the system compares the level of light of video images reflected from a target plant with a reference level of light from a source illuminating the plant. The percent of reflection in the two separate wavelength bands from a target plant are compared to provide a ratio video image which indicates a relative level of plant chlorophyll content and physiological stress. Multiple display modes are described for viewing the video images.
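
    A minimal sketch of the two-band comparison described above, assuming the 700 nm and 840 nm channels are available as NumPy arrays; how the resulting ratio maps to chlorophyll level is not specified here and would follow the patent's calibration.

```python
import numpy as np

def band_ratio_image(r700, r840, eps=1e-6):
    """Per-pixel ratio of the 700 nm channel to the 840 nm reference channel."""
    return np.asarray(r700, dtype=float) / (np.asarray(r840, dtype=float) + eps)
```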

  6. Automated calibration of the Suomi National Polar-Orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) reflective solar bands

    NASA Astrophysics Data System (ADS)

    Rausch, Kameron; Houchin, Scott; Cardema, Jason; Moy, Gabriel; Haas, Evan; De Luccia, Frank J.

    2013-12-01

    National Polar-Orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) reflective bands are currently calibrated via weekly updates to look-up tables (LUTs) utilized by operational ground processing in the Joint Polar Satellite System Interface Data Processing Segment (IDPS). The parameters in these LUTs must be predicted ahead 2 weeks and cannot adequately track the dynamically varying response characteristics of the instrument. As a result, spurious "predict-ahead" calibration errors of the order of 0.1% or greater are routinely introduced into the calibrated reflectances and radiances produced by IDPS in sensor data records (SDRs). Spurious calibration errors of this magnitude adversely impact the quality of downstream environmental data records (EDRs) derived from VIIRS SDRs such as Ocean Color/Chlorophyll and cause increased striping and band-to-band radiometric calibration uncertainty of SDR products. A novel algorithm that fully automates reflective band calibration has been developed for implementation in IDPS in late 2013. Automating the reflective solar band (RSB) calibration is extremely challenging and represents a significant advancement over the manner in which RSB calibration has traditionally been performed in heritage instruments such as the Moderate Resolution Imaging Spectroradiometer. The automated algorithm applies calibration data almost immediately after their acquisition by the instrument from views of space and onboard calibration sources, thereby eliminating the predict-ahead errors associated with the current offline calibration process. This new algorithm, when implemented, will significantly improve the quality of VIIRS reflective band SDRs and consequently the quality of EDRs produced from these SDRs.

  7. Spectrally-encoded color imaging

    PubMed Central

    Kang, DongKyun; Yelin, Dvir; Bouma, Brett E.; Tearney, Guillermo J.

    2010-01-01

    Spectrally-encoded endoscopy (SEE) is a technique for ultraminiature endoscopy that encodes each spatial location on the sample with a different wavelength. One limitation of previous incarnations of SEE is that it inherently creates monochromatic images, since the spectral bandwidth is expended in the spatial encoding process. Here we present a spectrally-encoded imaging system that has color imaging capability. The new imaging system utilizes three distinct red, green, and blue spectral bands that are configured to illuminate the grating at different incident angles. By careful selection of the incident angles, the three spectral bands can be made to overlap on the sample. To demonstrate the method, a bench-top system was built, comprising a 2400-lpmm grating illuminated by three 525-μm-diameter beams with three different spectral bands. Each spectral band had a bandwidth of 75 nm, producing 189 resolvable points. A resolution target, color phantoms, and excised swine small intestine were imaged to validate the system's performance. The color SEE system showed qualitatively and quantitatively similar color imaging performance to that of a conventional digital camera. PMID:19688002

  8. Space Radar Image of Kilauea, Hawaii - Interferometry 1

    NASA Image and Video Library

    1999-05-01

    This X-band image of the volcano Kilauea was taken on October 4, 1994, by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar. The area shown is about 9 kilometers by 13 kilometers (5.5 miles by 8 miles) and is centered at about 19.58 degrees north latitude and 155.55 degrees west longitude. This image and a similar image taken during the first flight of the radar instrument on April 13, 1994 were combined to produce the topographic information by means of an interferometric process. This is a process by which radar data acquired on different passes of the space shuttle is overlaid to obtain elevation information. Three additional images are provided showing an overlay of radar data with interferometric fringes; a three-dimensional image based on altitude lines; and, finally, a topographic view of the region. http://photojournal.jpl.nasa.gov/catalog/PIA01763

  9. Space Radar Image of Mammoth Mountain, California

    NASA Image and Video Library

    1999-05-01

    This false-color composite radar image of the Mammoth Mountain area in the Sierra Nevada Mountains, California, was acquired by the Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar aboard the space shuttle Endeavour on its 67th orbit on October 3, 1994. The image is centered at 37.6 degrees north latitude and 119.0 degrees west longitude. The area is about 39 kilometers by 51 kilometers (24 miles by 31 miles). North is toward the bottom, about 45 degrees to the right. In this image, red was created using L-band (horizontally transmitted/vertically received) polarization data; green was created using C-band (horizontally transmitted/vertically received) polarization data; and blue was created using C-band (horizontally transmitted and received) polarization data. Crawley Lake appears dark at the center left of the image, just above or south of Long Valley. The Mammoth Mountain ski area is visible at the top right of the scene. The red areas correspond to forests, the dark blue areas are bare surfaces and the green areas are short vegetation, mainly brush. The purple areas at the higher elevations in the upper part of the scene are discontinuous patches of snow cover from a September 28 storm. New, very thin snow was falling before and during the second space shuttle pass. In parallel with the operational SIR-C data processing, an experimental effort is being conducted to test SAR data processing using the Jet Propulsion Laboratory's massively parallel supercomputing facility, centered around the Cray Research T3D. These experiments will assess the abilities of large supercomputers to produce high throughput Synthetic Aperture Radar processing in preparation for upcoming data-intensive SAR missions. The image released here was produced as part of this experimental effort. http://photojournal.jpl.nasa.gov/catalog/PIA01746

  10. Fast algorithms of constrained Delaunay triangulation and skeletonization for band images

    NASA Astrophysics Data System (ADS)

    Zeng, Wei; Yang, ChengLei; Meng, XiangXu; Yang, YiJun; Yang, XiuKun

    2004-09-01

    For the boundary polygons of band images, a fast constrained Delaunay triangulation algorithm is presented, and based on it an efficient skeletonization algorithm is designed. In the process of triangulation, the characteristics of the uniform grid structure and the band polygons are utilized to improve the speed of computing the third vertex for an edge within its local range when forming a Delaunay triangle. The final skeleton of the band image is derived by reducing each triangle to local skeleton lines according to its topology. The algorithm, with a simple data structure, is easy to understand and implement. Moreover, it can deal with multiply connected polygons on the fly. Experiments show that there is a nearly linear dependence between triangulation time and the size of randomly generated band polygons. Correspondingly, the skeletonization algorithm is also an improvement over previously known results in terms of time. Some practical examples are given in the paper.

  11. The Dark Energy Survey Image Processing Pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morganson, E.; et al.

    The Dark Energy Survey (DES) is a five-year optical imaging campaign with the goal of understanding the origin of cosmic acceleration. DES performs a 5000 square degree survey of the southern sky in five optical bands (g,r,i,z,Y) to a depth of ~24th magnitude. Contemporaneously, DES performs a deep, time-domain survey in four optical bands (g,r,i,z) over 27 square degrees. DES exposures are processed nightly with an evolving data reduction pipeline and evaluated for image quality to determine if they need to be retaken. Difference imaging and transient source detection are also performed in the time domain component nightly. On a bi-annual basis, DES exposures are reprocessed with a refined pipeline and coadded to maximize imaging depth. Here we describe the DES image processing pipeline in support of DES science, as a reference for users of archival DES data, and as a guide for future astronomical surveys.

  12. Super-resolution reconstruction of hyperspectral images.

    PubMed

    Akgun, Toygar; Altunbasak, Yucel; Mersereau, Russell M

    2005-11-01

    Hyperspectral images are used for aerial and space imagery applications, including target detection, tracking, agricultural, and natural resource exploration. Unfortunately, atmospheric scattering, secondary illumination, changing viewing angles, and sensor noise degrade the quality of these images. Improving their resolution has a high payoff, but applying super-resolution techniques separately to every spectral band is problematic for two main reasons. First, the number of spectral bands can be in the hundreds, which increases the computational load excessively. Second, considering the bands separately does not make use of the information that is present across them. Furthermore, separate band super-resolution does not make use of the inherent low dimensionality of the spectral data, which can effectively be used to improve the robustness against noise. In this paper, we introduce a novel super-resolution method for hyperspectral images. An integral part of our work is to model the hyperspectral image acquisition process. We propose a model that enables us to represent the hyperspectral observations from different wavelengths as weighted linear combinations of a small number of basis image planes. Then, a method for applying super resolution to hyperspectral images using this model is presented. The method fuses information from multiple observations and spectral bands to improve spatial resolution and reconstruct the spectrum of the observed scene as a combination of a small number of spectral basis functions.
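
    A minimal sketch of the low-dimensional spectral model described above, assuming an SVD-based construction (the paper's own estimation procedure may differ): each band is represented as a weighted linear combination of a small number of basis image planes.

```python
import numpy as np

def spectral_basis_model(cube, k=5):
    """cube: (bands, rows, cols). Returns k basis planes and per-band weights
    such that the flattened cube is approximately weights @ planes."""
    b, r, c = cube.shape
    x = cube.reshape(b, -1).astype(float)
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    weights = u[:, :k] * s[:k]            # (bands, k) combination weights
    planes = vt[:k].reshape(k, r, c)      # basis image planes
    return planes, weights
```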

  13. Radiometric and geometric assessment of data from the RapidEye constellation of satellites

    USGS Publications Warehouse

    Chander, Gyanesh; Haque, Md. Obaidul; Sampath, Aparajithan; Brunn, A.; Trosset, G.; Hoffmann, D.; Roloff, S.; Thiele, M.; Anderson, C.

    2013-01-01

    To monitor land surface processes over a wide range of temporal and spatial scales, it is critical to have coordinated observations of the Earth's surface using imagery acquired from multiple spaceborne imaging sensors. The RapidEye (RE) satellite constellation acquires high-resolution satellite images covering the entire globe within a very short period of time by sensors identical in construction and cross-calibrated to each other. To evaluate the RE high-resolution Multi-spectral Imager (MSI) sensor capabilities, a cross-comparison between the RE constellation of sensors was performed first using image statistics based on large common areas observed over pseudo-invariant calibration sites (PICS) by the sensors and, second, by comparing the on-orbit radiometric calibration temporal trending over a large number of calibration sites. For any spectral band, the individual responses measured by the five satellites of the RE constellation were found to differ <2–3% from the average constellation response depending on the method used for evaluation. Geometric assessment was also performed to study the positional accuracy and relative band-to-band (B2B) alignment of the image data sets. The position accuracy was assessed by comparing the RE imagery against high-resolution aerial imagery, while the B2B characterization was performed by registering each band against every other band to ensure that the proper band alignment is provided for an image product. The B2B results indicate that the internal alignments of these five RE bands are in agreement, with bands typically registered to within 0.25 pixels of each other or better.

  14. Automated Registration of Images from Multiple Bands of Resourcesat-2 Liss-4 camera

    NASA Astrophysics Data System (ADS)

    Radhadevi, P. V.; Solanki, S. S.; Jyothi, M. V.; Varadan, G.

    2014-11-01

    Continuous and automated co-registration and geo-tagging of images from multiple bands of the Liss-4 camera is one of the interesting challenges of Resourcesat-2 data processing. The three arrays of the Liss-4 camera are physically separated in the focal plane in the along-track direction. Thus, the same line on the ground will be imaged by the extreme bands with a time interval of as much as 2.1 seconds. During this time, the satellite would have covered a distance of about 14 km on the ground and the earth would have rotated through an angle of 30". Yaw steering is done to compensate for the earth-rotation effects, thus ensuring a first-level registration between the bands. However, this does not achieve perfect co-registration because of attitude fluctuations, satellite movement, terrain topography, PSM steering, and small variations in the angular placement of the CCD lines (from the pre-launch values) in the focal plane. This paper describes an algorithm based on the viewing geometry of the satellite to perform automatic band-to-band registration of the Liss-4 MX image of Resourcesat-2 at Level 1A. The algorithm uses the principles of photogrammetric collinearity equations. The model employs an orbit trajectory and attitude fitting with polynomials, followed by direct geo-referencing with a global DEM, in which every pixel in the middle band is mapped to a particular position on the surface of the earth using the given attitude. Attitude is estimated by interpolating measurement data obtained from star sensors and gyros, which are sampled at low frequency. When the sampling rate of attitude information is low compared to the frequency of jitter or micro-vibration, images processed by geometric correction suffer from distortion. Therefore, a set of conjugate points is identified between the bands to perform a relative attitude error estimation and correction, which ensures the internal accuracy and co-registration of the bands. Accurate calculation of the exterior orientation parameters with GCPs is not required. Instead, the relative line-of-sight vector of each detector in the different bands in relation to the payload is addressed. With this method, a band-to-band registration accuracy of better than 0.3 pixels could be achieved even in high-relief areas.

  15. A discussion on the use of X-band SAR images in marine applications

    NASA Astrophysics Data System (ADS)

    Schiavulli, D.; Sorrentino, A.; Migliaccio, M.

    2012-10-01

    The Synthetic Aperture Radar (SAR) is able to generate images of the sea surface that can be exploited to extract geophysical information of environmental interest. In order to enhance the operational use of these data in marine applications, the revisit time must be improved. This goal can be achieved by using SAR virtual or real constellations and/or by exploiting new antenna technologies that allow a wide swath and fine resolution. Within this framework, the presence of the Italian and German X-band SAR constellations is of special interest, while the new SAR technologies are not yet operational. Although SAR images are considered to be independent of weather conditions, this is only partially true at higher frequencies, e.g. X-band. In fact, observations can present signatures corresponding to high-intensity precipitating clouds, i.e. rain cells. Further, ScanSAR images may be characterized by the presence of processing artifacts, called scalloping, that corrupt image interpretation. In this paper we review these key facts, which are at the basis of an effective use of X-band SAR images for marine applications.

  16. HPT: A High Spatial Resolution Multispectral Sensor for Microsatellite Remote Sensing

    PubMed Central

    Takahashi, Yukihiro; Sakamoto, Yuji; Kuwahara, Toshinori

    2018-01-01

    Although nano/microsatellites have great potential as remote sensing platforms, the spatial and spectral resolutions of an optical payload instrument are limited. In this study, a high spatial resolution multispectral sensor, the High-Precision Telescope (HPT), was developed for the RISING-2 microsatellite. The HPT has four image sensors: three in the visible region of the spectrum used for the composition of true color images, and a fourth in the near-infrared region, which employs liquid crystal tunable filter (LCTF) technology for wavelength scanning. Band-to-band image registration methods have also been developed for the HPT and implemented in the image processing procedure. The processed images were compared with other satellite images, and proven to be useful in various remote sensing applications. Thus, LCTF technology can be considered an innovative tool that is suitable for future multi/hyperspectral remote sensing by nano/microsatellites. PMID:29463022

  17. Parallel exploitation of a spatial-spectral classification approach for hyperspectral images on RVC-CAL

    NASA Astrophysics Data System (ADS)

    Lazcano, R.; Madroñal, D.; Fabelo, H.; Ortega, S.; Salvador, R.; Callicó, G. M.; Juárez, E.; Sanz, C.

    2017-10-01

    Hyperspectral Imaging (HI) assembles high resolution spectral information from hundreds of narrow bands across the electromagnetic spectrum, thus generating 3D data cubes in which each spatial pixel gathers the spectral information of its reflectance. As a result, each image is composed of large volumes of data, which turns its processing into a challenge, as performance requirements have been continuously tightened. For instance, new HI applications demand real-time responses. Hence, parallel processing becomes a necessity to achieve this requirement, so the intrinsic parallelism of the algorithms must be exploited. In this paper, a spatial-spectral classification approach has been implemented using a dataflow language known as RVC-CAL. This language represents a system as a set of functional units, and its main advantage is that it simplifies the parallelization process by mapping the different blocks over different processing units. The spatial-spectral classification approach aims at refining the classification results previously obtained by using a K-Nearest Neighbors (KNN) filtering process, in which both the pixel spectral value and the spatial coordinates are considered. To do so, KNN needs two inputs: a one-band representation of the hyperspectral image and the classification results provided by a pixel-wise classifier. Thus, the spatial-spectral classification algorithm is divided into three different stages: a Principal Component Analysis (PCA) algorithm for computing the one-band representation of the image, a Support Vector Machine (SVM) classifier, and the KNN-based filtering algorithm. The parallelization of these algorithms shows promising results in terms of computational time, as mapping them over different cores presents a speedup of 2.69x when using 3 cores. Consequently, experimental results demonstrate that real-time processing of hyperspectral images is achievable.
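
    A minimal, purely sequential sketch of the three-stage chain described above (PCA one-band representation, pixel-wise SVM, KNN spatial-spectral filtering), assuming scikit-learn and SciPy and ignoring the RVC-CAL dataflow parallelization; the joint feature space, its scaling, and the neighbour count are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def spatial_spectral_classify(cube, train_idx, train_labels, k=8):
    """cube: (rows, cols, bands); train_idx: flat indices of labelled pixels;
    train_labels: non-negative integer class labels."""
    r, c, b = cube.shape
    x = cube.reshape(-1, b).astype(float)
    one_band = PCA(n_components=1).fit_transform(x).ravel()                    # stage 1
    svm_labels = SVC(kernel="rbf").fit(x[train_idx], train_labels).predict(x)  # stage 2
    # Stage 3: majority vote among neighbours in a joint (row, col, PC1) space.
    rows, cols = np.indices((r, c))
    feats = np.column_stack([rows.ravel(), cols.ravel(), one_band])            # feature scaling is application-dependent
    _, nn = cKDTree(feats).query(feats, k=k)
    refined = np.array([np.bincount(svm_labels[i]).argmax() for i in nn])
    return refined.reshape(r, c)
```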

  18. Performance assessment of multi-frequency processing of ICU chest images for enhanced visualization of tubes and catheters

    NASA Astrophysics Data System (ADS)

    Wang, Xiaohui; Couwenhoven, Mary E.; Foos, David H.; Doran, James; Yankelevitz, David F.; Henschke, Claudia I.

    2008-03-01

    An image-processing method has been developed to improve the visibility of tube and catheter features in portable chest x-ray (CXR) images captured in the intensive care unit (ICU). The image-processing method is based on a multi-frequency approach, wherein the input image is decomposed into different spatial frequency bands, and those bands that contain the tube and catheter signals are individually enhanced by nonlinear boosting functions. Using a random sampling strategy, 50 cases were retrospectively selected for the study from a large database of portable CXR images that had been collected from multiple institutions over a two-year period. All images used in the study were captured using photo-stimulable, storage phosphor computed radiography (CR) systems. Each image was processed two ways. The images were processed with default image processing parameters such as those used in clinical settings (control). The 50 images were then separately processed using the new tube and catheter enhancement algorithm (test). Three board-certified radiologists participated in a reader study to assess differences in both detection-confidence performance and diagnostic efficiency between the control and test images. Images were evaluated on a diagnostic-quality, 3-megapixel monochrome monitor. Two scenarios were studied: the baseline scenario, representative of today's workflow (a single-control image presented with the window/level adjustments enabled) vs. the test scenario (a control/test image pair presented with a toggle enabled and the window/level settings disabled). The radiologists were asked to read the images in each scenario as they normally would for clinical diagnosis. Trend analysis indicates that the test scenario offers improved reading efficiency while providing as good or better detection capability compared to the baseline scenario.
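
    A minimal sketch of the multi-frequency idea described above (not the vendor algorithm): decompose the radiograph into Gaussian band-pass layers and recombine them after applying a nonlinear gain to the bands assumed to carry tube and catheter detail; the scales, gains, and the tanh boosting function are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiband_enhance(img, sigmas=(1, 2, 4, 8), gains=(1.0, 2.0, 2.0, 1.0)):
    img = np.asarray(img, dtype=float)
    layers, low = [], img
    for s in sigmas:
        blur = gaussian_filter(img, s)
        layers.append(low - blur)          # band between successive scales
        low = blur
    out = low.copy()                       # residual low-frequency image
    for band, g in zip(layers, gains):
        scale = band.std() + 1e-6
        out += g * np.tanh(band / scale) * scale   # soft, nonlinear boost of each band
    return out
```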

  19. Subband/transform functions for image processing

    NASA Technical Reports Server (NTRS)

    Glover, Daniel

    1993-01-01

    Functions for image data processing written for use with the MATLAB(TM) software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh Hadamard) and to produce spatial frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems. The transform coefficients are reordered using a simple permutation to give subbands. The low frequency subband is a low resolution version of the original image, while the higher frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first stage subbands (in the case of a four band decomposition), then a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used for image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.
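
    The functions described above are written for MATLAB; the sketch below is a rough Python equivalent of one step, assuming 8x8 blocks: a block Walsh-Hadamard transform (natural Hadamard ordering; sequency reordering omitted) followed by the permutation that collects coefficient (i, j) of every block into subband (i, j).

```python
import numpy as np
from scipy.linalg import hadamard

def wht_subbands(img, n=8):
    h = hadamard(n) / np.sqrt(n)                                  # orthonormal transform matrix (n must be a power of 2)
    rows, cols = (d - d % n for d in img.shape)                   # crop to a multiple of the block size
    img = np.asarray(img, dtype=float)[:rows, :cols]
    blocks = img.reshape(rows // n, n, cols // n, n).transpose(0, 2, 1, 3)
    coeffs = h @ blocks @ h.T                                     # 2-D transform of every block
    # Subband (i, j) is the image formed by coefficient (i, j) of all blocks.
    return coeffs.transpose(2, 3, 0, 1)                           # shape (n, n, rows//n, cols//n)
```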

  20. Mountain building processes in the Central Andes

    NASA Technical Reports Server (NTRS)

    Bloom, A. L.; Isacks, B. L.

    1986-01-01

    False color composite images of the Thematic Mapper (TM) bands 5, 4, and 2 were examined to make visual interpretations of geological features. The use of the roam mode of image display with the International Imaging Systems (IIS) System 600 image processing package running on the IIS Model 75 was very useful. Several areas in which good comparisons with ground data existed, were examined in detail. Parallel to the visual approach, image processing methods are being developed which allow the complete use of the seven TM bands. The data was organized into easily accessible files and a visual cataloging of the quads (quarter TM scenes) with preliminary registration with the best available charts for the region. The catalog has proved to be a valuable tool for the rapid scanning of quads for a specific investigation. Integration of the data into a complete approach to the problems of uplift, deformation, and magnetism in relation to the Nazca-South American plate interaction is at an initial stage.

  1. Mountain building processes in the Central Andes

    NASA Astrophysics Data System (ADS)

    Bloom, A. L.; Isacks, B. L.

    False color composite images of the Thematic Mapper (TM) bands 5, 4, and 2 were examined to make visual interpretations of geological features. The use of the roam mode of image display with the International Imaging Systems (IIS) System 600 image processing package running on the IIS Model 75 was very useful. Several areas in which good comparisons with ground data existed, were examined in detail. Parallel to the visual approach, image processing methods are being developed which allow the complete use of the seven TM bands. The data was organized into easily accessible files and a visual cataloging of the quads (quarter TM scenes) with preliminary registration with the best available charts for the region. The catalog has proved to be a valuable tool for the rapid scanning of quads for a specific investigation. Integration of the data into a complete approach to the problems of uplift, deformation, and magnetism in relation to the Nazca-South American plate interaction is at an initial stage.

  2. Alternative method for VIIRS Moon in space view process

    NASA Astrophysics Data System (ADS)

    Anderson, Samuel; Chiang, Kwofu V.; Xiong, Xiaoxiong

    2013-09-01

    The Visible Infrared Imaging Radiometer Suite (VIIRS) is a radiometric sensing instrument currently operating onboard the Suomi National Polar-orbiting Partnership (S-NPP) spacecraft. It provides high spatial-resolution images of the emitted and reflected radiation from the Earth and its atmosphere in 22 spectral bands (16 moderate resolution bands M1-M16, 5 imaging bands I1-I5, and 1 day/night pan band DNB) spanning the visible and infrared wavelengths from 412 nm to 12 μm. Just prior to each scan it makes of the Earth, the VIIRS instrument makes a measurement of deep space to serve as a background reference. These space view (SV) measurements form a crucial input to the VIIRS calibration process and are a major determinant of its accuracy. On occasion, the orientation of the Suomi NPP spacecraft coincides with the position of the moon in such a fashion that the SV measurements include light from the moon, rendering the SV measurements unusable for calibration. This paper investigates improvements to the existing baseline SV data processing algorithm of the Sensor Data Record (SDR) processing software. The proposed method makes use of a Moon-in-SV detection algorithm that identifies moon-contaminated SV data on a scan-by-scan basis. Use of this algorithm minimizes the number of SV scans that are rejected initially, so that subsequent substitution processes are always able to find alternative substitute SV scans in the near vicinity of detected moon-contaminated scans.

  3. Hyperspectral image compressing using wavelet-based method

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands. Therefore each object present in the image can be identified from its spectral response. However, such imaging produces a huge amount of data, which requires transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years. Compression of hyperspectral data cubes is an effective solution for these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation effect on the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explored the spectral cross-correlation between different bands and proposed an adaptive band selection method to obtain the spectral bands that contain most of the information of the acquired hyperspectral data cube. The proposed method mainly consists of three steps: first, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the cross-correlation matrix between different bands; then a wavelet-based algorithm is applied to each subspace; finally, the PCA method is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.
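
    A simplified sketch of the first and last steps described above, assuming NumPy (the wavelet stage is omitted): bands are greedily grouped by spectral cross-correlation, and each group is reduced to a few principal-component planes. The threshold and component count are assumptions.

```python
import numpy as np

def correlated_band_groups(cube, thresh=0.95):
    """cube: (bands, rows, cols). Group consecutive bands whose correlation
    with the first band of the current group exceeds `thresh`."""
    b = cube.shape[0]
    corr = np.corrcoef(cube.reshape(b, -1).astype(float))
    groups, start = [], 0
    for i in range(1, b + 1):
        if i == b or corr[start, i] < thresh:
            groups.append(list(range(start, i)))
            start = i
    return groups

def compress_group(cube, group, n_comp=2):
    """Reduce one group of bands to n_comp component planes plus per-band weights."""
    x = cube[group].reshape(len(group), -1).astype(float)
    x -= x.mean(axis=1, keepdims=True)
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return u[:, :n_comp] * s[:n_comp], vt[:n_comp]
```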

  4. An airborne thematic thermal infrared and electro-optical imaging system

    NASA Astrophysics Data System (ADS)

    Sun, Xiuhong; Shu, Peter

    2011-08-01

    This paper describes an advanced Airborne Thematic Thermal InfraRed and Electro-Optical Imaging System (ATTIREOIS) and its potential applications. ATTIREOIS sensor payload consists of two sets of advanced Focal Plane Arrays (FPAs) - a broadband Thermal InfraRed Sensor (TIRS) and a four (4) band Multispectral Electro-Optical Sensor (MEOS) to approximate Landsat ETM+ bands 1,2,3,4, and 6, and LDCM bands 2,3,4,5, and 10+11. The airborne TIRS is 3-axis stabilized payload capable of providing 3D photogrammetric images with a 1,850 pixel swathwidth via pushbroom operation. MEOS has a total of 116 million simultaneous sensor counts capable of providing 3 cm spatial resolution multispectral orthophotos for continuous airborne mapping. ATTIREOIS is a complete standalone and easy-to-use portable imaging instrument for light aerial vehicle deployment. Its miniaturized backend data system operates all ATTIREOIS imaging sensor components, an INS/GPS, and an e-Gimbal™ Control Electronic Unit (ECU) with a data throughput of 300 Megabytes/sec. The backend provides advanced onboard processing, performing autonomous raw sensor imagery development, TIRS image track-recovery reconstruction, LWIR/VNIR multi-band co-registration, and photogrammetric image processing. With geometric optics and boresight calibrations, the ATTIREOIS data products are directly georeferenced with an accuracy of approximately one meter. A prototype ATTIREOIS has been configured. Its sample LWIR/EO image data will be presented. Potential applications of ATTIREOIS include: 1) Providing timely and cost-effective, precisely and directly georeferenced surface emissive and solar reflective LWIR/VNIR multispectral images via a private Google Earth Globe to enhance NASA's Earth science research capabilities; and 2) Underflight satellites to support satellite measurement calibration and validation observations.

  5. New Mission Concept Study: Energetic X-Ray Imaging Survey Telescope (EXIST)

    NASA Technical Reports Server (NTRS)

    1998-01-01

    This Report summarizes the activity carried out under the New Mission Concept (NMC) study for a mission to conduct a sensitive all-sky imaging survey in the hard x-ray (HX) band (approximately 10-600 keV). The Energetic X-ray Imaging Survey Telescope (EXIST) mission was originally proposed for this NMC study and was then subsequently proposed for a MIDEX mission as part of this study effort. Development of the EXIST (and related) concepts continues for a future flight proposal. The hard x-ray band (approximately 10-600 keV) is nearly the final band of the astronomical spectrum still without a sensitive imaging all-sky survey. This is despite the enormous potential of this band to address a wide range of fundamental and timely objectives - from the origin and physical mechanisms of cosmological gamma-ray bursts (GRBs) to the processes on strongly magnetic neutron stars that produce soft gamma-repeaters and bursting pulsars; from the study of active galactic nuclei (AGN) and quasars to the origin and evolution of the hard x-ray diffuse background; from the nature and number of black holes and neutron stars and the accretion processes onto them to the extreme non-thermal flares of normal stars; and from searches for expected diffuse (but relatively compact) nuclear line (Ti-44) emission in uncatalogued supernova remnants to diffuse non-thermal inverse Compton emission from galaxy clusters. A high sensitivity all-sky survey mission in the hard x-ray band, with imaging to both address source confusion and time-variable background radiations, is very much needed.

  6. Landsat TM image maps of the Shirase and Siple Coast ice streams, West Antarctica

    USGS Publications Warehouse

    Ferrigno, Jane G.; Mullins, Jerry L.; Stapleton, Jo Anne; Bindschadler, Robert; Scambos, Ted A.; Bellisime, Lynda B.; Bowell, Jo-Ann; Acosta, Alex V.

    1994-01-01

    Fifteen 1:250,000-scale and one 1:1,000,000-scale Landsat Thematic Mapper (TM) image mosaic maps are currently being produced of the West Antarctic ice streams on the Shirase and Siple Coasts. Landsat TM images were acquired between 1984 and 1990 in an area bounded approximately by 78°-82.5°S and 120°-160°W. Landsat TM bands 2, 3 and 4 were combined to produce a single band, thereby maximizing data content and improving the signal-to-noise ratio. The summed single band was processed with a combination of high- and low-pass filters to remove longitudinal striping and normalize solar elevation-angle effects. The images were mosaicked and transformed to a Lambert conformal conic projection using a cubic-convolution algorithm. The projection transformation was controlled with ten weighted geodetic ground-control points and internal image-to-image pass points, with annotation of major glaciological features. The image maps are being published in two formats: conventional printed map sheets and on a CD-ROM.
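
    A rough sketch of the two processing steps mentioned above, assuming SciPy; the filter size and the assumption that striping can be modelled as a mean profile along one axis are illustrative simplifications of the map-production workflow.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sum_and_destripe(b2, b3, b4, stripe_axis=0, sigma=25):
    """Sum TM bands 2, 3 and 4, then remove longitudinal striping."""
    summed = b2.astype(float) + b3 + b4
    low = gaussian_filter(summed, sigma=sigma)            # low-pass: large-scale illumination
    resid = summed - low                                  # high-pass residual containing the stripes
    stripe = resid.mean(axis=stripe_axis, keepdims=True)  # mean stripe profile along the track
    return summed - stripe
```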

  7. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the South Bamyan mineral district in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Davis, Philip A.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the South Bamyan mineral district, which has areas with a spectral reflectance anomaly that require field investigation. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007, 2008),but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for South Bamyan) and the WGS84 datum. The final image mosaics for the South Bamyan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
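
    A minimal sketch of the per-band radiometric adjustment described above, assuming NumPy: reflectance values of an image are fitted to the standard image over their overlap with a linear least-squares line, and the fit is then applied to the whole band. The function and variable names are assumptions.

```python
import numpy as np

def match_to_standard(band, std_band, overlap_mask):
    """Adjust `band` so its reflectance scale matches `std_band` over the overlap."""
    x = band[overlap_mask].astype(float).ravel()
    y = std_band[overlap_mask].astype(float).ravel()
    gain, offset = np.polyfit(x, y, 1)     # least-squares fit y = gain * x + offset
    return gain * band.astype(float) + offset
```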

  8. Playback system designed for X-Band SAR

    NASA Astrophysics Data System (ADS)

    Yuquan, Liu; Changyong, Dou

    2014-03-01

    SAR (Synthetic Aperture Radar) has extensive applications because it is daylight- and weather-independent. In particular, the X-Band SAR strip-map mode, designed by the Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, provides high ground-resolution images while also offering large spatial coverage and a short acquisition time, so it is promising for many applications. When a sudden disaster strikes, emergency response requires radar signal data and imagery as soon as possible in order to take action quickly to reduce losses and save lives. This paper summarizes a type of X-Band SAR playback processing system designed for disaster response and scientific needs. It describes the SAR data workflow, including the payload data transmission and reception process. The playback processing system performs signal analysis on the original data, providing SAR level 0 products and quick-look images. A gigabit network ensures efficient radar signal transmission from the recorder to the computation unit. Multi-threaded parallel computing and ping-pong operation ensure computation speed. Through the gigabit network, multi-threaded parallel computing, and ping-pong operation, high-speed data transmission and processing meet the real-time requirement for SAR radar data playback.

  9. Mutual information registration of multi-spectral and multi-resolution images of DigitalGlobe's WorldView-3 imaging satellite

    NASA Astrophysics Data System (ADS)

    Miecznik, Grzegorz; Shafer, Jeff; Baugh, William M.; Bader, Brett; Karspeck, Milan; Pacifici, Fabio

    2017-05-01

    WorldView-3 (WV-3) is a DigitalGlobe commercial, high resolution, push-broom imaging satellite with three instruments: visible and near-infrared VNIR consisting of panchromatic (0.3m nadir GSD) plus multi-spectral (1.2m), short-wave infrared SWIR (3.7m), and multi-spectral CAVIS (30m). Nine VNIR bands, which are on one instrument, are nearly perfectly registered to each other, whereas eight SWIR bands, belonging to the second instrument, are misaligned with respect to VNIR and to each other. Geometric calibration and ortho-rectification results in a VNIR/SWIR alignment which is accurate to approximately 0.75 SWIR pixel at 3.7m GSD, whereas inter-SWIR, band to band registration is 0.3 SWIR pixel. Numerous high resolution, spectral applications, such as object classification and material identification, require more accurate registration, which can be achieved by utilizing image processing algorithms, for example Mutual Information (MI). Although MI-based co-registration algorithms are highly accurate, implementation details for automated processing can be challenging. One particular challenge is how to compute bin widths of intensity histograms, which are fundamental building blocks of MI. We solve this problem by making the bin widths proportional to instrument shot noise. Next, we show how to take advantage of multiple VNIR bands, and improve registration sensitivity to image alignment. To meet this goal, we employ Canonical Correlation Analysis, which maximizes VNIR/SWIR correlation through an optimal linear combination of VNIR bands. Finally we explore how to register images corresponding to different spatial resolutions. We show that MI computed at a low-resolution grid is more sensitive to alignment parameters than MI computed at a high-resolution grid. The proposed modifications allow us to improve VNIR/SWIR registration to better than ¼ of a SWIR pixel, as long as terrain elevation is properly accounted for, and clouds and water are masked out.
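
    A minimal sketch of a mutual-information similarity measure in the spirit described above, with histogram bin widths tied to an assumed per-image shot-noise level; the constant k and the bin-count floor are illustrative assumptions, not DigitalGlobe's implementation.

```python
import numpy as np

def mutual_information(a, b, noise_a, noise_b, k=2.0):
    """a, b: co-registered image chips; noise_a, noise_b: shot-noise sigmas."""
    bins_a = max(8, int((a.max() - a.min()) / (k * noise_a)))
    bins_b = max(8, int((b.max() - b.min()) / (k * noise_b)))
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=(bins_a, bins_b))
    p = joint / joint.sum()
    pa, pb = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / np.outer(pa, pb)[nz])))
```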

  10. A novel built-up spectral index developed by using multiobjective particle-swarm-optimization technique

    NASA Astrophysics Data System (ADS)

    Sameen, Maher Ibrahim; Pradhan, Biswajeet

    2016-06-01

    In this study, we propose a novel built-up spectral index developed by using the particle-swarm-optimization (PSO) technique for WorldView-2 images. PSO was used to select the relevant bands from the eight spectral bands of a WorldView-2 image, which were then used for index development. Multiobjective optimization was used to minimize the number of selected spectral bands and to maximize the classification accuracy. The results showed that the most important and relevant spectral bands among the eight bands for built-up area extraction are band 4 (yellow) and band 7 (NIR1). Using those relevant spectral bands, the final spectral index was formulated by developing a normalized band ratio. The validation of the classification result using the proposed spectral index showed that our novel spectral index performs well compared to the existing WV-BI index. The accuracy assessment showed that the new proposed spectral index could extract built-up areas from a WorldView-2 image with an area under the curve (AUC) of 0.76, indicating the effectiveness of the developed spectral index. Further improvement could be achieved by using several datasets during the index development process to ensure the transferability of the index to other datasets and study areas.
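
    A minimal sketch of a normalized ratio built from the two bands the study selected (WorldView-2 band 4, yellow, and band 7, NIR1); the exact published formulation of the index may differ from this assumed normalized-difference form.

```python
import numpy as np

def builtup_index(yellow, nir1, eps=1e-6):
    yellow = np.asarray(yellow, dtype=float)
    nir1 = np.asarray(nir1, dtype=float)
    return (yellow - nir1) / (yellow + nir1 + eps)
```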

  11. (abstract) Topographic Signatures in Geology

    NASA Technical Reports Server (NTRS)

    Farr, Tom G.; Evans, Diane L.

    1996-01-01

    Topographic information is required for many Earth Science investigations. For example, topography is an important element in regional and global geomorphic studies because it reflects the interplay between the climate-driven processes of erosion and the tectonic processes of uplift. A number of techniques have been developed to analyze digital topographic data, including Fourier texture analysis. A Fourier transform of the topography of an area allows the spatial frequency content of the topography to be analyzed. Band-pass filtering of the transform produces images representing the amplitude of different spatial wavelengths. These are then used in a multi-band classification to map units based on their spatial frequency content. The results using a radar image instead of digital topography showed good correspondence to a geologic map; however, brightness variations in the image unrelated to topography caused errors. An additional benefit of using Fourier band-pass images for the classification is that the textural signatures of the units are quantitative measures of the spatial characteristics of the units, which may be used to map similar units in similar environments.
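
    A minimal sketch of the Fourier texture step described above, assuming NumPy: the topography (or radar image) is transformed, band-pass filtered in the frequency domain, and the amplitude of the filtered result is used as a texture band. The band edges in cycles per pixel are assumptions.

```python
import numpy as np

def bandpass_amplitude(img, low, high):
    """Amplitude image of the spatial frequencies between `low` and `high` (cycles/pixel)."""
    f = np.fft.fftshift(np.fft.fft2(np.asarray(img, dtype=float)))
    fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))
    fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    mask = (radius >= low) & (radius < high)
    return np.abs(np.fft.ifft2(np.fft.ifftshift(f * mask)))

# Example texture bands for a multi-band classification:
# bands = [bandpass_amplitude(dem, lo, hi) for lo, hi in [(0.01, 0.05), (0.05, 0.15), (0.15, 0.5)]]
```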

  12. Blind deconvolution of astronomical images with band limitation determined by optical system parameters

    NASA Astrophysics Data System (ADS)

    Luo, L.; Fan, M.; Shen, M. Z.

    2007-07-01

    Atmospheric turbulence greatly limits the spatial resolution of astronomical images acquired by large ground-based telescopes. The recorded image obtained from the telescope is modeled as the convolution of the object function with the point spread function. The statistical relationship among the measured image data, the estimated object, and the point spread function follows the Bayes conditional probability distribution, from which a maximum-likelihood formulation is obtained. A blind deconvolution approach based on maximum-likelihood estimation with a real optical band-limitation constraint is presented for removing the effect of atmospheric turbulence on this class of images, through minimization of the convolution error function using a conjugate-gradient optimization algorithm. As a result, the object function and the point spread function can be estimated simultaneously from a few recorded images by the blind deconvolution algorithm. According to the principles of Fourier optics, the relationship between the telescope optical system parameters and the image band constraint in the frequency domain was formulated for the transformations between the spatial and frequency domains during image processing. Convergence of the algorithm is improved by keeping the estimated functions (the object function and the point spread function) nonnegative and the point spread function band limited. To avoid losing Fourier components beyond the cutoff frequency during the transformations, when the sampled image data, the spatial domain, and the frequency domain are of the same size, the detector element (e.g., a pixel in the CCD) should be smaller than a quarter of the diffraction speckle diameter of the telescope when acquiring images at the focal plane. The proposed method can easily be applied to the restoration of wide-field turbulence-degraded images because no object support constraint is used in the algorithm. The validity of the method is examined by computer simulation and by restoration of real Alpha Psc astronomical image data. The results suggest that blind deconvolution with the real optical band constraint can remove the effect of atmospheric turbulence on the observed images, and that the spatial resolution of the object image can reach or exceed the diffraction-limited level.

  13. Electrophoresis gel image processing and analysis using the KODAK 1D software.

    PubMed

    Pizzonia, J

    2001-06-01

    The present article reports on the performance of the KODAK 1D Image Analysis Software for the acquisition of information from electrophoresis experiments and highlights the utility of several mathematical functions for subsequent image processing, analysis, and presentation. Digital images of Coomassie-stained polyacrylamide protein gels containing molecular weight standards and ethidium bromide-stained agarose gels containing DNA mass standards are acquired using the KODAK Electrophoresis Documentation and Analysis System 290 (EDAS 290). The KODAK 1D software is used to optimize lane and band identification using features such as isomolecular weight lines. Mathematical functions for mass standard representation are presented, and two methods for estimation of unknown band mass are compared. Given the progressive transition of electrophoresis data acquisition and daily reporting in peer-reviewed journals to digital formats, ranging from 8-bit systems such as EDAS 290 to more expensive 16-bit systems, the utility of algorithms such as Gaussian modeling, which can correct geometric aberrations such as clipping due to signal saturation that is common at lower bit depths, is discussed. Finally, image-processing tools that can facilitate image preparation for presentation are demonstrated.
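
    The Gaussian-modeling idea mentioned above (recovering a band whose peak is clipped by signal saturation) can be sketched as a curve fit that simply ignores the saturated samples of a lane profile. The function names, saturation threshold, and initial guesses are illustrative assumptions, not the KODAK 1D implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, base):
    return base + amp * np.exp(-(x - mu)**2 / (2 * sigma**2))

def fit_clipped_band(x, profile, saturation):
    """Fit a Gaussian to a 1-D band profile while excluding saturated
    (clipped) samples, and return the modelled peak height and area."""
    keep = profile < saturation                       # drop clipped pixels
    p0 = [profile[keep].max(),                        # rough amplitude guess
          x[keep][np.argmax(profile[keep])],          # rough centre guess
          2.0,                                        # rough width guess
          profile.min()]                              # rough baseline guess
    popt, _ = curve_fit(gaussian, x[keep], profile[keep], p0=p0)
    amp, mu, sigma, base = popt
    area = amp * sigma * np.sqrt(2 * np.pi)           # proportional to band mass
    return amp + base, area
```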

  14. White-Light Optical Information Processing and Holography.

    DTIC Science & Technology

    1984-06-22

    Keywords: optical information processing, image deblurring, source encoding, signal sampling, coherence measurement, noise performance, pseudocolor encoding. [The report abstract and table of contents are only partially legible; recoverable section headings include 2.1 Broad Spectral Band Color Image Deblurring, 2.2 Noise Performance, and 2.3 Pseudocolor Encoding with Three Primary...] The technique is particularly suitable for deblurring linearly smeared color images.

  15. Space Radar Image of Oberpfaffenhofen, Germany

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a false-color, three-frequency image of the Oberpfaffenhofen supersite, southwest of Munich in southern Germany, which shows the differences in what the three radar bands can see on the ground. The image covers a 27- by 36-kilometer (17- by 22-mile) area. The center of the site is 48.09 degrees north and 11.29 degrees east. The image was acquired by the Spaceborne Imaging Radar C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) onboard space shuttle Endeavour on April 13, 1994, just after a heavy storm that covered the entire area with 20 centimeters (8 inches) of snow. The dark area in the center of the image is Lake Ammersee. The two smaller lakes above the Ammersee are the Worthsee and the Pilsensee. On the right of the image is the tip of the Starnbergersee. The outskirts of the city of Munich can be seen at the top of the image. The Oberpfaffenhofen supersite is the major test site for X-SAR calibration and for scientific experiments in ecology, hydrology and geology. This color composite image is a three-frequency overlay: L-band total power was assigned red, C-band total power is shown in green and the X-band VV polarization appears blue. The colors in the image stress the differences between the L-band, C-band and X-band images; if the three frequencies were seeing the same thing, the image would appear in black and white. For example, the blue areas correspond to areas for which the X-band backscatter is relatively higher than the backscatter at L- and C-band; this behavior is characteristic of clear cuts or shorter vegetation. Similarly, the forested areas have a reddish tint. Finally, the green areas seen at the southern tip of both the Ammersee and the Pilsensee lakes indicate a marshy area. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V. (DLR), the major partner in science, operations and data processing of X-SAR.
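
    Compositing three radar bands into a false-color overlay of the kind described above reduces to a per-band contrast stretch followed by stacking the bands into the red, green, and blue channels. The sketch below assumes three co-registered backscatter arrays and an arbitrary percentile stretch; it is illustrative, not the SIR-C/X-SAR production code.

```python
import numpy as np

def stretch(band, lo=2, hi=98):
    """Linear percentile stretch of one backscatter image to the 0..1 range."""
    a, b = np.percentile(band, [lo, hi])
    return np.clip((band - a) / (b - a + 1e-12), 0, 1)

def false_color(l_total, c_total, x_vv):
    """R = L-band total power, G = C-band total power, B = X-band VV,
    matching the channel assignments described in the caption."""
    return np.dstack([stretch(l_total), stretch(c_total), stretch(x_vv)])
```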

  16. Integrated filter and detector array for spectral imaging

    NASA Technical Reports Server (NTRS)

    Labaw, Clayton C. (Inventor)

    1992-01-01

    A spectral imaging system having an integrated filter and photodetector array is disclosed. The filter has narrow transmission bands which vary in frequency along the photodetector array. The frequency variation of the transmission bands is matched to, and aligned with, the frequency variation of a received spectral image. The filter is deposited directly on the photodetector array by a low temperature deposition process. By depositing the filter directly on the photodetector array, permanent alignment is achieved for all temperatures, spectral crosstalk is substantially eliminated, and a high signal to noise ratio is achieved.

  17. Discrimination of iron ore deposits of granulite terrain of Southern Peninsular India using ASTER data

    NASA Astrophysics Data System (ADS)

    Rajendran, Sankaran; Thirunavukkarasu, A.; Balamurugan, G.; Shankar, K.

    2011-04-01

    This work describes a new image processing technique for discriminating iron ores (magnetite quartzite deposits) and associated lithology in the high-grade granulite region of Salem, Southern Peninsular India, using visible, near-infrared and shortwave infrared reflectance data of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER). Image spectra show that the magnetite quartzite and the associated lithology of garnetiferous pyroxene granulite, hornblende biotite gneiss, amphibolite, dunite, and pegmatite have absorption features around spectral bands 1, 3, 5, and 7. ASTER band ratios ((1 + 3)/2, (3 + 5)/4, (5 + 7)/6) in RGB are constructed by summing the bands representing the shoulders of an absorption feature as the numerator and using the band located nearest the absorption feature as the denominator to map the iron ores, and band ratios ((2 + 4)/3, (5 + 7)/6, (7 + 9)/8) in RGB are used for the associated lithology. The results show that the ASTER band ratios ((1 + 3)/2, (3 + 5)/4, (5 + 7)/6) in a Red-Green-Blue (RGB) color combination identify the iron ores much better than previously published ASTER band-ratio analyses. A Principal Component Analysis (PCA) is applied to reduce redundant information in highly correlated bands. PCA (components 3, 2, and 1 for iron ores and 5, 4, 2 for granulite rock) in RGB enabled the discrimination between the iron ores and the garnetiferous pyroxene granulite rock. Thus, this image processing technique is well suited for discriminating the different rock types of the granulite region. As an outcome of the present work, a geologic map of the Salem region is provided based on the interpretation of the ASTER image results and field verification work. The proposed methods have great potential for mapping iron ores and associated lithology in other granulite regions of Southern Peninsular India with similar rock units. This work also demonstrates the ability of ASTER data to provide information on iron ores, which is valuable for mineral prospecting and exploration activities.
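
    The band-ratio composite described above can be sketched directly from the ratios quoted in the abstract. The code assumes co-registered ASTER reflectance arrays held in a dictionary keyed by band number; the display scaling is an illustrative choice, not part of the published method.

```python
import numpy as np

def ratio(shoulder_a, shoulder_b, denom):
    """(shoulder_a + shoulder_b) / denom with a small guard against division by zero."""
    return (shoulder_a + shoulder_b) / (denom + 1e-12)

def iron_ore_composite(b):
    """b: dict of reflectance arrays keyed by ASTER band number.
    Builds the (1+3)/2, (3+5)/4, (5+7)/6 ratios in R, G, B as in the abstract."""
    r = ratio(b[1], b[3], b[2])
    g = ratio(b[3], b[5], b[4])
    bl = ratio(b[5], b[7], b[6])
    stack = np.dstack([r, g, bl])
    # scale each ratio band to 0..1 for display
    mn = stack.min(axis=(0, 1))
    mx = stack.max(axis=(0, 1))
    return (stack - mn) / (mx - mn + 1e-12)
```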

  18. Effective use of principal component analysis with high resolution remote sensing data to delineate hydrothermal alteration and carbonate rocks

    NASA Technical Reports Server (NTRS)

    Feldman, Sandra C.

    1987-01-01

    Methods of applying principal component (PC) analysis to high resolution remote sensing imagery were examined. Using Airborne Imaging Spectrometer (AIS) data, PC analysis was found to be useful for removing the effects of albedo and noise and for isolating the significant information on argillic alteration, zeolite, and carbonate minerals. An effective technique for PC analysis used as input the first 16 AIS bands, 7 intermediate bands, and the last 16 AIS bands from the 32 flat-field-corrected bands between 2048 and 2337 nm. Most of the significant mineralogical information resided in the second PC. PC color composites and density-sliced images provided a good mineralogical separation when applied to an AIS data set. Although computationally intensive, the advantage of PC analysis is that it employs algorithms which already exist on most image processing systems.
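
    The PC transform itself is standard and can be sketched as below for a hyperspectral cube; the choice of input bands and the number of components kept are assumptions for illustration, not the AIS-specific band grouping used in the study.

```python
import numpy as np

def principal_components(cube, n_keep=3):
    """cube: (rows, cols, bands) reflectance array.
    Returns the first n_keep principal-component images (most variance first)."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    X -= X.mean(axis=0)                              # remove the per-band mean (albedo offset)
    cov = np.cov(X, rowvar=False)                    # band-to-band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)           # ascending eigenvalue order
    order = np.argsort(eigvals)[::-1]                # largest variance first
    pcs = X @ eigvecs[:, order[:n_keep]]
    return pcs.reshape(rows, cols, n_keep)
```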

  19. An optimized digital watermarking algorithm in wavelet domain based on differential evolution for color image.

    PubMed

    Cui, Xinchun; Niu, Yuying; Zheng, Xiangwei; Han, Yingshuai

    2018-01-01

    In this paper, a new color watermarking algorithm based on differential evolution is proposed. A color host image is first converted from RGB space to YIQ space, which is more suitable for the human visual system. A three-level discrete wavelet transform is then applied to the luminance component Y, generating four frequency sub-bands, and singular value decomposition is performed on these sub-bands. In the watermark embedding process, a discrete wavelet transform is applied to the watermark image after scrambling encryption. The new algorithm uses a differential evolution algorithm with adaptive optimization to choose the appropriate scaling factors. Experimental results show that the proposed algorithm performs well in terms of both invisibility and robustness.
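
    A stripped-down sketch of the DWT + SVD embedding step is shown below, assuming the PyWavelets package is available. It embeds into the approximation sub-band only and uses a fixed scaling factor where the paper tunes the factors with differential evolution; the wavelet, level, and resizing of the watermark are illustrative choices.

```python
import numpy as np
import pywt  # PyWavelets, assumed installed

def embed_watermark(y_channel, watermark, alpha=0.05):
    """Sketch: embed a watermark into the singular values of the level-3
    approximation sub-band of the luminance channel. `alpha` stands in for
    the scaling factor the paper selects with differential evolution."""
    coeffs = pywt.wavedec2(y_channel, 'haar', level=3)
    ll = coeffs[0]                                    # approximation sub-band
    u, s, vt = np.linalg.svd(ll, full_matrices=False)
    wm = np.resize(watermark, s.shape)                # crude size match for the sketch
    s_marked = s + alpha * wm                         # perturb singular values
    coeffs[0] = u @ np.diag(s_marked) @ vt
    return pywt.waverec2(coeffs, 'haar')              # watermarked luminance channel
```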

  20. Impacts of Cross-Platform Vicarious Calibration on the Deep Blue Aerosol Retrievals for Moderate Resolution Imaging Spectroradiometer Aboard Terra

    NASA Technical Reports Server (NTRS)

    Jeong, Myeong-Jae; Hsu, N. Christina; Kwiatkowska, Ewa J.; Franz, Bryan A.; Meister, Gerhard; Salustro, Clare E.

    2012-01-01

    The retrieval of aerosol properties from spaceborne sensors requires highly accurate and precise radiometric measurements, thus placing stringent requirements on sensor calibration and characterization. For the Terra/Moderate Resolution Imaging Spectroradiometer (MODIS), the characteristics of the detectors of certain bands, particularly band 8 [(B8); 412 nm], have changed significantly over time, leading to increased calibration uncertainty. In this paper, we explore the possibility of utilizing a cross-calibration method, developed for characterizing the Terra/MODIS detectors in the ocean bands by the National Aeronautics and Space Administration Ocean Biology Processing Group, to improve aerosol retrieval over bright land surfaces. We found that the Terra/MODIS B8 reflectance corrected using the cross-calibration method resulted in significant improvements in the retrieved aerosol optical thickness when compared with that from the Multi-angle Imaging Spectroradiometer, Aqua/MODIS, and the Aerosol Robotic Network. The method reported in this paper is implemented for the operational processing of the Terra/MODIS Deep Blue aerosol products.

  1. Portable real-time color night vision

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; Hogervorst, Maarten A.

    2008-03-01

    We developed a simple and fast lookup-table-based method to derive and apply natural daylight colors to multi-band night-time images. The method deploys an optimal color transformation derived from a set of samples taken from a daytime color reference image. The colors in the resulting colorized multi-band night-time images closely resemble the colors in the daytime color reference image. Also, object colors remain invariant under panning operations and are independent of the scene content. Here we describe the implementation of this method in two prototype portable dual-band real-time night vision systems. One system provides co-aligned visual and near-infrared bands from two image intensifiers; the other provides co-aligned images from a digital image intensifier and an uncooled longwave infrared microbolometer. The co-aligned images from both systems are further processed by a notebook computer. The color mapping is implemented as a real-time lookup table transform. The resulting colorized video streams can be displayed in real time on head-mounted displays and stored on the hard disk of the notebook computer. Preliminary field trials demonstrate the potential of these systems for applications such as surveillance, navigation and target detection.
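
    A lookup-table color transform of this kind can be sketched as a two-dimensional table indexed by the two sensor-band intensities: the table is trained once from co-registered night-time and daytime reference imagery, and colorization at run time is a single lookup per pixel. The bin count, training procedure, and variable names below are assumptions for illustration, not the published transform.

```python
import numpy as np

def build_lut(band1, band2, day_rgb, n_bins=64):
    """Average daytime RGB for each (band1, band2) intensity bin, learned
    from co-registered training imagery (band1/band2 scaled to 0..1,
    day_rgb of shape (H, W, 3))."""
    i = np.clip((band1 * (n_bins - 1)).astype(int), 0, n_bins - 1)
    j = np.clip((band2 * (n_bins - 1)).astype(int), 0, n_bins - 1)
    lut = np.zeros((n_bins, n_bins, 3))
    count = np.zeros((n_bins, n_bins, 1))
    np.add.at(lut, (i, j), day_rgb)                   # accumulate RGB per bin
    np.add.at(count, (i, j), 1)                       # count samples per bin
    return lut / np.maximum(count, 1)

def apply_lut(band1, band2, lut):
    """Real-time colorization is then one table lookup per pixel."""
    n_bins = lut.shape[0]
    i = np.clip((band1 * (n_bins - 1)).astype(int), 0, n_bins - 1)
    j = np.clip((band2 * (n_bins - 1)).astype(int), 0, n_bins - 1)
    return lut[i, j]
```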

  2. Progressive sample processing of band selection for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Liu, Keng-Hao; Chien, Hung-Chang; Chen, Shih-Yu

    2017-10-01

    Band selection (BS) is one of the most important topics in hyperspectral image (HSI) processing. The objective of BS is to find a set of representative bands that can represent the whole image with low inter-band redundancy. Many types of BS algorithms have been proposed in the past. However, most of them can only be carried out in an off-line manner, meaning they can only be applied to pre-collected data. Such off-line methods are of little use for time-critical applications, particularly in disaster prevention and target detection. To tackle this issue, a new concept, called progressive sample processing (PSP), was proposed recently. PSP is an "on-line" framework in which an algorithm can process the currently collected data during data transmission under the band-interleaved-by-sample/pixel (BIS/BIP) protocol. This paper proposes an on-line BS method that integrates a sparsity-based BS into the PSP framework, called PSP-BS. In PSP-BS, the BS result is updated recursively pixel by pixel, in the same way that a Kalman filter updates data information in a recursive fashion. The sparse regression is solved by the orthogonal matching pursuit (OMP) algorithm, and the recursive equations of PSP-BS are derived using matrix decomposition. Experiments conducted on a real hyperspectral image show that PSP-BS can progressively output the BS status with very low computing time. Convergence of the BS results during transmission can be achieved quickly by using a rearranged pixel transmission sequence. This significant advantage allows BS to be implemented in real time as the HSI data are transmitted pixel by pixel.
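
    To make the sparsity-based selection step concrete, the sketch below shows a simple off-line, greedy OMP-style band selection on a pixels-by-bands matrix: each step picks the band most correlated with the current reconstruction residual and refits by least squares. This is an illustrative analogue only; it does not reproduce the recursive, pixel-by-pixel PSP-BS update equations of the paper.

```python
import numpy as np

def omp_band_selection(X, n_bands):
    """Greedy OMP-style band selection on X of shape (pixels, bands):
    repeatedly pick the band most correlated with the residual of
    reconstructing the full data from the bands selected so far."""
    residual = X.astype(float).copy()
    selected = []
    for _ in range(n_bands):
        scores = np.linalg.norm(X.T @ residual, axis=1)   # correlation of each band with the residual
        scores[selected] = -np.inf                        # never reselect a band
        selected.append(int(np.argmax(scores)))
        A = X[:, selected]                                # least-squares refit on the selected bands
        coef, *_ = np.linalg.lstsq(A, X, rcond=None)
        residual = X - A @ coef
    return selected
```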

  3. Hyperspectral imaging technique for detection of poultry fecal residues on food processing equipments

    NASA Astrophysics Data System (ADS)

    Cho, Byoung-Kwan; Kim, Moon S.; Chen, Yud-Ren

    2005-11-01

    Emerging concerns about safety and security in the current mass production of food products necessitate rapid and reliable inspection for contaminant-free products. Diluted fecal residues on poultry processing plant equipment surfaces, not easily discernible from water by the human eye, are contamination sources for poultry carcasses. Development of sensitive detection methods for fecal residues is essential to ensure safe production of poultry carcasses. Hyperspectral imaging techniques have shown good potential for detecting the presence of fecal and other biological substances on food and processing equipment surfaces. In this study, the use of high-spatial-resolution hyperspectral reflectance and fluorescence imaging (with UV-A excitation) is presented as a tool for selecting a few multispectral bands to detect diluted fecal and ingesta residues on materials used for manufacturing processing equipment. Reflectance and fluorescence imaging methods were compared for their potential to detect a range of diluted fecal residues on the surfaces of processing plant equipment. Results showed that low concentrations of poultry feces and ingesta, diluted up to 1:100 by weight with double-distilled water, could be detected using hyperspectral fluorescence images with an accuracy of 97.2%. The spectral bands determined in this study could be used for developing a real-time multispectral inspection device for detection of harmful organic residues on processing plant equipment.

  4. Einstein-Podolsky-Rosen Entanglement of Narrow-Band Photons from Cold Atoms.

    PubMed

    Lee, Jong-Chan; Park, Kwang-Kyoon; Zhao, Tian-Ming; Kim, Yoon-Ho

    2016-12-16

    Einstein-Podolsky-Rosen (EPR) entanglement, introduced in 1935, deals with two particles that are entangled in their positions and momenta. Here we report the first experimental demonstration of EPR position-momentum entanglement of narrow-band photon pairs generated from cold atoms. By using two-photon quantum ghost imaging and ghost interference, we demonstrate explicitly that the narrow-band photon pairs violate the separability criterion, confirming EPR entanglement. We further demonstrate continuous-variable EPR steering for the positions and momenta of the two photons. Our new source of EPR-entangled narrow-band photons is expected to play an essential role in spatially multiplexed quantum information processing, such as the storage of quantum-correlated images and quantum interfaces involving hyperentangled photons.

  5. Einstein-Podolsky-Rosen Entanglement of Narrow-Band Photons from Cold Atoms

    NASA Astrophysics Data System (ADS)

    Lee, Jong-Chan; Park, Kwang-Kyoon; Zhao, Tian-Ming; Kim, Yoon-Ho

    2016-12-01

    Einstein-Podolsky-Rosen (EPR) entanglement, introduced in 1935, deals with two particles that are entangled in their positions and momenta. Here we report the first experimental demonstration of EPR position-momentum entanglement of narrow-band photon pairs generated from cold atoms. By using two-photon quantum ghost imaging and ghost interference, we demonstrate explicitly that the narrow-band photon pairs violate the separability criterion, confirming EPR entanglement. We further demonstrate continuous-variable EPR steering for the positions and momenta of the two photons. Our new source of EPR-entangled narrow-band photons is expected to play an essential role in spatially multiplexed quantum information processing, such as the storage of quantum-correlated images and quantum interfaces involving hyperentangled photons.

  6. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition.

    PubMed

    Park, Chulhee; Kang, Moon Gi

    2016-05-18

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible-band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome this color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible-band component and the NIR-band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors.
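
    The decomposition idea can be sketched as a simple linear model in which each raw RGB channel is the sum of a visible component and an NIR contamination term proportional to the N channel. The per-channel weights below are illustrative placeholders; in the paper they follow from the measured spectral characteristics of the MSFA sensor, and the actual method performs full spectral estimation rather than a fixed subtraction.

```python
import numpy as np

def restore_rgb(raw_rgbn, nir_weights=(0.30, 0.25, 0.35)):
    """Sketch: subtract an estimated NIR contribution from each raw RGB
    channel while leaving the N channel intact. raw_rgbn is assumed to be
    an (H, W, 4) array in R, G, B, N order; nir_weights are illustrative
    contamination factors, not measured sensor values."""
    r, g, b, n = [raw_rgbn[..., k].astype(float) for k in range(4)]
    w = nir_weights
    rgb = np.dstack([r - w[0] * n, g - w[1] * n, b - w[2] * n])
    return np.clip(rgb, 0, None), n                   # restored color plus the NIR channel
```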

  7. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition

    PubMed Central

    Park, Chulhee; Kang, Moon Gi

    2016-01-01

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible-band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome this color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible-band component and the NIR-band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors. PMID:27213381

  8. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Balkhab mineral district in Afghanistan: Chapter B in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Balkhab mineral district, which has copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). 
Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Balkhab) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the Balkhab area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Balkhab study area, one subarea was designated for detailed field investigations (that is, the Balkhab Prospect subarea); this subarea was extracted from the area's image mosaic and is provided as separate embedded geotiff images.
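
    Two of the processing steps described above (adjusting each image's band reflectance to the standard image over their overlap by linear least squares, and the local-area histogram stretch applied to each band of the final mosaic) can be sketched as follows. The window form of the local stretch, the variable names, and the radius conversion are illustrative assumptions rather than the USGS implementation.

```python
import numpy as np
from scipy import ndimage

def match_to_standard(band_adjust, band_standard, overlap_mask):
    """Fit a gain/offset so one image's band matches the standard image over
    their overlap (linear least squares), then apply it to the whole band."""
    x = band_adjust[overlap_mask].ravel()
    y = band_standard[overlap_mask].ravel()
    gain, offset = np.polyfit(x, y, 1)                # degree-1 least-squares fit
    return gain * band_adjust + offset

def local_area_stretch(band, radius_pix):
    """Stretch each pixel by the local min/max within a square window,
    a simplified stand-in for the local-area histogram stretch
    (e.g., a 315-m radius corresponds to radius_pix at the mosaic's pixel size)."""
    size = 2 * radius_pix + 1
    lo = ndimage.minimum_filter(band, size=size)
    hi = ndimage.maximum_filter(band, size=size)
    return (band - lo) / np.maximum(hi - lo, 1e-12)
```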

  9. Space Radar Image of Saline Valley, California

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a three-dimensional perspective view of Saline Valley, about 30 km (19 miles) east of the town of Independence, California, created by combining two spaceborne radar images using a technique known as interferometry. Visualizations like this one are helpful to scientists because they clarify the relationships of the different types of surfaces detected by the radar and the shapes of topographic features such as mountains and valleys. The view is looking southwest across Saline Valley. The high peaks in the background are the Inyo Mountains, which rise more than 3,000 meters (10,000 feet) above the valley floor. The dark blue patch near the center of the image is an area of sand dunes. The brighter patches to the left of the dunes are the dry, salty lake beds of Saline Valley. The brown and orange areas are deposits of boulders, gravel and sand known as alluvial fans. The image was constructed by overlaying a color composite radar image on top of a digital elevation map. The radar image was taken by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) on board the space shuttle Endeavour in October 1994. The digital elevation map was produced using radar interferometry, a process in which radar data are acquired on different passes of the space shuttle. The two data passes are compared to obtain elevation information. The elevation data were derived from a 1,500-km-long (930-mile) digital topographic map processed at JPL. Radar image data are draped over the topography to provide the color with the following assignments: red is L-band vertically transmitted, vertically received; green is C-band vertically transmitted, vertically received; and blue is the ratio of C-band vertically transmitted, vertically received to L-band vertically transmitted, vertically received. This image is centered near 36.8 degrees north latitude and 117.7 degrees west longitude. No vertical exaggeration factor has been applied to the data. SIR-C/X-SAR, a joint mission of the German, Italian, and United States space agencies, is part of NASA's Mission to Planet Earth.

  10. Use of airborne imaging spectrometer data to map minerals associated with hydrothermally altered rocks in the northern grapevine mountains, Nevada, and California

    USGS Publications Warehouse

    Kruse, F.A.

    1988-01-01

    Three flightlines of Airborne Imaging Spectrometer (AIS) data, acquired over the northern Grapevine Mountains, Nevada, and California, were used to map minerals associated with hydrothermally altered rocks. The data were processed to remove vertical striping, normalized using an equal-area normalization, and reduced to reflectance relative to an average spectrum derived from the data. An algorithm was developed to automatically calculate the absorption-band parameters (band position, band depth, and band width) for the strongest absorption feature in each pixel. These parameters were mapped into an intensity, hue, saturation (IHS) color system to produce a single color image that summarized the absorption-band information. This image was used to map areas of potential alteration based upon the predicted relationships between the color image and mineral absorption bands. Individual AIS spectra for these areas were then examined to identify specific minerals. Two types of alteration were mapped with the AIS data. Areas of quartz-sericite-pyrite alteration were identified based upon a strong absorption feature near 2.21 µm, a weak shoulder near 2.25 µm, and a weak absorption band near 2.35 µm caused by sericite (fine-grained muscovite). Areas of argillic alteration were defined based on the presence of montmorillonite, identified by a weak to moderate absorption feature near 2.21 µm and the absence of the 2.35 µm band. Montmorillonite could not be identified in mineral mixtures. Calcite and dolomite were identified based on sharp absorption features near 2.34 and 2.32 µm, respectively. Areas of alteration identified using the AIS data corresponded well with areas mapped using field mapping, field reflectance spectra, and laboratory spectral measurements. © 1988.
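
    The per-pixel band parameters (position, depth, and width of the strongest absorption feature) can be sketched with a simple continuum-removal calculation over a pixel spectrum, as below. The straight-line continuum between the spectrum endpoints and the half-depth width estimate are simplifying assumptions, not the exact algorithm of the study.

```python
import numpy as np

def band_parameters(wavelengths, spectrum):
    """Continuum-removed absorption-band position, depth and width for one
    pixel spectrum (wavelengths assumed increasing)."""
    # straight-line continuum between the spectrum endpoints
    continuum = np.interp(wavelengths,
                          [wavelengths[0], wavelengths[-1]],
                          [spectrum[0], spectrum[-1]])
    removed = spectrum / continuum
    i_min = int(np.argmin(removed))                   # deepest point of the feature
    position = wavelengths[i_min]
    depth = 1.0 - removed[i_min]
    # crude full width at half the band depth
    half = 1.0 - depth / 2.0
    below = np.where(removed <= half)[0]
    width = wavelengths[below[-1]] - wavelengths[below[0]] if below.size else 0.0
    return position, depth, width
```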

  11. Multispectral simulation environment for modeling low-light-level sensor systems

    NASA Astrophysics Data System (ADS)

    Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.

    1998-11-01

    Image intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios including military night vision and civilian rescue operations. These sensors utilize the available visible-region photons and an amplification process to produce high contrast imagery. It has been demonstrated that processing techniques can further enhance the quality of this imagery. For example, fusion with matching thermal IR imagery can improve image content when very little visible-region contrast is available. To aid in the improvement of current algorithms and the development of new ones, a high fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low-light-level conditions is desired. This paper describes a modeling environment attempting to meet these criteria by addressing the task as two individual components: (1) prediction of a low-light-level radiance field from an arbitrary scene, and (2) simulation of the output from a low-light-level sensor for a given radiance field. The radiance prediction engine utilized in this environment is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, which is a first-principles-based multi-spectral synthetic image generation model capable of producing an arbitrary number of bands in the 0.28 to 20 micrometer region. The DIRSIG model is utilized to produce high spatial and spectral resolution radiance field images. These images are then processed by a user-configurable multi-stage low-light-level sensor model that applies the appropriate noise and modulation transfer function (MTF) at each stage in the image processing chain. This includes the ability to reproduce common intensifying-sensor artifacts such as saturation and 'blooming.' Additionally, co-registered imagery in other spectral bands may be simultaneously generated for testing fusion and exploitation algorithms. This paper discusses specific aspects of the DIRSIG radiance prediction for low-light-level conditions, including the incorporation of natural and man-made sources, which emphasizes the importance of accurate BRDF. A description of the implementation of each stage in the image processing and capture chain for the LLL model is also presented. Finally, simulated images are presented and qualitatively compared to lab-acquired imagery from a commercial system.
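
    A toy version of such a staged sensor model (MTF, shot noise, saturation, quantization) applied to a simulated radiance field is sketched below. The gain, blur width, and full-well values are illustrative placeholders, and the Gaussian blur stands in for a generic MTF; this is not the multi-stage LLL model described in the paper.

```python
import numpy as np
from scipy import ndimage

def sensor_chain(radiance, gain=5000.0, mtf_sigma_pix=1.2, full_well=2**12 - 1, rng=None):
    """Toy low-light-level sensor stage chain: scale radiance to expected
    photoelectrons, apply an MTF as a Gaussian blur, add Poisson shot noise,
    then saturate and quantize. All parameter values are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    electrons = radiance * gain                           # expected photoelectrons per pixel
    blurred = ndimage.gaussian_filter(electrons, mtf_sigma_pix)
    noisy = rng.poisson(np.maximum(blurred, 0)).astype(float)
    return np.clip(noisy, 0, full_well).astype(np.uint16)  # saturation; 'blooming' not modeled
```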

  12. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Katawas mineral district in Afghanistan: Chapter N in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Katawas mineral district, which has gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©AXA, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). 
Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Katawas) and the WGS84 datum. The final image mosaics are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Katawas study area, one subarea was designated for detailed field investigation (that is, the Gold subarea); this subarea was extracted from the area's image mosaic and is provided as a separate embedded geotiff image.

  13. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the North Bamyan mineral district in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Davis, Philip A.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the North Bamyan mineral district, which has copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for North Bamyan) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the North Bamyan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.

  14. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Tourmaline mineral district in Afghanistan: Chapter J in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Tourmaline mineral district, which has tin deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Tourmaline) and the WGS84 datum. The final image mosaics were subdivided into four overlapping tiles or quadrants because of the large size of the target area. The four image tiles (or quadrants) for the Tourmaline area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.

  15. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Ghazni2 mineral district in Afghanistan: Chapter EE in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2014-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ghazni2 mineral district, which has spectral reflectance anomalies indicative of gold, mercury, and sulfur deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA, 2008, 2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Ghazni2) and the WGS84 datum. The images for the Ghazni2 area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.

  16. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Ahankashan mineral district in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Davis, Philip A.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ahankashan mineral district, which has copper and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008, 2009, 2010),but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Ahankashan) and the WGS84 datum. The final image mosaics were subdivided into five overlapping tiles or quadrants because of the large size of the target area. The five image tiles (or quadrants) for the Ahankashan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
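
    The mosaicking step adjusts each image's band reflectances to those of the standard image by linear least squares over the overlap region. The sketch below shows one way such a gain/offset fit could be computed and applied; the array and function names are hypothetical, and the specific regression form used in the Data Series is not documented here.

        # Hypothetical sketch: fit a linear gain/offset mapping an image's band
        # reflectances onto the standard image, using pixels in their overlap region.
        import numpy as np

        def fit_band_adjustment(standard_band, other_band, overlap_mask):
            """Return (gain, offset) from a least-squares fit of other -> standard."""
            x = other_band[overlap_mask].astype(np.float64)
            y = standard_band[overlap_mask].astype(np.float64)
            A = np.vstack([x, np.ones_like(x)]).T
            gain, offset = np.linalg.lstsq(A, y, rcond=None)[0]
            return gain, offset

        def apply_band_adjustment(band, gain, offset):
            # Bring the band onto the radiometric scale of the standard image.
            return gain * band + offset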

  17. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Ghazni1 mineral district in Afghanistan: Chapter DD in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2014-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ghazni1 mineral district, which has spectral reflectance anomalies indicative of clay, aluminum, gold, silver, mercury, and sulfur deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA, 2008, 2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Ghazni1) and the WGS84 datum. The images for the Ghazni1 area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
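
    The deliverables are three-band composites built from the four AVNIR bands: natural color (blue, green, red) and color infrared (green, red, near infrared). A minimal sketch of assembling the two composites follows, assuming rasterio, a four-band source file in blue-green-red-NIR order, and the conventional display assignment (for color infrared, near infrared shown as red, red as green, green as blue); the file names are placeholders.

        # Hypothetical sketch: build natural-color and color-infrared composites
        # from a four-band stack; band order and file names are assumptions.
        import rasterio

        with rasterio.open("ghazni1_avnir_4band.tif") as src:   # placeholder name
            blue, green, red, nir = src.read()                  # (bands, rows, cols)
            profile = src.profile
            profile.update(count=3)

        with rasterio.open("ghazni1_natural_color.tif", "w", **profile) as dst:
            dst.write(red, 1)    # natural color: red, green, blue display channels
            dst.write(green, 2)
            dst.write(blue, 3)

        with rasterio.open("ghazni1_color_infrared.tif", "w", **profile) as dst:
            dst.write(nir, 1)    # color infrared: NIR shown as red,
            dst.write(red, 2)    # red as green,
            dst.write(green, 3)  # green as blue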

  18. Jupiter

    NASA Image and Video Library

    1998-06-04

    This processed color image of Jupiter was produced in 1990 by the U.S. Geological Survey from a Voyager image captured in 1979. Zones of light-colored, ascending clouds alternate with bands of dark, descending clouds. http://photojournal.jpl.nasa.gov/catalog/PIA00343

  19. System and method for progressive band selection for hyperspectral images

    NASA Technical Reports Server (NTRS)

    Fisher, Kevin (Inventor)

    2013-01-01

    Disclosed herein are systems, methods, and non-transitory computer-readable storage media for progressive band selection for hyperspectral images. A system having a module configured to control a processor to practice the method calculates a virtual dimensionality of a hyperspectral image having multiple bands to determine a quantity Q, the number of bands needed for a threshold level of information, ranks each band based on a statistical measure, selects Q bands from the multiple bands to generate a subset of bands based on the virtual dimensionality, and generates a reduced image based on the subset of bands. This approach can create reduced datasets of full hyperspectral images tailored for individual applications. The system uses a metric specific to a target application to rank the image bands, and then selects the most useful bands. The number of bands selected can be specified manually or calculated from the hyperspectral image's virtual dimensionality.
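
    A minimal sketch of the ranking-and-selection step is shown below. It is not the patented method: band variance stands in for the application-specific statistical measure, and Q is supplied manually rather than estimated from the image's virtual dimensionality.

        # Hypothetical sketch: rank bands by a statistical measure and keep the top Q.
        import numpy as np

        def select_bands(cube, q):
            """cube: (rows, cols, bands) hyperspectral array; returns reduced cube and indices."""
            scores = cube.reshape(-1, cube.shape[-1]).var(axis=0)  # one score per band
            keep = np.sort(np.argsort(scores)[::-1][:q])           # top-Q ranked bands, in order
            return cube[..., keep], keep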

  20. Optical data processing and projected applications of the ERTS-1 imagery covering the 1973 Mississippi River Valley floods

    USGS Publications Warehouse

    Deutsch, Morris; Ruggles, Fred

    1974-01-01

    Flooding along the Mississippi River and some of its tributaries was detected by the multispectral scanner (MSS) on the Earth Resources Technology Satellite (ERTS-1) on at least three orbits during the spring of 1973. The ERTS data provided the first opportunity for mapping the regional extent of flooding at the time of the imagery. Special optical data processing techniques were used to produce a variety of multispectral color composites enhancing flood-plain details. One of these, a 2-color composite of near-infrared bands 6 and 7, was enlarged and registered to 1:250,000-scale topographic maps and used as the basis for preparation of flood image maps. Two specially filtered 3-color composites of MSS bands 5, 6, and 7 and 4, 5, and 7 were prepared to aid in the interpretation of the data. The extent of the flooding was vividly depicted on a single image by 2-color temporal composites produced on the additive-color viewer using band 7 flood data superimposed on pre-flood band 7 images. On May 24, when the floodwaters at St. Louis receded to bankfull stage, imagery was again obtained by ERTS. Analysis of temporal data composites of the pre-flood and post-flood band 7 images indicates that changes in surface reflectance characteristics caused by the flooding can be delineated, thus making it possible to map the overall area flooded without the necessity of a real-time system to track and image the peak flood waves. Regional planning and disaster relief agencies such as the Corps of Engineers, Office of Emergency Preparedness, Soil Conservation Service, interstate river basin commissions and state agencies, as well as private lending and insurance institutions, have indicated strong potential applications for ERTS image-maps of flood-prone areas.
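
    The temporal composite described above was produced optically on an additive-color viewer. A digital analogue is sketched below: pre-flood and flood-date band 7 scenes are placed in separate color channels so that water, which is dark in the near infrared, appears as a color shift wherever it is present on only one date. Co-registered, 8-bit inputs are assumed.

        # Digital analogue of the optical two-color temporal composite (illustration only).
        import numpy as np

        def temporal_composite(band7_preflood, band7_flood):
            """Return an RGB image highlighting newly inundated areas."""
            rgb = np.zeros(band7_preflood.shape + (3,), dtype=np.uint8)
            rgb[..., 0] = band7_preflood   # red channel: pre-flood brightness
            rgb[..., 1] = band7_flood      # green channel: flood-date brightness
            # Pixels bright before the flood but dark (water) during it appear red.
            return rgb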

  1. The development of machine technology processing for earth resource survey

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A.

    1970-01-01

    The following technologies are considered for automatic processing of earth resources data: (1) registration of multispectral and multitemporal images, (2) digital image display systems, (3) data system parameter effects on satellite remote sensing systems, and (4) data compression techniques based on spectral redundancy. The importance of proper spectral band and compression algorithm selections is pointed out.

  2. Analysis and Evaluation of the LANDSAT-4 MSS and TM Sensors and Ground Data Processing Systems: Early Results

    NASA Technical Reports Server (NTRS)

    Bernstein, R.; Lotspiech, J. B.

    1985-01-01

    The MSS and TM sensor performances were evaluated by studying both the sensors and the characteristics of the data. Information content analysis, image statistics, band-to-band registration, the presence of failed or failing detectors, and sensor resolution are discussed. The TM data were explored from the point of view of adequacy of the ground processing and improvements that could be made to compensate for sensor problems and deficiencies. Radiometric correction processing, compensation for a failed detector, and geometric correction processing are also considered.

  3. A copyright protection scheme for digital images based on shuffled singular value decomposition and visual cryptography.

    PubMed

    Devi, B Pushpa; Singh, Kh Manglem; Roy, Sudipta

    2016-01-01

    This paper proposes a new watermarking algorithm based on shuffled singular value decomposition and visual cryptography for copyright protection of digital images. It generates the ownership and identification shares of the image based on visual cryptography. It decomposes the image into low- and high-frequency sub-bands. The low-frequency sub-band is shuffled and then divided into blocks of the same size, and the singular value decomposition is applied to each randomly selected block. Shares are generated by comparing one of the elements in the first column of the left orthogonal matrix with its corresponding element in the right orthogonal matrix of the singular value decomposition of the block of the low-frequency sub-band. The experimental results show that the proposed scheme clearly verifies the copyright of the digital images and is robust against several image-processing attacks. Comparison with other related visual cryptography-based algorithms reveals that the proposed method gives better performance. The proposed method is especially resilient against the rotation attack.
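
    A minimal sketch of the block-wise comparison at the heart of the share generation is given below. The sub-band decomposition, block shuffling, and visual-cryptography share construction are omitted, and the choice of which element of the first columns to compare (index k) is an assumption, not taken from the paper.

        # Hypothetical sketch of the per-block comparison step only.
        import numpy as np

        def block_feature_bit(block, k=1):
            """Compare an element of the first column of U with the matching
            element of V for one image block (for example, an 8x8 block),
            yielding one feature bit."""
            u, s, vt = np.linalg.svd(block.astype(np.float64), full_matrices=False)
            v = vt.T
            return 1 if abs(u[k, 0]) >= abs(v[k, 0]) else 0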

  4. Micro-optics for simultaneous multi-spectral imaging applied to chemical/biological and IED detection

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele

    2012-06-01

    Using diffractive micro-lenses configured in an array and placed in close proximity to the focal plane array will enable a small, compact, simultaneous multispectral imaging camera. This approach can be applied to spectral regions from the ultraviolet (UV) to the long-wave infrared (LWIR). The number of simultaneously imaged spectral bands is determined by the number of individually configured diffractive optical micro-lenses (lenslets) in the array. Each lenslet images at a different wavelength determined by the blaze, which is set at the time of manufacture based on the application. In addition, modulation of the focal length of the lenslet array with piezoelectric or electrostatic actuation will enable spectral band fill-in, allowing hyperspectral imaging. Using the lenslet array with dual-band detectors will increase the number of simultaneous spectral images by a factor of two when utilizing multiple diffraction orders. Configurations and concept designs will be presented for detection applications for biological/chemical agents, buried IEDs, and reconnaissance. The simultaneous detection of multiple spectral images in a single frame of data enhances the image processing capability by eliminating temporal differences between colors and enabling a handheld instrument that is insensitive to motion.

  5. Target discrimination of man-made objects using passive polarimetric signatures acquired in the visible and infrared spectral bands

    NASA Astrophysics Data System (ADS)

    Lavigne, Daniel A.; Breton, Mélanie; Fournier, Georges; Charette, Jean-François; Pichette, Mario; Rivet, Vincent; Bernier, Anne-Pier

    2011-10-01

    Surveillance operations and search and rescue missions regularly exploit electro-optic imaging systems to detect targets of interest in both the civilian and military communities. By incorporating the polarization of light as supplementary information to such electro-optic imaging systems, it is possible to increase their target discrimination capabilities, considering that man-made objects are known to depolarize light in a different manner than natural backgrounds. As electromagnetic radiation emitted and reflected from a smooth surface observed near a grazing angle becomes partially polarized in the visible and infrared wavelength bands, additional information about the shape, roughness, shading, and surface temperatures of difficult targets can be extracted by effectively processing such reflected/emitted polarized signatures. This paper presents a set of polarimetric image processing algorithms devised to extract meaningful information from a broad range of man-made objects. Passive polarimetric signatures are acquired in the visible, shortwave infrared, midwave infrared, and longwave infrared bands using a fully automated imaging system developed at DRDC Valcartier. A fusion algorithm is used to enable the discrimination of some objects lying in shadowed areas. Performance metrics, derived from the computed Stokes parameters, characterize the degree of polarization of man-made objects. Field experiments conducted during winter and summer demonstrate: 1) the utility of the imaging system to collect polarized signatures of different objects in the visible and infrared spectral bands, and 2) the enhanced performance of target discrimination and fusion algorithms to exploit the polarized signatures of man-made objects against cluttered backgrounds.
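
    The degree-of-polarization metrics mentioned above derive from the Stokes parameters. The sketch below is the textbook computation of the linear Stokes parameters and the degree of linear polarization from four polarizer-angle images (0, 45, 90, and 135 degrees); it is not the specific processing pipeline of the DRDC Valcartier system.

        # Standard linear Stokes-parameter computation from four polarizer-angle images.
        import numpy as np

        def degree_of_linear_polarization(i0, i45, i90, i135):
            s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
            s1 = i0 - i90                        # horizontal vs. vertical component
            s2 = i45 - i135                      # +45 vs. -45 degree component
            return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-6)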

  6. An Overview of the Landsat Data Continuity Mission

    NASA Technical Reports Server (NTRS)

    Irons, James R.; Dwyer, John L.

    2010-01-01

    The advent of the Landsat Data Continuity Mission (LDCM), currently with a launch readiness date of December 2012, will see evolutionary changes in the Landsat data products available from the U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center. The USGS initiated a revolution in 2009 when EROS began distributing Landsat data products at no cost to requestors, in contrast to the past practice of charging the cost of fulfilling a request; that is, charging $600 per Landsat scene. To implement this drastic change, EROS terminated data processing options for requestors and began to produce all data products using a consistent processing recipe. EROS plans to continue this practice for the LDCM and will require new algorithms to process data from the LDCM sensors. All previous Landsat satellites flew multispectral scanners to collect image data of the global land surface. Additionally, Landsats 4, 5, and 7 flew sensors that acquired imagery for both reflective spectral bands and a single thermal band. In contrast, the LDCM will carry two pushbroom sensors: the Operational Land Imager (OLI) for reflective spectral bands and the Thermal InfraRed Sensor (TIRS) for two thermal bands. EROS is developing the ground data processing system that will both calibrate and correct the data from the thousands of detectors employed by the pushbroom sensors and that will also combine the data from the two sensors to create a single data product with registered data for all of the OLI and TIRS bands.

  7. Infrared hyperspectral imaging sensor for gas detection

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele

    2000-11-01

    A small, lightweight, man-portable imaging spectrometer has many applications: gas leak detection, flare analysis, threat warning, and chemical agent detection, to name a few. With support from the US Air Force and Navy, Pacific Advanced Technology has developed a small, man-portable hyperspectral imaging sensor with an embedded DSP processor for real-time processing that is capable of remotely imaging various targets such as gas plumes, flames, and camouflaged targets. Based upon their spectral signatures, the species and concentration of gases can be determined. This system has been field tested at numerous places including White Mountain, CA, Edwards AFB, and Vandenberg AFB. Recently, an evaluation of the system for gas detection was performed. This paper presents these results. The system uses a conventional infrared camera fitted with a diffractive optic that images as well as disperses the incident radiation to form spectral images that are collected in band-sequential mode. Because the diffractive optic performs both imaging and spectral filtering, the lens system consists of only a single element that is small, lightweight, and robust, thus allowing man portability. The number of spectral bands is programmable such that only those bands of interest need to be collected. The system is entirely passive and is therefore easily used in covert operations. Currently, Pacific Advanced Technology is working on the next generation of this camera system, which will have both an embedded processor and an embedded digital signal processor in a small handheld camera configuration. This will allow the implementation of signal and image processing algorithms for gas detection and identification in real time. This paper presents field test data on gas detection and identification as well as discusses the signal and image processing used to enhance the gas visibility. Flow rates as low as 0.01 cubic feet per minute have been imaged with this system.

  8. Processing and analysis of commercial satellite image data of the nuclear accident near Chernobyl, U.S.S.R.

    USGS Publications Warehouse

    Sadowski, Franklin G.; Covington, Steven J.

    1987-01-01

    Advanced digital processing techniques were applied to Landsat-5 Thematic Mapper (TM) data and SPOT high-resolution visible (HRV) panchromatic data to maximize the utility of images of a nuclear powerplant emergency at Chernobyl in the Soviet Ukraine. The images demonstrate the unique interpretive capabilities provided by the numerous spectral bands of the Thematic Mapper and the high spatial resolution of the SPOT HRV sensor.

  9. Automatic rice crop height measurement using a field server and digital image processing.

    PubMed

    Sritarapipat, Tanakorn; Rakwatin, Preesan; Kasetkasem, Teerasit

    2014-01-07

    Rice crop height is an important agronomic trait linked to plant type and yield potential. This research developed an automatic image processing technique to detect rice crop height based on images taken by a digital camera attached to a field server. The camera acquires rice paddy images daily at a consistent time of day. The images include the rice plants and a marker bar used to provide a height reference. The rice crop height can be indirectly measured from the images by measuring the height of the marker bar compared to the height of the initial marker bar. Four digital image processing steps are employed to automatically measure the rice crop height: band selection, filtering, thresholding, and height measurement. Band selection is used to remove redundant features. Filtering extracts significant features of the marker bar. The thresholding method is applied to separate objects and boundaries of the marker bar versus other areas. The marker bar is detected and compared with the initial marker bar to measure the rice crop height. Our experiment used a field server with a digital camera to continuously monitor a rice field located in Suphanburi Province, Thailand. The experimental results show that the proposed method measures rice crop height effectively, with no human intervention required.
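
    A minimal sketch of the thresholding and height-measurement steps is given below: the region containing the marker bar is thresholded, the bar's visible extent in pixels is measured, and crop height is inferred from how much of the bar the crop now hides relative to the bar's initial, unobstructed pixel height. The threshold value, scale factor, and the assumption that the bar is brighter than its background are illustrative, not taken from the paper.

        # Hypothetical sketch of marker-bar thresholding and height measurement.
        import numpy as np

        def crop_height_cm(marker_roi_band, initial_bar_px, cm_per_px, threshold=128):
            """marker_roi_band: 2-D array cropped around the marker bar."""
            bar_mask = marker_roi_band > threshold            # bright bar vs. background
            rows_with_bar = np.where(bar_mask.any(axis=1))[0]
            visible_px = rows_with_bar.max() - rows_with_bar.min() + 1
            hidden_px = max(initial_bar_px - visible_px, 0)   # part occluded by the crop
            return hidden_px * cm_per_px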

  10. Salish Kootenai College Student Internship With the Landsat Data Continuity Mission: A Student's Perspective

    NASA Astrophysics Data System (ADS)

    Fisher, R.

    2004-12-01

    Hello my name is Richard D. Fisher. I was very fortunate to be picked to travel to Washington DC in July 2004, to complete a five week internship at NASA. My internship project is located on the Flathead Reservation and the area is called the Jocko-Spring Creek. My project was to complete land cover classification and land cover change detection. In order for me to accomplish my goals I had to learn how to use two new computer programs, MultiSpec and ENVI (Environment for Viewing Images) for remote sensing processing. Computer use does not come easy to me because, I lack the training most people take for granted. However, I did not let this lack of training get me down. The first step was to acquire two Landsat images. The first image was from the Landsat 7, landsat satellite in 1999 and the other was from the Landsat 5 satellite in 1987. The path row for my study area is 41-27. Once the images were acquired I had to combine the different color bands to make one image and perform a blue band correction. The blue band correction takes the blue haze out of the images making them clearer. The visible bands are blue, green, red and three bands of infrared. Once these color bands are together you can change the color of the image to help you look for different features, because each different color band will show you something different. After I put the images together I used ENVI to do the land cover classifications. The next step was to subset my project area to a smaller size. I cut both images in exactly the same coordinates. With help from my NASA mentor scientist, Rich Irish from the Landsat Data Continuity Mission, and I used photo shop Adobe PhotoShop to do the subsetting of both images. We were able to then link the two images together using ENVI software. After that I started to analyze the different pixels and their colors. I classified each image starting with the areas I knew from the fieldwork. After the classifications on both images were complete and I felt confident in the final classification, I needed to work with the shadows from the mountains in the image. We performed change detection from subtracting the two images. These computer programs are fun to use and were very useful especially when combining Landsat images. I would recommend these programs to anyone.

  11. Space Radar Image of Owens Valley, California

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a three-dimensional perspective view of Owens Valley, near the town of Bishop, California that was created by combining two spaceborne radar images using a technique known as interferometry. Visualizations like this one are helpful to scientists because they clarify the relationships of the different types of surfaces detected by the radar and the shapes of the topographic features such as mountains and valleys. The view is looking southeast along the eastern edge of Owens Valley. The White Mountains are in the center of the image, and the Inyo Mountains loom in the background. The high peaks of the White Mountains rise more than 3,000 meters (10,000 feet) above the valley floor. The runways of the Bishop airport are visible at the right edge of the image. The meandering course of the Owens River and its tributaries appear light blue on the valley floor. Blue areas in the image are smooth, yellow areas are rock outcrops, and brown areas near the mountains are deposits of boulders, gravel and sand known as alluvial fans. The image was constructed by overlaying a color composite radar image on top of a digital elevation map. The radar data were taken by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) on board the space shuttle Endeavour in October 1994. The digital elevation map was produced using radar interferometry, a process in which radar data are acquired on different passes of the space shuttle. The two data passes are compared to obtain elevation information. The elevation data were derived from a 1,500-km-long (930-mile) digital topographic map processed at JPL. Radar image data are draped over the topography to provide the color with the following assignments: red is L-band vertically transmitted, vertically received; green is C-band vertically transmitted, vertically received; and blue is the ratio of C-band vertically transmitted, vertically received to L-band vertically transmitted, vertically received. This image is centered near 37.4 degrees north latitude and 118.3 degrees west longitude. No vertical exaggeration factor has been applied to the data. SIR-C/X-SAR, a joint mission of the German, Italian, and the United States space agencies, is part of NASA's Mission to Planet Earth.
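
    The color assignment stated above (red: L-band VV, green: C-band VV, blue: the ratio of C-band VV to L-band VV) can be reproduced from two co-registered backscatter arrays as sketched below; the per-channel min-max scaling to 8 bits is an assumption made only for display.

        # Sketch of the stated SIR-C color assignment from two backscatter arrays.
        import numpy as np

        def scale(a):
            a = a.astype(np.float64)
            return np.uint8(255 * (a - a.min()) / max(a.max() - a.min(), 1e-9))

        def sir_c_composite(l_vv, c_vv):
            ratio = c_vv / np.maximum(l_vv, 1e-9)   # blue channel: C-VV / L-VV
            return np.dstack([scale(l_vv), scale(c_vv), scale(ratio)])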

  12. Narrow band imaging versus autofluorescence imaging for head and neck squamous cell carcinoma detection: a prospective study.

    PubMed

    Ni, X-G; Zhang, Q-Q; Wang, G-Q

    2016-11-01

    This study aimed to compare the diagnostic effectiveness of narrow band imaging and autofluorescence imaging for malignant laryngopharyngeal tumours. Between May 2010 and October 2010, 50 consecutive patients with suspected laryngopharyngeal tumour underwent endoscopic laryngopharynx examination. The morphological characteristics of laryngopharyngeal lesions were analysed using high performance endoscopic systems equipped with narrow band imaging and autofluorescence imaging modes. The diagnostic effectiveness of white light image, narrow band imaging and autofluorescence imaging endoscopy for benign and malignant laryngopharyngeal lesions was evaluated. Under narrow band imaging endoscopy, the superficial microvessels of squamous cell carcinomas appeared as dark brown spots or twisted cords. Under autofluorescence imaging endoscopy, malignant lesions appeared as bright purple. The sensitivity of malignant lesion diagnosis was not significantly different between narrow band imaging and autofluorescence imaging modes, but was better than for white light image endoscopy (χ2 = 12.676, p = 0.002). The diagnostic specificity was significantly better in narrow band imaging mode than in both autofluorescence imaging and white light imaging mode (χ2 = 8.333, p = 0.016). Narrow band imaging endoscopy is the best option for the diagnosis and differential diagnosis of laryngopharyngeal tumours.

  13. Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This DS consists of the locally enhanced ALOS image mosaics for each of the 24 mineral project areas (referred to herein as areas of interest), whose locality names, locations, and main mineral occurrences are shown on the index map of Afghanistan (fig. 1). ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency, but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
PRISM image orthorectification for one-half of the target areas was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using SPARKLE logic, which is described in Davis (2006). Each of the four-band images within each resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a specified radius that was usually 500 m. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (either 41 or 42) and the WGS84 datum. Most final image mosaics were subdivided into overlapping tiles or quadrants because of the large size of the target areas. The image tiles (or quadrants) for each area of interest are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Approximately one-half of the study areas have at least one subarea designated for detailed field investigations; the subareas were extracted from the area's image mosaic and are provided as separate embedded geotiff images.
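
    The radiance-to-reflectance step follows Davis (2006), which is not reproduced here. For orientation only, the sketch below shows the standard top-of-atmosphere reflectance conversion that such a step typically resembles; the exoatmospheric irradiance and Earth-Sun distance values are placeholders to be taken from the sensor's calibration documentation.

        # Generic top-of-atmosphere reflectance conversion (not the Davis, 2006 method).
        import numpy as np

        def toa_reflectance(radiance, esun, sun_elev_deg, earth_sun_dist_au=1.0):
            """radiance in W m-2 sr-1 um-1; esun is band exoatmospheric irradiance."""
            sun_zenith = np.deg2rad(90.0 - sun_elev_deg)
            return (np.pi * radiance * earth_sun_dist_au**2) / (esun * np.cos(sun_zenith))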

  14. Space Radar Image of Kilauea, Hawaii

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Data acquired on April 13, 1994 and on October 4, 1994 from the X-band Synthetic Aperture Radar on board the space shuttle Endeavour were used to generate interferometric fringes, which were overlaid on the X-SAR image of Kilauea. The volcano is centered in this image at 19.58 degrees north latitude and 155.55 degrees west longitude. The image covers about 9 kilometers by 13 kilometers (5.6 miles by 8 miles). The X-band fringes correspond clearly to the expected topographic image. The yellow line indicates the area below which was used for the three-dimensional image using altitude lines. The yellow rectangular frame fences the area for the final topographic image. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V.(DLR), the major partner in science, operations and data processing of X-SAR. The Instituto Ricerca Elettromagnetismo Componenti Elettronici (IRECE) at the University of Naples was a partner in interferometry analysis.

  15. VIIRS Reflective Solar Band Radiometric and Stability Evaluation Using Deep Convective Clouds

    NASA Technical Reports Server (NTRS)

    Chang, Tiejun; Xiong, Xiaoxiong; Mu, Qiaozhen

    2016-01-01

    This work takes advantage of the stable distribution of deep convective cloud (DCC) reflectance measurements to assess the calibration stability and detector difference in Visible Infrared Imaging Radiometer Suite (VIIRS) reflective bands. VIIRS Sensor Data Records (SDRs) from February 2012 to June 2015 are utilized to analyze the long-term trending, detector difference, and half angle mirror (HAM) side difference. VIIRS has two thermal emissive bands with coverage crossing 11 microns for DCC pixel identification. The comparison of the results of these two processing bands is one of the indicators of analysis reliability. The long-term stability analysis shows downward trends (up to approximately 0.4 percent per year) for the visible and near-infrared bands and upward trends (up to 0.5 percent per year) for the short- and mid-wave infrared bands. The detector difference for each band is calculated as the difference relative to the average reflectance over all detectors. Except for the slightly greater than 1 percent difference in the two bands at 1610 nm, the detector difference is less than 1 percent for the other solar reflective bands. The detector differences show increasing trends for some short-wave bands with center wavelengths from 400 to 600 nm and remain unchanged for the bands with longer center wavelengths. The HAM side difference is insignificant and stable. Those short-wave bands from 400 to 600 nm also have a relatively larger HAM side difference, up to 0.25 percent. Comparing the striped images from the SDR and the smooth images after the correction validates the analyses of detector difference and HAM side difference. These analyses are very helpful for VIIRS calibration improvement and thus enhance product quality.
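
    A minimal sketch of the detector-difference metric described above: each detector's mean DCC reflectance is compared with the average over all detectors and expressed in percent. The grouping of samples by detector index is an assumption about how the SDR samples are organized.

        # Hypothetical sketch: per-detector difference relative to the all-detector mean.
        import numpy as np

        def detector_difference_percent(reflectance, detector_index, n_detectors):
            """reflectance, detector_index: 1-D arrays of matched DCC samples."""
            means = np.array([reflectance[detector_index == d].mean()
                              for d in range(n_detectors)])
            return 100.0 * (means - means.mean()) / means.mean()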

  16. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Kandahar mineral district in Afghanistan: Chapter Z in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kandahar mineral district, which has bauxite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2006, 2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element.
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Kandahar) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Kandahar area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Kandahar study area, two subareas were designated for detailed field investigations (that is, the Obatu-Shela and Sekhab-Zamto Kalay subareas); these subareas were extracted from the area's image mosaic and are provided as separate embedded geotiff images.

  17. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Khanneshin mineral district in Afghanistan: Chapter A in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Khanneshin mineral district, which has uranium, thorium, rare-earth-element, and apatite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Khanneshin) and the WGS84 datum. The final image mosaics were subdivided into nine overlapping tiles or quadrants because of the large size of the target area. The nine image tiles (or quadrants) for the Khanneshin area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Khanneshin study area, one subarea was designated for detailed field investigations (that is, the Khanneshin volcano subarea); this subarea was extracted from the area's image mosaic and is provided as separate embedded geotiff images.
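
    The tiff world file (.tfw) mentioned above is a plain-text sidecar of six lines: x pixel size, row rotation, column rotation, negative y pixel size, and the x and y map coordinates of the center of the upper-left pixel. The sketch below writes one for a 2.5-m UTM product; the file name and coordinates are placeholders, not values from this Data Series.

        # Write a six-line tiff world file for a north-up, 2.5-m pixel grid.
        def write_tfw(path, pixel_size, ulx_center, uly_center):
            lines = [pixel_size, 0.0, 0.0, -pixel_size, ulx_center, uly_center]
            with open(path, "w") as f:
                f.write("\n".join(f"{v:.6f}" for v in lines) + "\n")

        write_tfw("khanneshin_tile1.tfw", 2.5, 650001.25, 3400998.75)  # placeholder values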

  18. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Farah mineral district in Afghanistan: Chapter FF in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2014-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Farah mineral district, which has spectral reflectance anomalies indicative of copper, zinc, lead, silver, and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA, 2007, 2008, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Farah) and the WGS84 datum. The final image mosaics were subdivided into four overlapping tiles or quadrants because of the large size of the target area. The four image tiles (or quadrants) for the Farah area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Farah study area, five subareas were designated for detailed field investigations (that is, the FarahA through FarahE subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
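
As an illustration of the kind of local-area stretch described above (each band's picture elements rescaled using the values within a fixed radius), the following Python sketch applies a per-pixel min/max stretch over a circular neighborhood. It is a generic stand-in, not the Davis (2007) algorithm; the function name and the 200-pixel radius (500 m at 2.5-m resolution) are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def local_area_stretch(band, radius_px=200):
    """Rescale each pixel to 0-255 using the min/max within a circular window.

    radius_px=200 corresponds to a 500-m radius at 2.5-m pixel size (an
    assumption for illustration); large footprints make this sketch slow.
    """
    band = np.asarray(band, dtype=np.float64)
    y, x = np.ogrid[-radius_px:radius_px + 1, -radius_px:radius_px + 1]
    footprint = x**2 + y**2 <= radius_px**2              # circular neighborhood
    local_min = ndimage.minimum_filter(band, footprint=footprint)
    local_max = ndimage.maximum_filter(band, footprint=footprint)
    span = np.maximum(local_max - local_min, 1e-6)        # avoid divide-by-zero
    return np.clip((band - local_min) / span * 255.0, 0, 255).astype(np.uint8)
```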

  19. FIR filters for hardware-based real-time multi-band image blending

    NASA Astrophysics Data System (ADS)

    Popovic, Vladan; Leblebici, Yusuf

    2015-02-01

    Creating panoramic images has become a popular feature in modern smartphones, tablets, and digital cameras. A user can create a 360-degree field-of-view photograph from only a few images. The quality of the resulting image depends on the number of source images, their brightness, and the algorithm used to stitch and blend them. One algorithm that provides excellent results in terms of background color uniformity and reduction of ghosting artifacts is multi-band blending. The algorithm relies on decomposing the image into multiple frequency bands using a dyadic filter bank, so the results are also highly dependent on the filter bank used. In this paper we analyze the performance of the FIR filters used for multi-band blending. We present a set of five filters that showed the best results in both the literature and our experiments: a Gaussian filter, biorthogonal wavelets, and custom-designed maximally flat and equiripple FIR filters. The filter comparison is based on several no-reference image-quality metrics. We conclude that the 5/3 biorthogonal wavelet produces the best result on average, especially considering its short length. Furthermore, we propose a real-time FPGA implementation of the blending algorithm using a 2D non-separable systolic filtering scheme. Its pipelined architecture requires no hardware multipliers and achieves very high operating frequencies; the implemented system processes 1080p (1920×1080) images at 91 frames per second.
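
To make the blending approach concrete, the sketch below performs two-image multi-band (Laplacian-pyramid) blending with a soft mask, using a 5-tap binomial FIR kernel as the dyadic low-pass filter. The paper compares several filters (Gaussian, 5/3 biorthogonal wavelet, maximally flat, equiripple); the kernel, function names, and level count here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

KERNEL = np.outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1]) / 256.0  # 5-tap binomial (2D)

def down(img):
    """Low-pass filter and decimate by 2 in both directions."""
    return ndimage.convolve(img, KERNEL, mode="reflect")[::2, ::2]

def up(img, shape):
    """Zero-insert to `shape` and interpolate with the same kernel (x4 gain)."""
    out = np.zeros(shape)
    out[::2, ::2] = img
    return ndimage.convolve(out, 4.0 * KERNEL, mode="reflect")

def blend(a, b, mask, levels=4):
    """Multi-band blend of float images a and b using mask values in [0, 1]."""
    la, lb, gm = [], [], [mask.astype(float)]
    for _ in range(levels):
        a2, b2 = down(a), down(b)
        la.append(a - up(a2, a.shape))     # Laplacian band of a at this level
        lb.append(b - up(b2, b.shape))     # Laplacian band of b at this level
        gm.append(down(gm[-1]))            # Gaussian pyramid of the mask
        a, b = a2, b2
    out = gm[-1] * a + (1.0 - gm[-1]) * b  # blend the coarsest residual
    for i in reversed(range(levels)):
        band = gm[i] * la[i] + (1.0 - gm[i]) * lb[i]
        out = up(out, band.shape) + band
    return out
```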

  20. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Herat mineral district in Afghanistan: Chapter T in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Herat mineral district, which has barium and limestone deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 1,000-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Herat) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Herat area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Herat study area, one subarea was designated for detailed field investigations (that is, the Barium-Limestone subarea); this subarea was extracted from the area's image mosaic and is provided as separate embedded geotiff images.
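
The per-band radiometric adjustment described above amounts to fitting a linear mapping between an image and the standard image over their overlap. The sketch below shows a minimal least-squares version of that idea; it is a generic illustration, and the sampling and outlier handling of the exact Davis (2006) procedure may differ.

```python
import numpy as np

def fit_band_adjustment(overlap_img, overlap_std):
    """Fit reflectance_std ~ gain * reflectance_img + offset over the overlap."""
    x, y = overlap_img.ravel(), overlap_std.ravel()
    valid = np.isfinite(x) & np.isfinite(y)
    A = np.column_stack([x[valid], np.ones(valid.sum())])
    (gain, offset), *_ = np.linalg.lstsq(A, y[valid], rcond=None)
    return gain, offset

def adjust_band(band, gain, offset):
    """Apply the fitted linear adjustment to the whole band."""
    return gain * band + offset
```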

  1. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Kharnak-Kanjar mineral district in Afghanistan: Chapter K in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kharnak-Kanjar mineral district, which has mercury deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 1,000-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Kharnak-Kanjar) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Kharnak-Kanjar area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Kharnak-Kanjar study area, three subareas were designated for detailed field investigations (that is, the Koh-e-Katif Passaband, Panjshah-Mullayan, and Sahebdad-Khanjar subareas); these subareas were extracted from the area's image mosaic and are provided as separate embedded geotiff images.
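
The USGS automated control-point algorithm itself is not reproduced in these abstracts. As a generic stand-in for estimating the offset between overlapping images, the sketch below uses phase correlation to recover a whole-pixel translation, which is one common way to check that coregistration is within one picture element.

```python
import numpy as np

def phase_correlation_shift(ref, moving):
    """Estimate the (row, col) translation aligning `moving` to `ref`, in whole pixels."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))
    cross /= np.maximum(np.abs(cross), 1e-12)          # normalize to pure phase
    corr = np.fft.ifft2(cross).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    dims = np.array(corr.shape, dtype=float)
    peak[peak > dims / 2] -= dims[peak > dims / 2]      # wrap to signed offsets
    return tuple(peak)
```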

  2. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Haji-Gak mineral district in Afghanistan: Chapter C in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Haji-Gak mineral district, which has iron ore deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA,2006,2007), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then co-registered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image-coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Haji-Gak) and the WGS84 datum. The final image mosaics were subdivided into three overlapping tiles or quadrants because of the large size of the target area. The three image tiles (or quadrants) for the Haji-Gak area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Haji-Gak study area, three subareas were designated for detailed field investigations (that is, the Haji-Gak Prospect, Farenjal, and NE Haji-Gak subareas); these subareas were extracted from the area's image mosaic and are provided as separate embedded geotiff images.
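
The SPARKLE resolution-enhancement logic is described in Davis (2006) and is not reproduced here. Purely as an illustration of raising a 10-m multispectral band onto the 2.5-m panchromatic grid, the sketch below upsamples the band and modulates it by the ratio of the panchromatic image to its own low-pass version (a simple ratio-style pan-sharpen); the function name, scale factor, and smoothing width are assumptions.

```python
import numpy as np
from scipy import ndimage

def pan_sharpen_band(ms_band, pan, scale=4, sigma=2.0):
    """Upsample one multispectral band to the panchromatic grid and sharpen it."""
    ms_up = ndimage.zoom(np.asarray(ms_band, float), scale, order=1)  # 10 m -> 2.5 m
    h = min(ms_up.shape[0], pan.shape[0])
    w = min(ms_up.shape[1], pan.shape[1])
    pan_low = ndimage.gaussian_filter(np.asarray(pan, float), sigma)  # pan at coarse detail
    ratio = pan[:h, :w] / np.maximum(pan_low[:h, :w], 1e-6)           # spatial-detail modulation
    return ms_up[:h, :w] * ratio
```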

  3. The Far Ultra-Violet Imager on the Icon Mission

    NASA Astrophysics Data System (ADS)

    Mende, S. B.; Frey, H. U.; Rider, K.; Chou, C.; Harris, S. E.; Siegmund, O. H. W.; England, S. L.; Wilkins, C.; Craig, W.; Immel, T. J.; Turin, P.; Darling, N.; Loicq, J.; Blain, P.; Syrstad, E.; Thompson, B.; Burt, R.; Champagne, J.; Sevilla, P.; Ellis, S.

    2017-10-01

    The ICON Far UltraViolet (FUV) imager contributes to the ICON science objectives by providing remote sensing measurements of the daytime and nighttime atmosphere/ionosphere. During sunlit atmospheric conditions, ICON FUV images the limb altitude profile in the shortwave (SW) band at 135.6 nm and the longwave (LW) band at 157 nm perpendicular to the satellite motion to retrieve the atmospheric O/N2 ratio. In conditions of atmospheric darkness, ICON FUV measures the 135.6 nm recombination emission of O+ ions, which is used to compute the nighttime ionospheric altitude distribution. The imager is a Czerny-Turner spectrographic imager with two exit slits and corresponding back-imager cameras that produce two independent images in separate wavelength bands on two detectors. All observations will be processed as limb altitude profiles. In addition, the ionospheric 135.6 nm data will be processed as longitude and latitude spatial maps to obtain images of ion distributions around regions of equatorial spread F. The ICON FUV optic axis is pointed 20 degrees below local horizontal, and a steering mirror allows the field of view to be steered up to 30 degrees forward and aft to keep the local magnetic meridian in the field of view. The detectors are microchannel plate (MCP) intensified FUV tubes with the phosphor fiber-optically coupled to charge-coupled devices (CCDs). The dual-stack MCPs amplify the photoelectron signals to overcome the CCD noise, and the rapidly scanned frames are co-added to digitally create 12-second integrated images. Digital on-board signal processing is used to compensate for geometric distortion and satellite motion and to achieve data compression. The instrument was originally aligned in visible light by using a special grating and visible cameras. Final alignment, functional and environmental testing, and calibration were performed in a large vacuum chamber with a UV source. The test and calibration program showed that ICON FUV meets its design requirements and is ready to be launched on the ICON spacecraft.
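
The co-addition of rapidly scanned frames into 12-second integrations described above can be illustrated with a short sketch; the frame rate and array layout below are assumptions, and the on-board distortion and motion compensation are omitted.

```python
import numpy as np

def coadd_frames(frames, frame_rate_hz=10.0, integration_s=12.0):
    """Sum consecutive frames (n_frames, rows, cols) into fixed-length integrations."""
    per_integration = int(round(frame_rate_hz * integration_s))  # assumed frame rate
    n_out = frames.shape[0] // per_integration
    usable = frames[: n_out * per_integration]
    return usable.reshape(n_out, per_integration, *frames.shape[1:]).sum(axis=1)
```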

  4. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the North Takhar mineral district in Afghanistan: Chapter D in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the North Takhar mineral district, which has placer gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for North Takhar) and the WGS84 datum. The final image mosaics were subdivided into nine overlapping tiles or quadrants because of the large size of the target area. The nine image tiles (or quadrants) for the North Takhar area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
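
The radiance and relative-reflectance conversion follows Davis (2006), which is not reproduced in this abstract. The sketch below shows the standard form such a conversion usually takes: digital numbers to at-sensor radiance via per-band gain and offset, then radiance to top-of-atmosphere reflectance from solar irradiance, Earth-Sun distance, and solar elevation; all coefficient values are placeholders.

```python
import numpy as np

def dn_to_radiance(dn, gain, offset):
    """At-sensor spectral radiance from 8-bit digital numbers (per-band gain/offset)."""
    return gain * dn.astype(np.float64) + offset

def radiance_to_toa_reflectance(radiance, esun, sun_elev_deg, earth_sun_dist_au):
    """Top-of-atmosphere reflectance (dimensionless) for one band."""
    sun_zenith = np.deg2rad(90.0 - sun_elev_deg)
    return np.pi * radiance * earth_sun_dist_au**2 / (esun * np.cos(sun_zenith))
```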

  5. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Baghlan mineral district in Afghanistan: Chapter P in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Baghlan mineral district, which has industrial clay and gypsum deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2006, 2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Baghlan) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the Baghlan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
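
Because the tiles are delivered as embedded GeoTIFF images, any GIS or image-processing package that reads GeoTIFF can open them directly. The snippet below shows one way to do this in Python with rasterio; the file name is a placeholder.

```python
import rasterio  # any GeoTIFF-capable GIS/image package works equally well

# File name is a placeholder for one of the delivered quadrant tiles.
with rasterio.open("baghlan_quadrant_1_natural_color.tif") as src:
    data = src.read()          # (bands, rows, cols) array of the color composite
    transform = src.transform  # affine transform: pixel -> UTM zone 42N coordinates
    crs = src.crs              # coordinate reference system (WGS 84 / UTM)
    print(src.count, src.width, src.height, crs)
```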

  6. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Uruzgan mineral district in Afghanistan: Chapter V in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Uruzgan mineral district, which has tin and tungsten deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008, 2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Uruzgan) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Uruzgan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.

  7. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the South Helmand mineral district in Afghanistan: Chapter O in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the South Helmand mineral district, which has travertine deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (41 for South Helmand) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the South Helmand area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.

  8. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Bakhud mineral district in Afghanistan: Chapter U in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Bakhud mineral district, which has industrial fluorite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (41 for Bakhud) and the WGS84 datum. The final image mosaics were subdivided into nine overlapping tiles or quadrants because of the large size of the target area. The nine image tiles (or quadrants) for the Bakhud area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.

  9. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Takhar mineral district in Afghanistan: Chapter Q in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Takhar mineral district, which has industrial evaporite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Takhar) and the WGS84 datum. The final image mosaics for the Takhar area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
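
    The overlap-based radiometric adjustment described above reduces to fitting a gain and offset between overlapping reflectance values. The sketch below is a minimal illustration of that step, assuming two already-coregistered band images held as NumPy arrays; the function name and the simple linear gain/offset model are illustrative and are not the USGS implementation.

```python
import numpy as np

def match_to_standard(image, standard, overlap_mask):
    """Fit a linear gain/offset mapping `image` reflectance to the
    `standard` image over their overlap, then apply it everywhere."""
    x = image[overlap_mask].ravel()
    y = standard[overlap_mask].ravel()
    gain, offset = np.polyfit(x, y, deg=1)   # least-squares fit y ~ gain*x + offset
    return gain * image + offset

# Example: adjust a neighboring scene to the standard (highest-sun) scene.
rng = np.random.default_rng(0)
standard = rng.uniform(0.05, 0.4, (100, 100))
neighbor = 0.9 * standard + 0.02 + rng.normal(0.0, 0.005, standard.shape)
overlap = np.zeros(standard.shape, dtype=bool)
overlap[:, 60:] = True                        # pretend the right 40 columns overlap
adjusted = match_to_standard(neighbor, standard, overlap)
```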

  10. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Kunduz mineral district in Afghanistan: Chapter S in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kunduz mineral district, which has celestite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Kunduz) and the WGS84 datum. The final image mosaics were subdivided into five overlapping tiles or quadrants because of the large size of the target area. The five image tiles (or quadrants) for the Kunduz area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
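
    The local-area histogram stretch itself is defined in Davis (2007) and is not reproduced in this DS; as a rough stand-in, the sketch below stretches each pixel against the minimum and maximum of a circular neighborhood, using the 500-m radius quoted above (200 pixels at the 2.5-m mosaic resolution). Function and variable names are illustrative.

```python
import numpy as np
from scipy import ndimage

def local_area_stretch(band, radius_px):
    """Stretch each pixel to 0-255 using the min/max of all pixels
    within `radius_px` of it (a simplified local-area stretch)."""
    y, x = np.ogrid[-radius_px:radius_px + 1, -radius_px:radius_px + 1]
    footprint = x**2 + y**2 <= radius_px**2          # circular window
    lo = ndimage.minimum_filter(band, footprint=footprint)
    hi = ndimage.maximum_filter(band, footprint=footprint)
    span = np.maximum(hi - lo, 1e-6)                 # avoid divide-by-zero
    return np.clip(255.0 * (band - lo) / span, 0, 255).astype(np.uint8)

# A 500-m radius corresponds to 200 pixels on the 2.5-m mosaic:
# stretched = local_area_stretch(band_reflectance, radius_px=200)
```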

  11. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Dudkash mineral district in Afghanistan: Chapter R in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Dudkash mineral district, which has industrial mineral deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Dudkash) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Dudkash area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
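
    The SPARKLE resolution-enhancement logic is documented in Davis (2006) and is not described here; purely to illustrate where pan-sharpening fits in the workflow, the sketch below applies a generic Brovey-style ratio between the panchromatic image and an intensity proxy built from the multispectral bands. It is an assumption-laden stand-in, not the SPARKLE algorithm.

```python
import numpy as np

def ratio_pan_sharpen(ms_bands, pan):
    """Generic Brovey-style sharpening: scale each multispectral band
    (already resampled to the panchromatic grid) by the ratio of the
    panchromatic image to the mean of the multispectral bands.

    ms_bands : (bands, rows, cols) float array of reflectance
    pan      : (rows, cols) float array on the same grid
    """
    intensity = ms_bands.mean(axis=0)
    ratio = pan / np.maximum(intensity, 1e-6)
    return ms_bands * ratio[np.newaxis, :, :]
```

    Because PRISM covers roughly 520-770 nm, a closer intensity proxy would weight the green and red bands more heavily than the blue and near-infrared bands.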

  12. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Parwan mineral district in Afghanistan: Chapter CC in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Parwan mineral district, which has gold and copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2006, 2007), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). 
Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Parwan) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the Parwan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
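
    For readers who want to reproduce the delivery format, the sketch below writes a three-band 8-bit composite as an embedded geotiff plus an optional tfw world file, assuming the rasterio package is available. The corner coordinates, pixel size, and file names are placeholders; EPSG:32642 is the code for UTM zone 42 north on the WGS84 datum, matching the projection described above.

```python
import numpy as np
import rasterio
from rasterio.transform import from_origin

rows, cols = 2000, 2000
rgb = np.zeros((3, rows, cols), dtype=np.uint8)          # natural-color composite
transform = from_origin(500000.0, 3900000.0, 2.5, 2.5)   # x_ul, y_ul, dx, dy (placeholders)

with rasterio.open("parwan_tile_natural_color.tif", "w", driver="GTiff",
                   height=rows, width=cols, count=3, dtype="uint8",
                   crs="EPSG:32642", transform=transform) as dst:
    dst.write(rgb)

# Optional tfw: pixel sizes, rotation terms, and the center of the upper-left pixel.
with open("parwan_tile_natural_color.tfw", "w") as f:
    f.write("\n".join(["2.5", "0.0", "0.0", "-2.5",
                       str(500000.0 + 1.25), str(3900000.0 - 1.25)]))
```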

  13. Comparison of the Cloud Morphology Spatial Structure Between Jupiter and Saturn Using JunoCam and Cassini ISS

    NASA Astrophysics Data System (ADS)

    Garland, Justin; Sayanagi, Kunio M.; Blalock, John J.; Gunnarson, Jacob; McCabe, Ryan M.; Gallego, Angelina; Hansen, Candice; Orton, Glenn S.

    2017-10-01

    We present an analysis of the spatial scales contained in the cloud morphology of Jupiter’s southern high latitudes using images captured by JunoCam in 2016 and 2017, and compare them to those on Saturn using images captured by the Imaging Science Subsystem (ISS) on board the Cassini orbiter. For Jupiter, the characteristic spatial scale of cloud morphology as a function of latitude is calculated from images taken in three visual (600-800, 500-600, 420-520 nm) bands and a near-infrared (880-900 nm) band. In particular, we analyze the transition from the banded structure characteristic of Jupiter’s mid-latitudes to the chaotic structure of the polar region. We apply a similar analysis to Saturn using images captured by Cassini ISS. In contrast to Jupiter, Saturn maintains its zonally organized cloud morphology from low latitudes up to the poles, culminating in the cyclonic polar vortices centered at each of the poles. By quantifying the differences in the spatial scales contained in the cloud morphology, our analysis will shed light on the processes that control the banded structures on Jupiter and Saturn. Our work has been supported by the following grants: NASA PATM NNX14AK07G, NASA MUREP NNX15AQ03A, and NSF AAG 1212216.
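
    The abstract does not give the exact metric used for the characteristic spatial scale; one plausible, simple choice is a power-spectrum-weighted mean wavelength of the brightness variations along each latitude row, sketched below with NumPy. The pixel scale and the metric itself are illustrative assumptions.

```python
import numpy as np

def characteristic_scale(row, pixel_km):
    """Power-spectrum-weighted mean wavelength of brightness variations
    along one image row (an illustrative 'characteristic scale')."""
    signal = row - row.mean()
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(row.size, d=pixel_km)   # cycles per km
    nonzero = freqs > 0
    mean_freq = np.average(freqs[nonzero], weights=power[nonzero])
    return 1.0 / mean_freq                          # km per cycle

# scales = [characteristic_scale(img[i], pixel_km=25.0) for i in range(img.shape[0])]
```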

  14. Model for mapping settlements

    DOEpatents

    Vatsavai, Ranga Raju; Graesser, Jordan B.; Bhaduri, Budhendra L.

    2016-07-05

    A programmable media includes a graphical processing unit in communication with a memory element. The graphical processing unit is configured to detect one or more settlement regions from a high resolution remote sensed image based on the execution of programming code. The graphical processing unit identifies one or more settlements through the execution of the programming code that executes a multi-instance learning algorithm that models portions of the high resolution remote sensed image. The identification is based on spectral bands transmitted by a satellite and on selected designations of the image patches.

  15. Robust image watermarking using DWT and SVD for copyright protection

    NASA Astrophysics Data System (ADS)

    Harjito, Bambang; Suryani, Esti

    2017-02-01

    The objective of this paper is to propose a robust watermarking scheme combining the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD). The RGB image serves as the cover medium, and the watermark image is converted to gray scale. Both are then transformed using DWT and split into several sub-bands, namely LL2, LH2, and HL2. The watermark image is embedded into the cover medium in the LL2 sub-band. This scheme aims to achieve a higher robustness level than the previous method, which performs SVD matrix factorization of the image for copyright protection. The experimental results show that the proposed method is robust against several image-processing attacks such as Gaussian, Poisson, and salt-and-pepper noise, with average Normalized Correlation (NC) values of 0.574863, 0.889784, and 0.889782, respectively. The watermark image can be detected and extracted.
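
    A minimal sketch of the kind of DWT+SVD embedding the abstract describes is shown below, assuming the PyWavelets (pywt) package; the Haar wavelet, the embedding strength alpha, and the use of np.resize to match the watermark to the singular-value vector are illustrative choices, not the authors' exact scheme, and the extraction step is omitted.

```python
import numpy as np
import pywt

def embed_watermark(cover_gray, watermark_gray, alpha=0.05):
    """Embed a gray-scale watermark into the LL2 sub-band of a cover image
    by perturbing the singular values (illustrative DWT+SVD scheme)."""
    ll1, d1 = pywt.dwt2(cover_gray, "haar")          # level-1 decomposition
    ll2, d2 = pywt.dwt2(ll1, "haar")                 # level-2 decomposition
    u, s, vt = np.linalg.svd(ll2, full_matrices=False)
    wm = np.resize(watermark_gray.astype(float), s.shape)
    ll2_marked = (u * (s + alpha * wm)) @ vt         # perturb singular values
    ll1_marked = pywt.idwt2((ll2_marked, d2), "haar")
    ll1_marked = ll1_marked[:ll1.shape[0], :ll1.shape[1]]
    return pywt.idwt2((ll1_marked, d1), "haar")

def normalized_correlation(w_ref, w_ext):
    """Normalized Correlation (NC) between reference and extracted marks."""
    a, b = w_ref.ravel().astype(float), w_ext.ravel().astype(float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

    The normalized_correlation helper corresponds to the NC metric quoted in the abstract.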

  16. A spectral water index based on visual bands

    NASA Astrophysics Data System (ADS)

    Basaeed, Essa; Bhaskar, Harish; Al-Mualla, Mohammed

    2013-10-01

    Land-water segmentation is an important preprocessing step in a number of remote sensing applications such as target detection, environmental monitoring, and map updating. A Normalized Optical Water Index (NOWI) is proposed to accurately discriminate between land and water regions in multi-spectral satellite imagery from DubaiSat-1. NOWI exploits the spectral characteristics of water content (using visible bands) and uses a non-linear normalization procedure that places strong emphasis on small changes in lower brightness values whilst guaranteeing that the segmentation process remains image-independent. The NOWI representation is validated through systematic experiments, evaluated using robust metrics, and compared against various supervised classification algorithms. The analysis indicates that NOWI has the advantages that it: a) is a pixel-based method that requires no global knowledge of the scene under investigation, b) can be easily implemented in parallel processing, c) is image-independent and requires no training, d) works in different environmental conditions, e) provides high accuracy and efficiency, and f) works directly on the input image without any form of pre-processing.
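
    The abstract does not state the NOWI formula, so the sketch below is only an illustrative pixel-wise visible-band index with a logarithmic normalization in the spirit of the non-linear step described above; it is not the published NOWI.

```python
import numpy as np

def visible_band_water_index(blue, green, red):
    """Illustrative water index from visible bands only; the log1p
    normalization emphasizes small differences at low brightness.
    This is NOT the published NOWI formula."""
    b, g, r = (np.log1p(band.astype(float)) for band in (blue, green, red))
    return (b + g - 2.0 * r) / (b + g + 2.0 * r + 1e-6)

# water_mask = visible_band_water_index(B, G, R) > threshold   # threshold chosen per scene
```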

  17. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Nuristan mineral district in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Nuristan mineral district, which has gem, lithium, and cesium deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. All available panchromatic images for this area had significant cloud and snow cover that precluded their use for resolution enhancement of the multispectral image data. Each of the four-band images within the 10-m image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Nuristan) and the WGS84 datum. The final image mosaics for the Nuristan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
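
    The USGS automated control-point algorithm mentioned above is not described in this DS; as a generic stand-in for sub-pixel coregistration, the sketch below estimates the shift between two overlapping images with phase correlation from scikit-image and resamples the moving image. The function name and the pure-translation assumption are illustrative.

```python
from scipy import ndimage
from skimage.registration import phase_cross_correlation

def coregister(reference, moving, upsample=10):
    """Estimate the (row, col) shift of `moving` relative to `reference`
    to sub-pixel precision, then resample `moving` onto the reference grid."""
    shift, error, _ = phase_cross_correlation(reference, moving,
                                              upsample_factor=upsample)
    aligned = ndimage.shift(moving, shift)
    return aligned, shift
```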

  18. Kinetic energy dependence of carrier diffusion in a GaAs epilayer studied by wavelength selective PL imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S.; Su, L. Q.; Kon, J.

    Photoluminescence (PL) imaging has been shown to be an efficient technique for investigating carrier diffusion in semiconductors. In the past, the measurement was typically carried out at one wavelength (e.g., at the band gap) or simply over the whole emission band. At room temperature in a semiconductor like GaAs, the band-to-band PL emission may occur over a spectral range of more than 200 meV, vastly exceeding the average thermal energy of about 26 meV. To investigate the potential dependence of carrier diffusion on carrier kinetic energy, we performed wavelength-selective PL imaging on a GaAs double heterostructure in a spectral range from about 70 meV above to 50 meV below the bandgap, extracting the carrier diffusion lengths at different PL wavelengths by fitting the imaging data to a theoretical model. The results clearly show that locally generated carriers of different kinetic energies mostly diffuse together, maintaining the same thermal distribution throughout the diffusion process. Potential effects related to carrier density, self-absorption, lateral wave-guiding, and local heating are also discussed.
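
    The theoretical model used in the paper is not reproduced here; a common simplification fits the tail of the radially averaged PL intensity profile to a single exponential exp(-r/L_D) to estimate a diffusion length at each detection wavelength. The sketch below, with synthetic data, shows only that simplified fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential_tail(r, amplitude, l_diff):
    """Simplified diffusion tail: I(r) ~ A * exp(-r / L_D)."""
    return amplitude * np.exp(-r / l_diff)

def fit_diffusion_length(radius_um, intensity):
    """Fit the PL intensity profile outside the excitation spot and
    return the diffusion length L_D in the same units as `radius_um`."""
    popt, _ = curve_fit(exponential_tail, radius_um, intensity,
                        p0=(intensity.max(), 1.0))
    return popt[1]

# Synthetic example: true L_D = 2.5 um.
r = np.linspace(1.0, 15.0, 60)
profile = 100.0 * np.exp(-r / 2.5) + np.random.normal(0.0, 0.5, r.size)
print(fit_diffusion_length(r, profile))   # approximately 2.5
```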

  19. Utilization of a balanced steady state free precession signal model for improved fat/water decomposition.

    PubMed

    Henze Bancroft, Leah C; Strigel, Roberta M; Hernando, Diego; Johnson, Kevin M; Kelcz, Frederick; Kijowski, Richard; Block, Walter F

    2016-03-01

    Chemical shift based fat/water decomposition methods such as IDEAL are frequently used in challenging imaging environments with large B0 inhomogeneity. However, they do not account for the signal modulations introduced by a balanced steady state free precession (bSSFP) acquisition. Here we demonstrate improved performance when the bSSFP frequency response is properly incorporated into the multipeak spectral fat model used in the decomposition process. Balanced SSFP allows for rapid imaging but also introduces a characteristic frequency response featuring periodic nulls and pass bands. Fat spectral components in adjacent pass bands will experience bulk phase offsets and magnitude modulations that change the expected constructive and destructive interference between the fat spectral components. A bSSFP signal model was incorporated into the fat/water decomposition process and used to generate images of a fat phantom, and bilateral breast and knee images in four normal volunteers at 1.5 Tesla. Incorporation of the bSSFP signal model into the decomposition process improved the performance of the fat/water decomposition. Incorporation of this model allows rapid bSSFP imaging sequences to use robust fat/water decomposition methods such as IDEAL. While only one set of imaging parameters was presented, the method is compatible with any field strength or repetition time. © 2015 Wiley Periodicals, Inc.
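
    The core of such a decomposition is a multipeak chemical-shift signal model evaluated at several echo times. The sketch below shows the standard (non-bSSFP) form with illustrative 1.5 T fat-peak frequencies and amplitudes; the paper's contribution corresponds to multiplying each peak by the complex bSSFP frequency response at its off-resonance frequency, represented here only by an optional callback.

```python
import numpy as np

# Illustrative six-peak fat spectrum (off-resonance in Hz at 1.5 T and
# relative amplitudes); the exact values used in the paper may differ.
FAT_FREQS_HZ = np.array([-242.0, -217.0, -166.0, -123.0, -23.0, 34.0])
FAT_AMPS = np.array([0.087, 0.693, 0.128, 0.004, 0.039, 0.048])

def signal_model(te_s, water, fat, field_map_hz, bssfp_response=None):
    """Complex signal at echo times `te_s` for given water/fat amplitudes.
    `bssfp_response(freq_hz)` would return the complex bSSFP profile value
    at an off-resonance frequency; identity (no modulation) if omitted."""
    resp = bssfp_response if bssfp_response else (lambda f: 1.0)
    fat_phasor = sum(a * resp(f) * np.exp(2j * np.pi * f * te_s)
                     for a, f in zip(FAT_AMPS, FAT_FREQS_HZ))
    return (water + fat * fat_phasor) * np.exp(2j * np.pi * field_map_hz * te_s)

# s = signal_model(np.array([0.002, 0.003, 0.004]), water=0.7, fat=0.3, field_map_hz=20.0)
```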

  20. Image Fusion Algorithms Using Human Visual System in Transform Domain

    NASA Astrophysics Data System (ADS)

    Vadhi, Radhika; Swamy Kilari, Veera; Samayamantula, Srinivas Kumar

    2017-08-01

    The aim of digital image fusion is to combine the important visual parts from various sources to improve the visual quality of the image. The fused image has higher visual quality than any of the source images. In this paper, Human Visual System (HVS) weights are used in the transform domain to select appropriate information from the various source images and then to obtain a fused image. The process involves two main steps: first, the DWT is applied to the registered source images; then, qualitative sub-bands are identified using HVS weights. Qualitative sub-bands are thus selected from the different sources to form a high-quality HVS-based fused image. The quality of the HVS-based fused image is evaluated with general fusion metrics. The results show its superiority among state-of-the-art multi-resolution transforms (MRT) such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Contourlet Transform (CT), and Non-Subsampled Contourlet Transform (NSCT) using the maximum-selection fusion rule.
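
    The HVS weighting itself is not specified in the abstract, so the sketch below shows only the baseline maximum-selection fusion rule in the DWT domain (the rule named above), using PyWavelets; the wavelet choice and decomposition level are illustrative.

```python
import numpy as np
import pywt

def dwt_max_fusion(img_a, img_b, wavelet="db2", level=2):
    """Fuse two registered gray-scale images: average the approximation
    coefficients, keep the larger-magnitude detail coefficient
    (the maximum-selection rule)."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```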

  1. Multispectral Photography

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Model II Multispectral Camera is an advanced aerial camera that provides optimum enhancement of a scene by recording spectral signatures of ground objects only in narrow, preselected bands of the electromagnetic spectrum. Its photos have applications in such areas as agriculture, forestry, water pollution investigations, soil analysis, geologic exploration, water depth studies, and camouflage detection. The target scene is simultaneously photographed in four separate spectral bands. Using a multispectral viewer, such as its Model 75 optical image analysis unit, Spectral Data creates a color image from the black-and-white positives taken by the camera. With this unit, all four bands are superimposed in accurate registration and illuminated with combinations of blue, green, red, and white light. The best color combination for displaying the target object is selected and printed. Spectral Data Corporation produces several types of remote sensing equipment and also provides aerial survey, image processing and analysis, and a number of other remote sensing services.

  2. Monitoring the long term stability of the IRS-P6 AWiFS sensor using the Sonoran and RVPN sites

    NASA Astrophysics Data System (ADS)

    Chander, Gyanesh; Sampath, Aparajithan; Angal, Amit; Choi, Taeyoung; Xiong, Xiaoxiong

    2010-10-01

    This paper focuses on the radiometric and geometric assessment of the Indian Remote Sensing (IRS-P6) Advanced Wide Field Sensor (AWiFS) using the Sonoran desert and Railroad Valley Playa, Nevada (RVPN) ground sites. Image-to-Image (I2I) accuracy and relative band-to-band (B2B) accuracy were measured. I2I accuracy of the AWiFS imagery was assessed by measuring the imagery against the Landsat Global Land Survey (GLS) 2000. The AWiFS images were typically registered to within one pixel of the GLS 2000 mosaic images. The B2B process used the same concepts as the I2I, except that instead of a reference image and a search image, the individual bands of a multispectral image are tested against each other. The B2B results showed that all the AWiFS multispectral bands are registered to sub-pixel accuracy. Using the limited number of scenes available over these ground sites, the reflective bands of the AWiFS sensor indicate a long-term drift in the top-of-atmosphere (TOA) reflectance. Because of the limited availability of AWiFS scenes over these ground sites, a comprehensive evaluation of the radiometric stability using these sites is not possible. In order to overcome this limitation, a cross-comparison between AWiFS and the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) was performed using image statistics based on large common areas observed by the two sensors within 30 minutes. Regression curves and coefficients of determination for the TOA trends from these sensors were generated to quantify the uncertainty in these relationships and to provide an assessment of the calibration differences between these sensors.
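
    The cross-comparison step amounts to regressing near-simultaneous TOA reflectances of one sensor against the other and reporting the coefficient of determination. A minimal sketch, assuming paired per-area mean reflectances are already extracted into NumPy arrays:

```python
import numpy as np

def cross_calibration_trend(toa_awifs, toa_etm):
    """Linear regression between mean TOA reflectances of common areas
    observed by the two sensors, with the coefficient of determination."""
    slope, intercept = np.polyfit(toa_etm, toa_awifs, deg=1)
    predicted = slope * toa_etm + intercept
    ss_res = np.sum((toa_awifs - predicted) ** 2)
    ss_tot = np.sum((toa_awifs - toa_awifs.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot
    return slope, intercept, r_squared
```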

  3. Automated Recognition of Vegetation and Water Bodies on the Territory of Megacities in Satellite Images of Visible and IR Bands

    NASA Astrophysics Data System (ADS)

    Mozgovoy, Dmitry K.; Hnatushenko, Volodymyr V.; Vasyliev, Volodymyr V.

    2018-04-01

    Vegetation and water bodies are a fundamental element of urban ecosystems, and water mapping is critical for urban and landscape planning and management. A methodology of automated recognition of vegetation and water bodies on the territory of megacities in satellite images of sub-meter spatial resolution of the visible and IR bands is proposed. By processing multispectral images from the satellite SuperView-1A, vector layers of recognized plant and water objects were obtained. Analysis of the results of image processing showed a sufficiently high accuracy of the delineation of the boundaries of recognized objects and a good separation of classes. The developed methodology provides a significant increase of the efficiency and reliability of updating maps of large cities while reducing financial costs. Due to the high degree of automation, the proposed methodology can be implemented in the form of a geo-information web service functioning in the interests of a wide range of public services and commercial institutions.

  4. Landsat-8 Operational Land Imager (OLI) radiometric performance on-orbit

    USGS Publications Warehouse

    Morfitt, Ron; Barsi, Julia A.; Levy, Raviv; Markham, Brian L.; Micijevic, Esad; Ong, Lawrence; Scaramuzza, Pat; Vanderwerff, Kelly

    2015-01-01

    Expectations of the Operational Land Imager (OLI) radiometric performance onboard Landsat-8 have been met or exceeded. The calibration activities that occurred prior to launch provided calibration parameters that enabled ground processing to produce imagery that met most requirements when data were transmitted to the ground. Since launch, calibration updates have improved the image quality even more, so that all requirements are met. These updates range from detector gain coefficients to reduce striping and banding to alignment parameters to improve the geometric accuracy. This paper concentrates on the on-orbit radiometric performance of the OLI, excepting the radiometric calibration performance. Topics discussed in this paper include: signal-to-noise ratios that are an order of magnitude higher than previous Landsat missions; radiometric uniformity that shows little residual banding and striping, and continues to improve; a dynamic range that limits saturation to extremely high radiance levels; extremely stable detectors; slight nonlinearity that is corrected in ground processing; detectors that are stable and 100% operable; and few image artifacts.

  5. Natural-color and color-infrared image mosaics of the Colorado River corridor in Arizona derived from the May 2009 airborne image collection

    USGS Publications Warehouse

    Davis, Philip A.

    2013-01-01

    The Grand Canyon Monitoring and Research Center (GCMRC) of the U.S. Geological Survey (USGS) periodically collects airborne image data for the Colorado River corridor within Arizona (fig. 1) to allow scientists to study the impacts of Glen Canyon Dam water release on the corridor’s natural and cultural resources. These data are collected from just above Glen Canyon Dam (in Lake Powell) down to the entrance of Lake Mead, for a total distance of 450 kilometers (km) and within a 500-meter (m) swath centered on the river’s mainstem and its seven main tributaries (fig. 1). The most recent airborne data collection in 2009 acquired image data in four wavelength bands (blue, green, red, and near infrared) at a spatial resolution of 20 centimeters (cm). The image collection used the latest model of the Leica ADS40 airborne digital sensor (the SH52), which uses a single optic for all four bands and collects and stores band radiance in 12-bits. Davis (2012) reported on the performance of the SH52 sensor and on the processing steps required to produce the nearly flawless four-band image mosaic (sectioned into map tiles) for the river corridor. The final image mosaic has a total of only 3 km of surface defects in addition to some areas of cloud shadow because of persistent inclement weather during data collection. The 2009 four-band image mosaic is perhaps the best image dataset that exists for the entire Arizona part of the Colorado River. Some analyses of these image mosaics do not require the full 12-bit dynamic range or all four bands of the calibrated image database, in which atmospheric scattering (or haze) had not been removed from the four bands. To provide scientists and the general public with image products that are more useful for visual interpretation, the 12-bit image data were converted to 8-bit natural-color and color-infrared images, which also removed atmospheric scattering within each wavelength-band image. The conversion required an evaluation of the histograms of each band’s digital-number population within each map tile throughout the corridor and the determination of the digital numbers corresponding to the lower and upper one percent of the picture-element population within each map tile. Visual examination of the image tiles that were given a 1-percent stretch (whereby the lower 1- percent 12-bit digital number is assigned an 8-bit value of zero and the upper 1-percent 12-bit digital number is assigned an 8-bit value of 255) indicated that this stretch sufficiently removed atmospheric scattering, which provided improved image clarity and true natural colors for all surface materials. The lower and upper 1-percent, 12-bit digital numbers for each wavelength-band image in the image tiles exhibit erratic variations along the river corridor; the variations exhibited similar trends in both the lower and upper 1-percent digital numbers for all four wavelength-band images (figs. 2–5). The erratic variations are attributed to (1) daily variations in atmospheric water-vapor content due to monsoonal storms, (2) variations in channel water color due to variable sediment input from tributaries, and (3) variations in the amount of topographic shadows within each image tile, in which reflectance is dominated by atmospheric scattering. 
To make the surface colors of the stretched, 8-bit images consistent among adjacent image tiles, it was necessary to average both the lower and upper 1-percent digital values for each wavelength-band image over 20 river miles to subdue the erratic variations. The average lower and upper 1-percent digital numbers for each image tile (figs. 2–5) were used to convert the 12-bit image values to 8-bit values and the resulting 8-bit four-band images were stored as natural-color (red, green, and blue wavelength bands) and color-infrared (near-infrared, red, and green wavelength bands) images in embedded geotiff format, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. All image data are projected in the State Plane (SP) map projection using the central Arizona zone (202) and the North American Datum of 1983 (NAD83). The map-tile scheme used to segment the corridor image mosaic followed the standard USGS quarter-quadrangle (QQ) map borders, but the high resolution (20 cm) of the images required further quarter segmentation (QQQ) of the standard QQ tiles, where the image mosaic covered a large fraction of a QQ map tile (segmentation shown in figure 6, where QQ_1 to QQ_4 show the numbering convention used to designate a quarter of a QQ tile). To minimize the size of each image tile, each image or map tile was subset to only include that part of the tile that had image data. In addition, some QQQ image tiles within a QQ tile were combined when adjacent QQQ map tiles were small. Thus, some image tiles consist of combinations of QQQ map tiles, some consist of an entire QQ map tile, and some consist of two adjoining QQ map tiles. The final image tiles number 143, which is a large number of files to list on the Internet for both the natural-color and color-infrared images. Thus, the image tiles were placed in seven file folders based on the one-half-degree geographic boundaries within the study area (fig. 7). The map tiles in each file folder were compressed to minimize folder size for more efficient downloading. The file folders are sequentially referred to as zone 1 through zone 7, proceeding down river (fig. 7). The QQ designations of the image tiles contained within each folder or zone are shown on the index map for each respective zone (figs. 8–14).
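
    The 12-bit to 8-bit conversion described above is a 1-percent linear stretch. A minimal sketch follows; by default it derives the cut points from the tile itself, while the optional arguments allow the 20-river-mile averaged values described above to be passed in instead. Names are illustrative.

```python
import numpy as np

def one_percent_stretch(band12, low_dn=None, high_dn=None):
    """Convert a 12-bit band to 8 bits with a 1-percent linear stretch:
    the 1st-percentile DN maps to 0 and the 99th-percentile DN maps to 255.
    Pass `low_dn`/`high_dn` explicitly to use values averaged over
    neighboring tiles instead of per-tile percentiles."""
    data = band12.astype(float)
    if low_dn is None:
        low_dn = np.percentile(data, 1)
    if high_dn is None:
        high_dn = np.percentile(data, 99)
    scaled = (data - low_dn) / max(high_dn - low_dn, 1e-6) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)
```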

  6. Land use/land cover mapping using multi-scale texture processing of high resolution data

    NASA Astrophysics Data System (ADS)

    Wong, S. N.; Sarker, M. L. R.

    2014-02-01

    Land use/land cover (LULC) maps are useful for many purposes, and for a long time remote sensing techniques have been used for LULC mapping using different types of data and image processing techniques. In this research, high resolution satellite data from IKONOS were used to perform land use/land cover mapping in Johor Bahru city and adjacent areas (Malaysia). Spatial image processing was carried out using six texture algorithms (mean, variance, contrast, homogeneity, entropy, and GLDV angular second moment) with five different window sizes (from 3×3 to 11×11). Three different classifiers, i.e., Maximum Likelihood Classifier (MLC), Artificial Neural Network (ANN), and Support Vector Machine (SVM), were used to classify the texture parameters of different spectral bands individually and all bands together using the same training and validation samples. Results indicated that texture parameters of all bands together generally showed a better performance (overall accuracy = 90.10%) for LULC mapping, whereas a single spectral band could only achieve an overall accuracy of 72.67%. This research also found an improvement of the overall accuracy (OA) using a single-texture multi-scale approach (OA = 89.10%) and a single-scale multi-texture approach (OA = 90.10%) compared with all original bands (OA = 84.02%) because of the complementary information from different bands and different texture algorithms. On the other hand, all three classifiers showed high accuracy when using different texture approaches, but SVM generally showed higher accuracy (90.10%) compared to MLC (89.10%) and ANN (89.67%), especially for complex classes such as urban and road.
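
    As an illustration of the texture step, the sketch below computes two GLCM texture measures (contrast and homogeneity) on sliding windows of a single band using scikit-image (which provides graycomatrix/graycoprops in recent versions); the window size, quantization to 32 gray levels, non-overlapping stride, and the choice of only two of the six measures are simplifications.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.util import view_as_windows

def window_texture(gray_u8, window=7, levels=32):
    """Per-window GLCM contrast and homogeneity for one 8-bit band,
    computed on non-overlapping windows to keep the example fast."""
    q = (gray_u8 // (256 // levels)).astype(np.uint8)      # requantize to `levels` gray levels
    patches = view_as_windows(q, (window, window), step=window)
    contrast = np.zeros(patches.shape[:2])
    homogeneity = np.zeros(patches.shape[:2])
    for i in range(patches.shape[0]):
        for j in range(patches.shape[1]):
            glcm = graycomatrix(patches[i, j], distances=[1], angles=[0],
                                levels=levels, symmetric=True, normed=True)
            contrast[i, j] = graycoprops(glcm, "contrast")[0, 0]
            homogeneity[i, j] = graycoprops(glcm, "homogeneity")[0, 0]
    return contrast, homogeneity
```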

  7. Can we match ultraviolet face images against their visible counterparts?

    NASA Astrophysics Data System (ADS)

    Narang, Neeru; Bourlai, Thirimachos; Hornak, Lawrence A.

    2015-05-01

    In law enforcement and security applications, the acquisition of face images is critical in producing key trace evidence for the successful identification of potential threats. However, face recognition (FR) for face images captured using different camera sensors, under variable illumination conditions, and with varying expressions is very challenging. In this paper, we investigate the advantages and limitations of the heterogeneous problem of matching ultraviolet (UV, from 100 nm to 400 nm in wavelength) face images against their visible (VIS) counterparts, when all face images are captured under controlled conditions. The contributions of our work are three-fold: (i) we used a camera sensor designed with the capability to acquire UV images at short ranges and generated a dual-band (VIS and UV) database that is composed of multiple, full-frontal face images of 50 subjects. Two sessions were collected spanning a period of 2 months. (ii) For each dataset, we determined which set of face-image pre-processing algorithms is more suitable for face matching, and, finally, (iii) we determined which FR algorithm better matches cross-band face images, resulting in high rank-1 identification rates. Experimental results show that our cross-spectral matching (the heterogeneous problem, where gallery and probe sets consist of face images acquired in different spectral bands) algorithms achieve sufficient identification performance. However, we also conclude that the problem under study is very challenging, and it requires further investigation to address real-world law enforcement or military applications. To the best of our knowledge, this is the first time in the open literature that the problem of cross-spectral matching of UV against VIS band face images has been investigated.

  8. Feature-Motivated Simplified Adaptive PCNN-Based Medical Image Fusion Algorithm in NSST Domain.

    PubMed

    Ganasala, Padma; Kumar, Vinod

    2016-02-01

    Multimodality medical image fusion plays a vital role in diagnosis, treatment planning, and follow-up studies of various diseases. It provides a composite image containing critical information of source images required for better localization and definition of different organs and lesions. In the state-of-the-art image fusion methods based on nonsubsampled shearlet transform (NSST) and pulse-coupled neural network (PCNN), authors have used normalized coefficient value to motivate the PCNN-processing both low-frequency (LF) and high-frequency (HF) sub-bands. This makes the fused image blurred and decreases its contrast. The main objective of this work is to design an image fusion method that gives the fused image with better contrast, more detail information, and suitable for clinical use. We propose a novel image fusion method utilizing feature-motivated adaptive PCNN in NSST domain for fusion of anatomical images. The basic PCNN model is simplified, and adaptive-linking strength is used. Different features are used to motivate the PCNN-processing LF and HF sub-bands. The proposed method is extended for fusion of functional image with an anatomical image in improved nonlinear intensity hue and saturation (INIHS) color model. Extensive fusion experiments have been performed on CT-MRI and SPECT-MRI datasets. Visual and quantitative analysis of experimental results proved that the proposed method provides satisfactory fusion outcome compared to other image fusion methods.
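
    The authors' simplified adaptive PCNN is not specified in the abstract; purely to illustrate the kind of model involved, the sketch below runs a generic simplified PCNN and returns a firing-time map of the sort commonly used to compare sub-band coefficients during fusion. All constants and the 3x3 linking kernel are illustrative.

```python
import numpy as np
from scipy import ndimage

def simplified_pcnn(stimulus, iterations=30, beta=0.2,
                    alpha_theta=0.2, v_theta=20.0):
    """Generic simplified PCNN: the feeding input is the stimulus itself,
    linking comes from a 3x3 neighborhood of previous firings, and the
    dynamic threshold decays exponentially and jumps after each firing.
    Returns the iteration at which each neuron first fires."""
    s = stimulus.astype(float) / max(stimulus.max(), 1e-6)
    kernel = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    y = np.zeros_like(s)
    theta = np.ones_like(s)
    fire_time = np.full(s.shape, iterations, dtype=float)
    for n in range(iterations):
        link = ndimage.convolve(y, kernel, mode="constant")
        u = s * (1.0 + beta * link)          # internal activity
        y = (u > theta).astype(float)        # pulse output
        theta = np.exp(-alpha_theta) * theta + v_theta * y
        fire_time[(y > 0) & (fire_time == iterations)] = n
    return fire_time
```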

  9. Disordered high-frequency oscillation in face processing in schizophrenia patients

    PubMed Central

    Liu, Miaomiao; Pei, Guangying; Peng, Yinuo; Wang, Changming; Yan, Tianyi; Wu, Jinglong

    2018-01-01

    Abstract Schizophrenia is a complex disorder characterized by marked social dysfunctions, but the neural mechanism underlying this deficit is unknown. To investigate whether face-specific perceptual processes are influenced in schizophrenia patients, both face detection and configural analysis were assessed in normal individuals and schizophrenia patients by recording electroencephalogram (EEG) data. Here, a face-processing model was built based on frequency oscillations, and the evoked power (theta, alpha, and beta bands) and the induced power (gamma band) were recorded while the subjects passively viewed face and nonface images presented in upright and inverted orientations. The healthy adults showed a significant face-specific effect in the alpha, beta, and gamma bands, and an inversion effect was observed in the gamma band in the occipital lobe and right temporal lobe. Importantly, the schizophrenia patients showed face-specific deficits in the low-frequency beta and gamma bands, and the face inversion effect in the gamma band was absent from the occipital lobe. All these results revealed disordered face-specific processing in patients arising from disrupted high-frequency EEG oscillations, providing additional evidence to enrich future studies investigating neural mechanisms and serving as a possible diagnostic marker. PMID:29419668
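
    For readers unfamiliar with the band-power quantities involved, the sketch below computes average power in the theta, alpha, beta, and gamma ranges for one EEG channel with Welch's method from SciPy; the band edges are common conventions, not necessarily the limits used in the study.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 80)}

def band_powers(eeg_trial, fs):
    """Average power of one EEG channel in canonical frequency bands."""
    nperseg = int(min(len(eeg_trial), 2 * fs))          # ~2-second segments
    freqs, psd = welch(eeg_trial, fs=fs, nperseg=nperseg)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}
```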

  10. A Band Selection Method for High Precision Registration of Hyperspectral Image

    NASA Astrophysics Data System (ADS)

    Yang, H.; Li, X.

    2018-04-01

    During the registration of hyperspectral images and high spatial resolution images, the large number of bands in a hyperspectral image makes it difficult to select bands with good registration performance, and poorly matching bands can reduce matching speed and accuracy. To solve this problem, an algorithm based on Cramér–Rao lower bound (CRLB) theory is proposed in this paper to select well-matching bands. The algorithm applies CRLB theory to the study of registration accuracy and selects well-matching bands using CRLB parameters. Experiments show that the proposed algorithm can choose well-matching bands and provide better data for the registration of hyperspectral images and high spatial resolution images.

  11. Evaluation and testing of image quality of the Space Solar Extreme Ultraviolet Telescope

    NASA Astrophysics Data System (ADS)

    Peng, Jilong; Yi, Zhong; Zhou, Shuhong; Yu, Qian; Hou, Yinlong; Wang, Shanshan

    2018-01-01

    For the space solar extreme ultraviolet telescope, the star-point test cannot be performed in the x-ray band (19.5 nm band) because there is no sufficiently bright light source. In this paper, the point spread function of the optical system is calculated to evaluate the imaging performance of the telescope system. Combined with the actual processing surface errors, such as those from small grinding-head processing and magnetorheological processing, the optical design software Zemax and the data analysis software Matlab are used to directly calculate the system point spread function of the space solar extreme ultraviolet telescope. Matlab code is written to generate the required surface-error grid data. These surface-error data are loaded onto the specified surface of the telescope system by using the Dynamic Data Exchange (DDE) communication technique, which connects Zemax and Matlab. Because different processing methods lead to surface errors with different sizes, distributions, and spatial frequencies, their impact on imaging is also different. Therefore, the characteristics of the surface errors of different machining methods are studied. Combining their positions in the optical system with simulations of their influence on image quality is of great significance for reasonably choosing the processing technology. Additionally, we have also analyzed the relationship between surface error and image-quality evaluation. To ensure that the final processed mirror meets the image-quality requirements, one or several evaluation methods should be chosen for the surface error according to its different spatial-frequency characteristics.

  12. Spatial land-use inventory, modeling, and projection/Denver metropolitan area, with inputs from existing maps, airphotos, and LANDSAT imagery

    NASA Technical Reports Server (NTRS)

    Tom, C.; Miller, L. D.; Christenson, J. W.

    1978-01-01

    A landscape model was constructed with 34 land-use, physiographic, socioeconomic, and transportation maps. A simple Markov land-use trend model was constructed from observed rates of change and nonchange from photointerpreted 1963 and 1970 airphotos. Seven multivariate land-use projection models predicting 1970 spatial land-use changes achieved accuracies from 42 to 57 percent. A final modeling strategy was designed, which combines both Markov trend and multivariate spatial projection processes. Landsat-1 image preprocessing included geometric rectification/resampling, spectral-band, and band/insolation ratioing operations. A new, systematic grid-sampled point training-set approach proved to be useful when tested on the four original MSS bands, ten image bands and ratios, and all 48 image and map variables (less land use). Ten-variable accuracy was raised over 15 percentage points, from 38.4 to 53.9 percent, with the use of the 31 ancillary variables. A land-use classification map was produced with an optimal ten-channel subset of four image bands and six ancillary map variables. Point-by-point verification of 331,776 points against a 1972/1973 U.S. Geological Survey (USGS) land-use map prepared with airphotos and the same classification scheme showed average first-, second-, and third-order accuracies of 76.3, 58.4, and 33.0 percent, respectively.
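
    A first-order Markov land-use trend model of the kind described above projects class fractions forward by repeated multiplication with a transition matrix estimated from the two photointerpreted dates. A minimal sketch with an illustrative three-class matrix (the values are placeholders, not the study's estimates):

```python
import numpy as np

def project_land_use(class_fractions, transition_matrix, steps=1):
    """Project land-use class fractions forward with a first-order Markov
    model: entry (i, j) of the transition matrix is the probability that
    a cell in class i at one date is in class j at the next date."""
    state = np.asarray(class_fractions, dtype=float)
    for _ in range(steps):
        state = state @ transition_matrix
    return state

# Illustrative 3-class example (urban, agriculture, open land).
P = np.array([[0.97, 0.02, 0.01],
              [0.10, 0.85, 0.05],
              [0.08, 0.07, 0.85]])
print(project_land_use([0.20, 0.50, 0.30], P, steps=1))
```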

  13. Parallel ICA and its hardware implementation in hyperspectral image analysis

    NASA Astrophysics Data System (ADS)

    Du, Hongtao; Qi, Hairong; Peterson, Gregory D.

    2004-04-01

    Advances in hyperspectral imaging have dramatically boosted remote sensing applications by providing abundant information across hundreds of contiguous spectral bands. However, the high volume of information also imposes an excessive computational burden. Since most materials have distinctive characteristics only in certain bands, much of this information is redundant. This property of hyperspectral images has motivated many researchers to study various dimensionality reduction algorithms, including Projection Pursuit (PP), Principal Component Analysis (PCA), the wavelet transform, and Independent Component Analysis (ICA), of which ICA is one of the most popular techniques. It searches for a linear or nonlinear transformation that minimizes the statistical dependence between spectral bands. Through this process, ICA can eliminate superfluous information while retaining useful information, given only the observed hyperspectral images. One hurdle in applying ICA to hyperspectral image (HSI) analysis, however, is its long computation time, especially for high-volume hyperspectral data sets. Even the most efficient method, FastICA, is very time-consuming. In this paper, we present a parallel ICA (pICA) algorithm derived from FastICA. During the unmixing process, pICA divides the estimation of the weight matrix into sub-processes that can be conducted in parallel on multiple processors. The decorrelation process is decomposed into internal decorrelation and external decorrelation, which perform weight vector decorrelations within individual processors and between cooperative processors, respectively. To further improve the performance of pICA, we seek hardware solutions for its implementation. To date, there have been very few hardware designs for ICA-related processes because of their complicated and iterative computation. This paper discusses the capacity limitations of FPGA implementations of pICA for HSI analysis. An Application-Specific Integrated Circuit (ASIC) synthesis is designed for pICA-based dimensionality reduction in HSI analysis. The pICA design is implemented using standard-height cells and targeted at the TSMC 0.18-micron process. During the synthesis procedure, three ICA-related reconfigurable components are developed for reuse and retargeting. Preliminary results show that the standard-height-cell-based ASIC synthesis provides an effective solution for pICA and ICA-related processes in HSI analysis.
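
    The weight-vector estimation and decorrelation that pICA distributes across processors can be illustrated with a minimal FastICA deflation sketch. This is not the authors' pICA algorithm or its hardware design; it only shows the per-vector update and the Gram-Schmidt-style decorrelation that the internal/external scheme would partition.

      import numpy as np

      def fastica_deflation(X, n_components, max_iter=200, tol=1e-6):
          """Minimal FastICA with deflationary Gram-Schmidt decorrelation.

          X: (bands, samples) array, assumed already centered and whitened.
          This is the per-vector update and decorrelation step that a
          pICA-style scheme would distribute; it is not the paper's code.
          """
          n_bands, _ = X.shape
          W = np.zeros((n_components, n_bands))
          for k in range(n_components):
              w = np.random.randn(n_bands)
              w /= np.linalg.norm(w)
              for _ in range(max_iter):
                  u = w @ X                                   # projections
                  g, g_prime = np.tanh(u), 1.0 - np.tanh(u) ** 2
                  w_new = (X * g).mean(axis=1) - g_prime.mean() * w
                  # decorrelation: orthogonalize against earlier weight vectors
                  w_new -= W[:k].T @ (W[:k] @ w_new)
                  w_new /= np.linalg.norm(w_new)
                  converged = abs(abs(w_new @ w) - 1.0) < tol
                  w = w_new
                  if converged:
                      break
              W[k] = w
          return W      # rows are unmixing vectors; W @ X gives the components

      # usage: S = fastica_deflation(X_whitened, n_components=10) @ X_whitened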

  14. Landsat 8 on-orbit characterization and calibration system

    USGS Publications Warehouse

    Micijevic, Esad; Morfitt, Ron; Choate, Michael J.

    2011-01-01

    The Landsat Data Continuity Mission (LDCM) is planning to launch the Landsat 8 satellite in December 2012, which continues an uninterrupted record of consistently calibrated globally acquired multispectral images of the Earth started in 1972. The satellite will carry two imaging sensors: the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS). The OLI will provide visible, near-infrared and short-wave infrared data in nine spectral bands while the TIRS will acquire thermal infrared data in two bands. Both sensors have a pushbroom design and consequently, each has a large number of detectors to be characterized. Image and calibration data downlinked from the satellite will be processed by the U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center using the Landsat 8 Image Assessment System (IAS), a component of the Ground System. In addition to extracting statistics from all Earth images acquired, the IAS will process and trend results from analysis of special calibration acquisitions, such as solar diffuser, lunar, shutter, night, lamp and blackbody data, and preselected calibration sites. The trended data will be systematically processed and analyzed, and calibration and characterization parameters will be updated using both automatic and customized manual tools. This paper describes the analysis tools and the system developed to monitor and characterize on-orbit performance and calibrate the Landsat 8 sensors and image data products.

  15. Improvement of passive THz camera images

    NASA Astrophysics Data System (ADS)

    Kowalski, Marcin; Piszczek, Marek; Palka, Norbert; Szustakowski, Mieczyslaw

    2012-10-01

    Terahertz technology is one of the emerging technologies that has the potential to change our lives. There are many attractive applications in fields such as security, astronomy, biology, and medicine. Until recent years, terahertz (THz) waves were an undiscovered, or more importantly, an unexploited region of the electromagnetic spectrum, owing to difficulties in generating and detecting THz waves. Recent advances in hardware technology have started to open up the field to new applications such as THz imaging. THz waves can penetrate various materials; however, automated processing of THz images can be challenging. The THz frequency band is especially well suited for clothes penetration because this radiation has no harmful ionizing effects and is therefore safe for human beings. Strong technological development in this band has produced a few interesting devices. Even though the development of THz cameras is an emerging topic, commercially available passive cameras still offer images of poor quality, mainly because of their low resolution and low detector sensitivity. Therefore, THz image processing is a challenging and urgent topic. Digital THz image processing is a promising and cost-effective approach for demanding security and defense applications. In this article we demonstrate the results of image quality enhancement and image fusion applied to images captured by a commercially available passive THz camera by means of various combined methods. Our research is focused on the detection of dangerous objects - guns, knives, and bombs - hidden under some common types of clothing.

  16. Programmable hyperspectral image mapper with on-array processing

    NASA Technical Reports Server (NTRS)

    Cutts, James A. (Inventor)

    1995-01-01

    A hyperspectral imager includes a focal plane having an array of spaced image recording pixels receiving light from a scene moving relative to the focal plane in a longitudinal direction, the recording pixels being transportable at a controllable rate in the focal plane in the longitudinal direction; an electronic shutter for adjusting an exposure time of the focal plane, whereby recording pixels in an active area of the focal plane are removed therefrom and stored upon expiration of the exposure time; an electronic spectral filter for selecting a spectral band of light received by the focal plane from the scene during each exposure time; and an electronic controller connected to the focal plane, to the electronic shutter, and to the electronic spectral filter for controlling (1) the controllable rate at which the recording pixels are transported in the longitudinal direction, (2) the exposure time, and (3) the spectral band, so as to record a selected portion of the scene through M spectral bands with a respective exposure time t_q for each respective spectral band q.

  17. THE SLOAN DIGITAL SKY SURVEY STRIPE 82 IMAGING DATA: DEPTH-OPTIMIZED CO-ADDS OVER 300 deg² IN FIVE FILTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Linhua; Fan, Xiaohui; McGreer, Ian D.

    We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ∼300 deg² on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.2 arcsec diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ∼1'' in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ∼90 deg² of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).
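
    The weighting scheme sketched below is a generic inverse-variance-style co-addition that produces a science image plus the per-pixel weight image mentioned above; the exact weight formula used for these co-adds is not given here, so the scalar weight is an assumption.

      import numpy as np

      def coadd(frames, seeing_fwhm, transparency, sky_sigma):
          """Weighted co-addition of registered single-epoch frames.

          frames:       list of 2-D arrays on a common pixel grid (already
                        sky-subtracted and scaled to a common zeropoint)
          seeing_fwhm:  per-frame seeing FWHM (arcsec)
          transparency: per-frame relative sky transparency (1.0 = photometric)
          sky_sigma:    per-frame background noise (counts)

          Returns the co-added image and a weight image. The scalar weight
          transparency^2 / (seeing^2 * sigma^2) is a common inverse-variance-style
          choice, used here only as an illustration.
          """
          stack = np.zeros_like(frames[0], dtype=float)
          weight = np.zeros_like(frames[0], dtype=float)
          for img, fwhm, t, sig in zip(frames, seeing_fwhm, transparency, sky_sigma):
              w = t**2 / (fwhm**2 * sig**2)      # per-frame scalar weight (assumed form)
              good = np.isfinite(img)            # mask bad pixels per frame
              stack[good] += w * img[good]
              weight[good] += w
          coadded = np.where(weight > 0, stack / np.maximum(weight, 1e-12), 0.0)
          return coadded, weight

      # usage: coadded, wmap = coadd(frames, seeing, transp, sky_rms)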

  18. The Sloan Digital Sky Survey Stripe 82 Imaging Data: Depth-Optimized Co-adds Over 300 deg² in Five Filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Linhua; Fan, Xiaohui; Bian, Fuyan

    We present and release co-added images of the Sloan Digital Sky Survey (SDSS) Stripe 82. Stripe 82 covers an area of ~300 deg² on the celestial equator, and has been repeatedly scanned 70-90 times in the ugriz bands by the SDSS imaging survey. By making use of all available data in the SDSS archive, our co-added images are optimized for depth. Input single-epoch frames were properly processed and weighted based on seeing, sky transparency, and background noise before co-addition. The resultant products are co-added science images and their associated weight images that record relative weights at individual pixels. The depths of the co-adds, measured as the 5σ detection limits of the aperture (3.2 arcsec diameter) magnitudes for point sources, are roughly 23.9, 25.1, 24.6, 24.1, and 22.8 AB magnitudes in the five bands, respectively. They are 1.9-2.2 mag deeper than the best SDSS single-epoch data. The co-added images have good image quality, with an average point-spread function FWHM of ~1'' in the r, i, and z bands. We also release object catalogs that were made with SExtractor. These co-added products have many potential uses for studies of galaxies, quasars, and Galactic structure. We further present and release near-IR J-band images that cover ~90 deg² of Stripe 82. These images were obtained using the NEWFIRM camera on the NOAO 4 m Mayall telescope, and have a depth of about 20.0-20.5 Vega magnitudes (also 5σ detection limits for point sources).

  19. Electrophysiological spatiotemporal dynamics during implicit visual threat processing.

    PubMed

    DeLaRosa, Bambi L; Spence, Jeffrey S; Shakal, Scott K M; Motes, Michael A; Calley, Clifford S; Calley, Virginia I; Hart, John; Kraut, Michael A

    2014-11-01

    Numerous studies have found evidence for corticolimbic theta band electroencephalographic (EEG) oscillations in the neural processing of visual stimuli perceived as threatening. However, varying temporal and topographical patterns have emerged, possibly due to varying arousal levels of the stimuli. In addition, recent studies suggest neural oscillations in delta, theta, alpha, and beta-band frequencies play a functional role in information processing in the brain. This study implemented a data-driven PCA based analysis investigating the spatiotemporal dynamics of electroencephalographic delta, theta, alpha, and beta-band frequencies during an implicit visual threat processing task. While controlling for the arousal dimension (the intensity of emotional activation), we found several spatial and temporal differences for threatening compared to nonthreatening visual images. We detected an early posterior increase in theta power followed by a later frontal increase in theta power, greatest for the threatening condition. There was also a consistent left lateralized beta desynchronization for the threatening condition. Our results provide support for a dynamic corticolimbic network, with theta and beta band activity indexing processes pivotal in visual threat processing. Published by Elsevier Inc.

  20. GeoSAR: A Radar Terrain Mapping System for the New Millennium

    NASA Technical Reports Server (NTRS)

    Thompson, Thomas; vanZyl, Jakob; Hensley, Scott; Reis, James; Munjy, Riadh; Burton, John; Yoha, Robert

    2000-01-01

    GeoSAR (Geographic Synthetic Aperture Radar) is a new 3-year effort to build a unique, dual-frequency, airborne interferometric SAR for terrain mapping. It is being pursued by a consortium of the Jet Propulsion Laboratory (JPL), Calgis, Inc., and the California Department of Conservation. The airborne portion of the system will operate on a Calgis Gulfstream-II aircraft outfitted with P- and X-band interferometric SARs. The ground portion of the system will be a suite of Flight Planning Software, an IFSAR Processor, and a Radar-GIS Workstation. The airborne P-band and X-band radars will be constructed by JPL with the goal of obtaining foliage penetration at the longer P-band wavelengths. The P-band and X-band radars will operate at frequencies of 350 MHz and 9.71 GHz with bandwidths of either 80 or 160 MHz. The airborne radars will be complemented with an airborne laser system for measuring antenna positions. Aircraft flight lines and radar operating instructions will be computed with the Flight Planning Software. The ground processing will be a two-step process. First, the raw radar data will be processed into radar images and interferometrically derived Digital Elevation Models (DEMs). Second, these radar images and DEMs will be processed with the Radar-GIS Workstation, which performs processes such as projection transformations, registration, geometric adjustment, mosaicking, merging, and database management. JPL will construct the IFSAR Processor and Calgis, Inc. will construct the Radar-GIS Workstation. The GeoSAR project got underway in November 1996 with the goal of having the radars and laser systems fully integrated onto the Calgis Gulfstream-II aircraft in early 1999. Engineering checkout and calibration-characterization flights will then be conducted through November 1999. The system will be completed at the end of 1999 and ready for routine operations in the year 2000.

  1. Space Radar Image of Flevoland, Netherlands

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a three-frequency false color image of Flevoland, The Netherlands, centered at 52.4 degrees north latitude, 5.4 degrees east longitude. This image was acquired by the Spaceborne Imaging Radar-C and X-Band Synthetic Aperture Radar (SIR-C/X-SAR) aboard space shuttle Endeavour on April 14, 1994. It was produced by combining data from the X-band, C-band and L-band radars. The area shown is approximately 25 kilometers by 28 kilometers (15-1/2 by 17-1/2 miles). Flevoland, which fills the lower two-thirds of the image, is a very flat area that is made up of reclaimed land that is used for agriculture and forestry. At the top of the image, across the canal from Flevoland, is an older forest shown in red; the city of Harderwijk is shown in white on the shore of the canal. At this time of the year, the agricultural fields are bare soil, and they show up in this image in blue. The variations in the brightness of the blue areas correspond to variations in surface roughness. The dark blue areas are water and the small dots in the canal are boats. This SIR-C/X-SAR supersite is being used for both calibration and agricultural studies. Several soil and crop ground-truth studies will be conducted during the shuttle flight. In addition, about 10 calibration devices and 10 corner reflectors have been deployed to calibrate and monitor the radar signal. One of these transponders can be seen as a bright star in the lower right quadrant of the image. This false-color image was made using L-band total power in the red channel, C-band total power in the green channel, and X-band VV polarization in the blue channel. Spaceborne Imaging Radar-C and X-Band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (DLR), the major partner in science, operations and data processing of X-SAR.
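
    A minimal sketch of building such a false-color composite from three co-registered, calibrated band images (L-band total power as red, C-band total power as green, X-band VV as blue); the percentile stretch is an assumed, generic display choice.

      import numpy as np

      def stretch(band, lo_pct=2, hi_pct=98):
          """Linearly stretch a band to 0-1 between the given percentiles (assumed choice)."""
          lo, hi = np.percentile(band, [lo_pct, hi_pct])
          return np.clip((band - lo) / max(hi - lo, 1e-12), 0.0, 1.0)

      def false_color(l_total, c_total, x_vv):
          """Stack co-registered radar channels into an RGB false-color composite."""
          return np.dstack([stretch(l_total), stretch(c_total), stretch(x_vv)])

      # usage: rgb = false_color(l_band_power, c_band_power, x_band_vv)
      # rgb has shape (rows, cols, 3) with values in [0, 1], ready for display.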

  2. Applications of spectral band adjustment factors (SBAF) for cross-calibration

    USGS Publications Warehouse

    Chander, Gyanesh

    2013-01-01

    To monitor land surface processes over a wide range of temporal and spatial scales, it is critical to have coordinated observations of the Earth's surface acquired from multiple spaceborne imaging sensors. However, an integrated global observation framework requires an understanding of how land surface processes are seen differently by various sensors. This is particularly true for sensors acquiring data in spectral bands whose relative spectral responses (RSRs) are not similar and thus may produce different results while observing the same target. The intrinsic offsets between two sensors caused by RSR mismatches can be compensated by using a spectral band adjustment factor (SBAF), which takes into account the spectral profile of the target and the RSRs of the two sensors. The motivation of this work comes from the need to compensate for the spectral response differences of multispectral sensors in order to provide a more accurate cross-calibration between the sensors. In this paper, radiometric cross-calibration of the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and the Terra Moderate Resolution Imaging Spectroradiometer (MODIS) sensors was performed using near-simultaneous observations over the Libya 4 pseudoinvariant calibration site in the visible and near-infrared spectral range. The RSR differences of the analogous ETM+ and MODIS spectral bands provide the opportunity to explore, understand, quantify, and compensate for the measurement differences between these two sensors. The cross-calibration was initially performed by comparing the top-of-atmosphere (TOA) reflectances between the two sensors over their lifetimes. The average percent differences in the long-term trends ranged from -5% to +6%. The RSR-compensated ETM+ TOA reflectance (ETM+*) measurements were then found to agree with MODIS TOA reflectance to within 5% for all bands when Earth Observing-1 Hyperion hyperspectral data were used to produce the SBAFs. These differences were later reduced to within 1% for all bands (except band 2) by using Environmental Satellite Scanning Imaging Absorption Spectrometer for Atmospheric Cartography hyperspectral data to produce the SBAFs.
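
    The SBAF concept can be summarized as the ratio of band-averaged reflectances that two sensors' relative spectral responses would produce over the same hyperspectral target profile. A minimal sketch, with the wavelength grid and the numerator/denominator convention taken as assumptions:

      import numpy as np

      def band_averaged_reflectance(wavelength, target_reflectance, rsr):
          """RSR-weighted mean of a hyperspectral reflectance profile over one band."""
          return np.trapz(target_reflectance * rsr, wavelength) / np.trapz(rsr, wavelength)

      def sbaf(wavelength, target_reflectance, rsr_reference, rsr_calibrated):
          """Spectral band adjustment factor between two analogous bands.

          Convention assumed here: multiply the calibrated sensor's TOA reflectance
          by this factor to simulate what the reference sensor would have measured.
          """
          ref = band_averaged_reflectance(wavelength, target_reflectance, rsr_reference)
          cal = band_averaged_reflectance(wavelength, target_reflectance, rsr_calibrated)
          return ref / cal

      # usage with a hyperspectral profile (e.g., over a pseudoinvariant site)
      # resampled to a common wavelength grid with both sensors' RSR curves:
      # factor = sbaf(wl_nm, rho_target, rsr_sensor_a, rsr_sensor_b)
      # adjusted = toa_reflectance_b * factor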

  3. Multispectral remote sensing from unmanned aircraft: image processing workflows and applications for rangeland environments

    USDA-ARS?s Scientific Manuscript database

    Using unmanned aircraft systems (UAS) as remote sensing platforms offers the unique ability for repeated deployment for acquisition of high temporal resolution data at very high spatial resolution. Most image acquisitions from UAS have been in the visible bands, while multispectral remote sensing ap...

  4. Vector coding of wavelet-transformed images

    NASA Astrophysics Data System (ADS)

    Zhou, Jun; Zhi, Cheng; Zhou, Yuanhua

    1998-09-01

    The wavelet transform, a relatively new tool in signal processing, has gained broad recognition. Using the wavelet transform, an image can be decomposed into octave-divided frequency bands with specific orientations, which aligns well with the properties of the human visual system. In this paper, we discuss a classified vector quantization method for multiresolution-represented images.
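
    A single-level 2-D Haar decomposition illustrates the octave-divided, orientation-selective subbands referred to above; the Haar filter is simply the easiest example and is not necessarily the wavelet used by the authors.

      import numpy as np

      def haar_decompose(image):
          """One level of a 2-D Haar wavelet transform.

          Returns the approximation (low-pass) subband and three detail subbands,
          each at half the input resolution. Assumes even image dimensions.
          """
          img = image.astype(float)
          a = img[0::2, 0::2]; b = img[0::2, 1::2]
          c = img[1::2, 0::2]; d = img[1::2, 1::2]
          ll = (a + b + c + d) / 4.0      # low-pass in both directions
          d1 = (a - b + c - d) / 4.0      # detail: differences across columns
          d2 = (a + b - c - d) / 4.0      # detail: differences across rows
          d3 = (a - b - c + d) / 4.0      # detail: diagonal differences
          return ll, d1, d2, d3

      # Repeating the decomposition on the low-pass subband yields the octave-band
      # multiresolution pyramid whose subbands would then be classified and
      # vector quantized.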

  5. Sharpening of the VNIR and SWIR Bands of the Wide Band Spectral Imager Onboard Tiangong-II Imagery Using the Selected Bands

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Li, X.; Liu, G.; Huang, C.; Li, H.; Guan, X.

    2018-04-01

    The Tiangong-II space lab was launched from the Jiuquan Satellite Launch Center of China on September 15, 2016. The Wide Band Spectral Imager (WBSI) onboard Tiangong-II has 14 visible and near-infrared (VNIR) spectral bands covering 403-990 nm and two shortwave infrared (SWIR) bands covering 1230-1250 nm and 1628-1652 nm, respectively. In this paper, a band-selection approach is proposed that seeks the closest spectral similarity between the VNIR bands (100 m spatial resolution) and the SWIR bands (200 m spatial resolution). The Gram-Schmidt (GS) sharpening technique embedded in the ENVI software is evaluated using four different choices of low-resolution pan band. The experimental results indicate that when the VNIR band with the higher correlation coefficient (CC) with a raw SWIR band is selected, more texture information is injected into the corresponding sharpened SWIR band image, while the other sharpened SWIR band image preserves spectral and texture characteristics similar to the raw SWIR band image.
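
    A minimal sketch of the band-selection idea: compute the correlation coefficient (CC) between each VNIR band, degraded to the SWIR grid, and the raw SWIR band, and pick the most correlated band as the sharpening input. Block-average downsampling and an exact 2:1 resolution ratio are simplifying assumptions.

      import numpy as np

      def block_mean(band, factor=2):
          """Downsample a 2-D band by block averaging (e.g., 100 m VNIR -> 200 m SWIR grid)."""
          h, w = band.shape
          h, w = h - h % factor, w - w % factor
          return band[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

      def select_sharpening_band(vnir_cube, swir_band, factor=2):
          """Return the index of the VNIR band with the highest correlation coefficient
          against the SWIR band, evaluated on the coarser SWIR grid."""
          target = swir_band.ravel()
          ccs = []
          for b in range(vnir_cube.shape[0]):
              low = block_mean(vnir_cube[b], factor).ravel()
              ccs.append(np.corrcoef(low, target)[0, 1])
          return int(np.argmax(ccs)), ccs

      # usage: best, ccs = select_sharpening_band(vnir, swir_band)
      # The selected VNIR band would then serve as the high-resolution input
      # to a GS-style sharpening of the SWIR band.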

  6. North Central Thailand

    NASA Technical Reports Server (NTRS)

    1998-01-01

    This radar image shows the dramatic landscape in the Phang Hoei Range of north central Thailand, about 40 kilometers (25 miles) northeast of the city of Lom Sak. The plateau, shown in green to the left of center, is the area of Phu Kradung National Park. This plateau is a remnant of a once-larger plateau, another portion of which is seen along the right side of the image. The plateaus have been dissected by water erosion over thousands of years. Forest areas appear green on the image; agricultural areas and settlements appear as red and blue. North is toward the lower right. The area shown is 38 by 50 kilometers (24 by 31 miles) and is centered at 16.96 degrees north latitude, 101.67 degrees east longitude. Colors are assigned to different radar frequencies and polarizations as follows: red is L-band horizontally transmitted and horizontally received; green is L-band horizontally transmitted and vertically received; blue is C-band horizontally transmitted and vertically received. The image was acquired by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) on October 3, 1994, when it flew aboard the space shuttle Endeavour. SIR-C/X-SAR is a joint mission of the U.S., German, and Italian space agencies.

    Spaceborne Imaging Radar-C and X-Band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.v.(DLR), the major partner in science, operations, and data processing of X-SAR.

  7. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Nalbandon mineral district in Afghanistan: Chapter L in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Nalbandon mineral district, which has lead and zinc deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2007, 2008, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (41 for Nalbandon) and the WGS84 datum. The final image mosaics were subdivided into ten overlapping tiles or quadrants because of the large size of the target area. The ten image tiles (or quadrants) for the Nalbandon area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Nalbandon study area, two subareas were designated for detailed field investigations (that is, the Nalbandon District and Gharghananaw-Gawmazar subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
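
    The overlap-based radiometric adjustment described above can be sketched as a gain/offset fit between co-registered pixels of the standard image and each new image; the exact regression form used for these mosaics is not stated here, so this is only an illustration.

      import numpy as np

      def fit_overlap_adjustment(standard_band, other_band, overlap_mask):
          """Fit gain/offset so the other image's band matches the standard image
          over their overlap, via linear least squares on co-registered pixels."""
          x = other_band[overlap_mask].ravel()
          y = standard_band[overlap_mask].ravel()
          gain, offset = np.polyfit(x, y, 1)     # y ~ gain * x + offset
          return gain, offset

      def apply_adjustment(band, gain, offset):
          """Adjust an entire band with the fitted relationship before mosaicking."""
          return gain * band + offset

      # usage: g, o = fit_overlap_adjustment(std_red, other_red, overlap)
      #        other_red_adj = apply_adjustment(other_red, g, o)
      # Repeating this per band ties each new image to the standard (highest-sun,
      # least-haze) image before it is added to the mosaic.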

  8. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Zarkashan mineral district in Afghanistan: Chapter G in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Zarkashan mineral district, which has copper and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Zarkashan) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the Zarkashan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Zarkashan study area, three subareas were designated for detailed field investigations (that is, the Mine Area, Bolo Gold Prospect, and Luman-Tamaki Gold Prospect subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
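
    The local-area histogram stretch applied to these mosaics (Davis, 2007) operates on all picture elements within a fixed radius. The sketch below is a generic local min-max stretch over a circular neighborhood and should be read as an approximation of that idea, not the published algorithm; the radius in pixels is an assumed parameter.

      import numpy as np
      from scipy import ndimage

      def local_area_stretch(band, radius_px):
          """Stretch each pixel relative to the min/max of all pixels within a
          circular neighborhood (a simplified stand-in for the local-area stretch
          described in the text; the published algorithm may differ)."""
          yy, xx = np.mgrid[-radius_px:radius_px + 1, -radius_px:radius_px + 1]
          footprint = (xx**2 + yy**2) <= radius_px**2
          local_min = ndimage.minimum_filter(band, footprint=footprint)
          local_max = ndimage.maximum_filter(band, footprint=footprint)
          return np.clip((band - local_min) / np.maximum(local_max - local_min, 1e-12), 0.0, 1.0)

      # usage: for a 2.5-m mosaic, a 315-m radius corresponds to roughly 126 pixels:
      # stretched = local_area_stretch(red_band, radius_px=126)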

  9. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Panjsher Valley mineral district in Afghanistan: Chapter M in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Panjsher Valley mineral district, which has emerald and silver-iron deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2009, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Panjsher Valley) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the Panjsher Valley area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Panjsher Valley study area, two subareas were designated for detailed field investigations (that is, the Emerald and Silver-Iron subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.

  10. An enhanced narrow-band imaging method for the microvessel detection

    NASA Astrophysics Data System (ADS)

    Yu, Feng; Song, Enmin; Liu, Hong; Wan, Youming; Zhu, Jun; Hung, Chih-Cheng

    2018-02-01

    A medical endoscope system combined with narrow-band imaging (NBI) has been shown to be a superior diagnostic tool for early cancer detection. NBI can reveal the morphologic changes of microvessels in superficial cancers. To improve the conspicuousness of microvessel texture, we propose an enhanced NBI method for endoscopic images. To obtain more conspicuous narrow-band images, we use an edge operator to extract the edge information of the narrow-band blue and green images and assign a weight to the extracted edges. The weighted edges are then fused with the narrow-band blue and green images. Finally, the displayed endoscopic images are reconstructed from the enhanced narrow-band images. In addition, we evaluate the performance of the enhanced narrow-band images with different edge operators. Experimental results indicate that the Sobel and Canny operators achieve the best performance. Compared with the traditional NBI method from Olympus, our proposed method yields more conspicuous microvessel texture.
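
    A minimal sketch of the enhancement pipeline described above: extract edges from the narrow-band blue and green images, weight them, fuse them back, and reconstruct a display image. The Sobel operator, the weight value, and the display channel mapping are illustrative assumptions.

      import numpy as np
      from scipy import ndimage

      def sobel_edges(img):
          """Gradient magnitude from horizontal and vertical Sobel responses."""
          gx = ndimage.sobel(img, axis=1)
          gy = ndimage.sobel(img, axis=0)
          return np.hypot(gx, gy)

      def enhance_nbi(blue_nb, green_nb, weight=0.3):
          """Fuse weighted edge maps back into the narrow-band blue and green images.

          weight is an assumed edge-emphasis factor; the paper evaluates several
          operators and weightings, so treat this as a placeholder.
          """
          blue_e = blue_nb + weight * sobel_edges(blue_nb)
          green_e = green_nb + weight * sobel_edges(green_nb)
          clip = lambda a: np.clip(a, 0, 255)
          # Assumed display reconstruction: green band drives the red channel,
          # blue band drives the green and blue channels.
          return np.dstack([clip(green_e), clip(blue_e), clip(blue_e)]).astype(np.uint8)

      # usage: display = enhance_nbi(blue_nb.astype(float), green_nb.astype(float))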

  11. Latitudinal Variations In Vertical Cloud Structure Of Jupiter As Determined By Ground- based Observation With Multispectral Imaging

    NASA Astrophysics Data System (ADS)

    Sato, T.; Kasaba, Y.; Takahashi, Y.; Murata, I.; Uno, T.; Tokimasa, N.; Sakamoto, M.

    2008-12-01

    We conducted ground-based observations of Jupiter with a liquid crystal tunable filter (LCTF) and an EM-CCD camera in two methane absorption bands (700-757 nm and 872-950 nm at 3-nm steps: 47 wavelengths in total) to derive Jupiter's vertical cloud structure in detail. The 2-meter reflector telescope at the Nishi-Harima Astronomical Observatory in Japan was used for our observations on 26-30 May 2008. After a series of image-processing steps (composition of high-quality images at each wavelength and geometric calibration), we converted the observed intensity at each pixel to absolute reflectivity using a standard star. As a result, we acquired Jupiter data cubes with high spatial resolution (about 1") and narrow-band imaging (typically 7 nm) in each methane absorption band by superimposing 30 Jupiter images obtained with short exposure times (50 ms per image). These data sets enable us to probe different altitudes of Jupiter from the 100-mbar level down to the 1-bar level with higher vertical resolution than is possible with conventional interference filters. To interpret the observed center-to-limb profiles, we developed a radiative transfer code based on an adding-doubling algorithm to treat multiple scattering of sunlight, and extracted information on aerosol altitudes and optical properties using a two-cloud model. First, we fit 5 different profiles simultaneously in the continuum data (745-757 nm) to retrieve the optical thickness of the haze and the single-scattering albedo of the cloud. Second, we fit 15 different profiles around the 727-nm methane absorption band and 13 different profiles around the 890-nm methane absorption band to retrieve the aerosol altitudes and the optical thickness of the cloud. In this presentation, we present the results of these modeling simulations and discuss the latitudinal variations of Jupiter's vertical cloud structure.

  12. Wide field-of-view dual-band multispectral muzzle flash detection

    NASA Astrophysics Data System (ADS)

    Montoya, J.; Melchor, J.; Spiliotis, P.; Taplin, L.

    2013-06-01

    Sensor technologies are undergoing revolutionary advances, as seen in the rapid growth of multispectral methodologies. Increases in spatial, spectral, and temporal resolution, and in breadth of spectral coverage, make feasible sensors that function with unprecedented performance. A system was developed that addresses many of the key hardware requirements for a practical dual-band multispectral acquisition system, including a wide field of view and the spectral/temporal shift between the dual bands. The system was designed using a novel dichroic beam splitter and dual band-pass filter configuration that creates two side-by-side images of a scene on a single sensor. A high-speed CMOS sensor was used to simultaneously capture data from the entire scene in both spectral bands using a short focal-length lens that provided a wide field of view. The beam-splitter components were arranged such that the two images were maintained in optical alignment, and real-time intra-band processing could be carried out using only simple arithmetic on the image halves. An experiment was performed to characterize the system's limitations in addressing multispectral detection requirements; it demonstrated the system's low spectral variation across its wide field of view. This paper provides lessons learned on the general limitations of the key hardware components required for multispectral muzzle flash detection, using the system as a hardware example combined with simulated multispectral muzzle flash and background signatures.
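
    The "simple arithmetic on the image halves" can be sketched as splitting each frame into its two band images and thresholding a band difference to flag candidate flash pixels; the plain difference and the robust threshold below are assumptions, not the system's actual detection logic.

      import numpy as np

      def split_dual_band(frame):
          """Split a single sensor frame into its two side-by-side spectral-band images."""
          half = frame.shape[1] // 2
          return frame[:, :half].astype(float), frame[:, half:].astype(float)

      def flash_candidates(frame, k=5.0):
          """Flag pixels whose band-1 minus band-2 difference is anomalously high.

          k is an assumed threshold in units of the difference image's robust scale;
          a fielded detector would also use temporal cues across frames.
          """
          band1, band2 = split_dual_band(frame)
          diff = band1 - band2
          med = np.median(diff)
          mad = np.median(np.abs(diff - med)) + 1e-12
          return diff > med + k * 1.4826 * mad      # boolean candidate mask

      # usage: mask = flash_candidates(raw_frame)
      #        candidate_pixels = np.argwhere(mask)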

  13. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Dusar-Shaida mineral district in Afghanistan: Chapter I in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Dusar-Shaida mineral district, which has copper and tin deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (41 for Dusar-Shaida) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Dusar-Shaida area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Dusar-Shaida study area, three subareas were designated for detailed field investigations (that is, the Dahana-Misgaran, Kaftar VMS, and Shaida subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.

  14. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Kundalyan mineral district in Afghanistan: Chapter H in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kundalyan mineral district, which has porphyry copper and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Kundalyan) and the WGS84 datum. The final image mosaics were subdivided into five overlapping tiles or quadrants because of the large size of the target area. The five image tiles (or quadrants) for the Kundalyan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Kundalyan study area, three subareas were designated for detailed field investigations (that is, the Baghawan-Garangh, Charsu-Ghumbad, and Kunag Skarn subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
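
    The gain-and-offset adjustment of each image to the standard (reference) image can be illustrated with a short numerical sketch. The snippet below is a minimal numpy stand-in for the least-squares band-reflectance matching step described above; the function name, the use of an overlap mask, and the simple gain/offset model are assumptions made for illustration, not the USGS code.

```python
import numpy as np

def match_band_to_standard(band, standard, overlap_mask):
    """Fit band-reflectance correspondence over the overlap region
    (standard ~ gain * band + offset) by linear least squares and apply
    the fitted mapping to the whole incoming band so it matches the
    reflectance scale of the standard (mosaic seed) image."""
    x = band[overlap_mask].ravel().astype(float)       # image being adjusted
    y = standard[overlap_mask].ravel().astype(float)   # reference values
    A = np.column_stack([x, np.ones_like(x)])          # design matrix [x, 1]
    (gain, offset), *_ = np.linalg.lstsq(A, y, rcond=None)
    return gain * band + offset, (gain, offset)

# Hypothetical use: 'standard' is the highest-sun, least-hazy image that
# seeds the mosaic; 'incoming' is the next overlapping image, adjusted band by band.
# adjusted, coeffs = match_band_to_standard(incoming, standard, overlap_mask)
```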

  15. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Badakhshan mineral district in Afghanistan: Chapter F in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Badakhshan mineral district, which has gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA,2007,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Badakhshan) and the WGS84 datum. The final image mosaics were subdivided into six overlapping tiles or quadrants because of the large size of the target area. The six image tiles (or quadrants) for the Badakhshan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Badakhshan study area, three subareas were designated for detailed field investigations (that is, the Bharak, Fayz-Abad, and Ragh subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
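
    The local-area histogram stretch lends itself to a compact sketch. The snippet below is a generic stand-in, not the Davis (2007) algorithm itself: it stretches each pixel against percentile limits computed inside a circular neighborhood, and the 2nd/98th-percentile limits are assumptions chosen only for illustration.

```python
import numpy as np
from scipy import ndimage

def local_area_stretch(band, radius_pix, limits=(2, 98)):
    """Stretch every pixel against the value distribution inside a circular
    window, mimicking a local-area histogram stretch (illustrative only;
    this brute-force percentile filter is slow for large windows)."""
    yy, xx = np.mgrid[-radius_pix:radius_pix + 1, -radius_pix:radius_pix + 1]
    footprint = (xx**2 + yy**2) <= radius_pix**2       # circular neighborhood
    lo = ndimage.percentile_filter(band, limits[0], footprint=footprint)
    hi = ndimage.percentile_filter(band, limits[1], footprint=footprint)
    stretched = (band.astype(float) - lo) / np.maximum(hi - lo, 1e-6)
    return np.clip(stretched, 0.0, 1.0)

# For the 2.5-m mosaics described here, a 500-m radius corresponds to a
# radius of roughly 200 pixels (an inference from the stated pixel size).
```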

  16. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Aynak mineral district in Afghanistan: Chapter E in Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.

    2012-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Aynak mineral district, which has copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA,2008,2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Aynak) and the WGS84 datum. The final image mosaics were subdivided into four overlapping tiles or quadrants because of the large size of the target area. The four image tiles (or quadrants) for the Aynak area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Aynak study area, five subareas were designated for detailed field investigations (that is, the Bakhel-Charwaz, Kelaghey-Kakhay, Kharuti-Dawrankhel, Logar Valley, and Yagh-Darra/Gul-Darra subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
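
    The resolution-enhancement step can be illustrated with a generic ratio-based pan-sharpening sketch. The SPARKLE logic itself is documented in Davis (2006) and is not reproduced here; the Brovey-style scaling below is only a stand-in that shows how an upsampled multispectral mosaic is modulated by the higher-resolution panchromatic mosaic.

```python
import numpy as np

def brovey_sharpen(ms_upsampled, pan):
    """Brovey-style ratio sharpening: each multispectral band (already
    resampled to the panchromatic grid) is scaled by the ratio of the
    panchromatic image to the mean of the bands. Generic illustration only;
    this is not the SPARKLE algorithm used for the DS mosaics."""
    intensity = ms_upsampled.mean(axis=0)                    # (rows, cols)
    ratio = pan.astype(float) / np.maximum(intensity, 1e-6)
    return ms_upsampled.astype(float) * ratio[np.newaxis, :, :]

# ms_upsampled: (4, rows, cols) AVNIR bands resampled to the 2.5-m PRISM grid.
# pan: (rows, cols) PRISM mosaic coregistered to the same grid.
```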

  17. Local-area-enhanced, 2.5-meter resolution natural-color and color-infrared satellite-image mosaics of the Ghunday-Achin mineral district in Afghanistan, in Davis, P.A., compiler, Local-area-enhanced, high-resolution natural-color and color-infrared satellite-image mosaics of mineral districts in Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.; Davis, Philip A.

    2013-01-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ghunday-Achin mineral district, which has magnesite and talc deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Ghunday-Achin) and the WGS84 datum. The final image mosaics were subdivided into six overlapping tiles or quadrants because of the large size of the target area. The six image tiles (or quadrants) for the Ghunday-Achin area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Ghunday-Achin study area, two subareas were designated for detailed field investigations (that is, the Achin-Magnesite and Ghunday-Mamahel subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
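
    The conversion of raw digital numbers to reflectance values can be sketched generically. The function below uses the standard top-of-atmosphere reflectance relation; the gain, offset, and solar-irradiance inputs are placeholders, since the DS relies on the calibration and method of Davis (2006) rather than the values shown here.

```python
import numpy as np

def dn_to_toa_reflectance(dn, gain, offset, esun, sun_elev_deg, d_au=1.0):
    """Generic DN -> at-sensor radiance -> top-of-atmosphere reflectance.
    gain/offset are the sensor radiance calibration, esun the band's
    exo-atmospheric solar irradiance, d_au the Earth-Sun distance in AU;
    all are illustrative placeholders here."""
    radiance = gain * dn.astype(float) + offset              # W m-2 sr-1 um-1
    sun_zenith = np.deg2rad(90.0 - sun_elev_deg)
    return (np.pi * radiance * d_au**2) / (esun * np.cos(sun_zenith))
```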

  18. STS-68 radar image: Glasgow, Missouri

    NASA Image and Video Library

    1994-10-07

    STS068-S-055 (7 October 1994) --- This is a false-color L-band image of an area near Glasgow, Missouri, centered at about 39.2 degrees north latitude and 92.8 degrees west longitude. The image was acquired using the L-band radar channel, with the horizontally transmitted and received (HH) and horizontally transmitted and vertically received (HV) polarizations combined. The data were acquired by the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the Space Shuttle Endeavour on orbit 50 on October 3, 1994. The area shown is approximately 37 by 25 kilometers (23 by 16 miles). The radar data, coupled with pre-flood aerial photography and satellite data and post-flood topographic and field data, are being used to evaluate changes in landforms associated with levee breaks, where deposits formed during the widespread flooding in 1993 along the Missouri and Mississippi Rivers. The distinct radar scattering properties of farmland, sand fields and scoured areas will be used to inventory flood plains along the Missouri River and determine the processes by which these areas return to preflood conditions. The image shows one such levee break near Glasgow, Missouri. In the upper center of the radar image, below the bend of the river, is a region covered by several meters of sand, shown as dark regions. West (left) of the dark areas, a gap in the levee tree canopy shows the area where the levee failed. Radar data such as these can help scientists more accurately assess the potential for future flooding in this region and how that might impact surrounding communities. Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 centimeters), C-band (6 centimeters) and X-band (3 centimeters). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory (JPL). X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.v. (DLR), the major partner in science, operations and data processing of X-SAR. (P-44734)

  19. SNPP VIIRS Spectral Bands Co-Registration and Spatial Response Characterization

    NASA Technical Reports Server (NTRS)

    Lin, Guoqing; Tilton, James C.; Wolfe, Robert E.; Tewari, Krishna P.; Nishihama, Masahiro

    2013-01-01

    The Visible Infrared Imager Radiometer Suite (VIIRS) instrument onboard the Suomi National Polar-orbiting Partnership (SNPP) satellite was launched on 28 October 2011. The VIIRS has 5 imagery spectral bands (I-bands), 16 moderate resolution spectral bands (M-bands) and a panchromatic day/night band (DNB). Performance of the VIIRS spatial response and band-to-band co-registration (BBR) was measured through intensive pre-launch tests. These measurements were made in the non-aggregated zones near the start (or end) of scan for the I-bands and M-bands and for a limited number of aggregation modes for the DNB in order to test requirement compliance. This paper presents results based on a recently re-processed pre-launch test data. Sensor (detector) spatial impulse responses in the scan direction are parameterized in terms of ground dynamic field of view (GDFOV), horizontal spatial resolution (HSR), modulation transfer function (MTF), ensquared energy (EE) and integrated out-of-pixel (IOOP) spatial response. Results are presented for the non-aggregation, 2-sample and 3-sample aggregation zones for the I-bands and M-bands, and for a limited number of aggregation modes for the DNB. On-orbit GDFOVs measured for the 5 I-bands in the scan direction using a straight bridge are also presented. Band-to-band co-registration (BBR) is quantified using the prelaunch measured band-to-band offsets. These offsets may be expressed as fractions of horizontal sampling intervals (HSIs), detector spatial response parameters GDFOV or HSR. BBR bases on HSIs in the non-aggregation, 2-sample and 3-sample aggregation zones are presented. BBR matrices based on scan direction GDFOV and HSR are compared to the BBR matrix based on HSI in the non-aggregation zone. We demonstrate that BBR based on GDFOV is a better representation of footprint overlap and so this definition should be used in BBR requirement specifications. We propose that HSR not be used as the primary image quality indicator, since we show that it is neither an adequate representation of the size of sensor spatial response nor an adequate measure of imaging quality.

  20. Unsupervised Feature Selection Based on the Morisita Index for Hyperspectral Images

    NASA Astrophysics Data System (ADS)

    Golay, Jean; Kanevski, Mikhail

    2017-04-01

    Hyperspectral sensors are capable of acquiring images with hundreds of narrow and contiguous spectral bands. Compared with traditional multispectral imagery, the use of hyperspectral images allows better performance in discriminating between land-cover classes, but it also results in large redundancy and high computational data processing. To alleviate such issues, unsupervised feature selection techniques for redundancy minimization can be implemented. Their goal is to select the smallest subset of features (or bands) in such a way that all the information content of a data set is preserved as much as possible. The present research deals with the application to hyperspectral images of a recently introduced technique of unsupervised feature selection: the Morisita-Based filter for Redundancy Minimization (MBRM). MBRM is based on the (multipoint) Morisita index of clustering and on the Morisita estimator of Intrinsic Dimension (ID). The fundamental idea of the technique is to retain only the bands which contribute to increasing the ID of an image. In this way, redundant bands are disregarded, since they have no impact on the ID. Besides, MBRM has several advantages over benchmark techniques: in addition to its ability to deal with large data sets, it can capture highly-nonlinear dependences and its implementation is straightforward in any programming environment. Experimental results on freely available hyperspectral images show the good effectiveness of MBRM in remote sensing data processing. Comparisons with benchmark techniques are carried out and random forests are used to assess the performance of MBRM in reducing the data dimensionality without loss of relevant information. References [1] C. Traina Jr., A.J.M. Traina, L. Wu, C. Faloutsos, Fast feature selection using fractal dimension, in: Proceedings of the XV Brazilian Symposium on Databases, SBBD, pp. 158-171, 2000. [2] J. Golay, M. Kanevski, A new estimator of intrinsic dimension based on the multipoint Morisita index, Pattern Recognition 48(12), pp. 4070-4081, 2015. [3] J. Golay, M. Kanevski, Unsupervised feature selection based on the Morisita estimator of intrinsic dimension, arXiv:1608.05581, 2016.
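
    The core idea of MBRM, retaining only bands that raise the estimated intrinsic dimension, fits in a short forward-selection loop. In the sketch below the intrinsic-dimension estimator is a deliberately simple placeholder (a PCA participation ratio) rather than the multipoint Morisita estimator of Golay and Kanevski (2015); the function names and the tolerance are assumptions made for illustration.

```python
import numpy as np

def effective_dim_pca(X):
    """Placeholder intrinsic-dimension proxy: the participation ratio of the
    PCA eigenvalue spectrum. MBRM itself uses the Morisita ID estimator."""
    eig = np.linalg.svd(X - X.mean(axis=0), compute_uv=False) ** 2
    p = eig / eig.sum()
    return 1.0 / np.sum(p ** 2)

def greedy_band_selection(X, estimate_id=effective_dim_pca, tol=1e-3):
    """Forward selection in the spirit of MBRM: a band is kept only if adding
    it increases the estimated intrinsic dimension of the retained subset,
    so redundant bands (which leave the ID unchanged) are discarded."""
    selected, current_id = [0], 1.0         # seed with the first band
    for b in range(1, X.shape[1]):
        trial = selected + [b]
        new_id = estimate_id(X[:, trial])
        if new_id - current_id > tol:
            selected, current_id = trial, new_id
    return selected

# X is an (n_pixels, n_bands) matrix, e.g. a hyperspectral cube flattened
# over its spatial dimensions before feature (band) selection.
```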

  1. Computer processing of Mars Odyssey THEMIS IR imaging, MGS MOLA altimetry and Mars Express stereo imaging to locate Airy-0, the Mars prime meridian reference

    NASA Astrophysics Data System (ADS)

    Duxbury, Thomas; Neukum, Gerhard; Smith, David E.; Christensen, Philip; Neumann, Gregory; Albee, Arden; Caplinger, Michael; Seregina, N. V.; Kirk, Randolph L.

    The small crater Airy-0 was selected from Mariner 9 images to be the reference for the Mars prime meridian. Initial analyses were made in 2000 to tie Viking Orbiter and Mars Orbiter Camera images of Airy-0 to the evolving Mars Orbiter Laser Altimeter global digital terrain model to improve the location accuracy of Airy-0. Based upon this tie and radiometric tracking of landers/rovers from Earth, new expressions for the Mars spin axis direction, spin rate and prime meridian epoch value were produced to define the orientation of the Martian surface in inertial space over time. Now that the Mars Global Surveyor mission and the Mars Orbiter Laser Altimeter global digital terrain model are complete, a more exhaustive study has been performed to determine the location of Airy-0 relative to the global terrain grid. THEMIS IR image cubes of the Airy and Gale crater regions were tied to the global terrain grid using precision stereo photogrammetric image processing techniques. The Airy-0 location was determined to be within 50 meters of the currently defined IAU prime meridian, with this offset at the limiting absolute accuracy of the global terrain grid. Additional outputs of this study were a controlled multi-band photomosaic of Airy, precision alignment and geometric models of the ten THEMIS IR bands, and a controlled multi-band photomosaic of Gale crater used to validate the Mars Science Laboratory operational map products supporting its successful landing on Mars.

  2. Shuttle imaging radar-C science plan

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The Shuttle Imaging Radar-C (SIR-C) mission will yield new and advanced scientific studies of the Earth. SIR-C will be the first instrument to simultaneously acquire images at L-band and C-band with HH, VV, HV, or VH polarizations, as well as images of the phase difference between HH and VV polarizations. These data will be digitally encoded and recorded using onboard high-density digital tape recorders and will later be digitally processed into images using the JPL Advanced Digital SAR Processor. SIR-C geologic studies include cold-region geomorphology, fluvial geomorphology, rock weathering and erosional processes, tectonics and geologic boundaries, geobotany, and radar stereogrammetry. Hydrology investigations cover arid, humid, wetland, snow-covered, and high-latitude regions. Additionally, SIR-C will provide the data to identify and map vegetation types, interpret landscape patterns and processes, assess the biophysical properties of plant canopies, and determine the degree of radar penetration of plant canopies. In oceanography, SIR-C will provide the information necessary to: forecast ocean directional wave spectra; better understand internal wave-current interactions; study the relationship of ocean-bottom features to surface expressions and the correlation of wind signatures to radar backscatter; and detect current-system boundaries, oceanic fronts, and mesoscale eddies. And, as the first spaceborne SAR with multi-frequency, multipolarization imaging capabilities, whole new areas of glaciology will be opened for study when SIR-C is flown in a polar orbit.

  3. Estimation of forest biomass using remote sensing

    NASA Astrophysics Data System (ADS)

    Sarker, Md. Latifur Rahman

    Forest biomass estimation is essential for greenhouse gas inventories, terrestrial carbon accounting and climate change modelling studies. The availability of new SAR (C-band RADARSAT-2 and L-band PALSAR) and optical sensors (SPOT-5 and AVNIR-2) has opened new possibilities for biomass estimation because these new SAR sensors can provide data with varying polarizations, incidence angles and fine spatial resolutions. Therefore, this study investigated the potential of two SAR sensors (RADARSAT-2 with C-band and PALSAR with L-band) and two optical sensors (SPOT-5 and AVNIR-2) for the estimation of biomass in Hong Kong. Three common major processing steps were used for data processing, namely (i) spectral reflectance/intensity, (ii) texture measurements and (iii) polarization or band ratios of texture parameters. Simple linear and stepwise multiple regression models were developed to establish a relationship between the image parameters and the biomass of field plots. The results demonstrate the ineffectiveness of raw data. However, significant improvements in performance (r2) (RADARSAT-2=0.78; PALSAR=0.679; AVNIR-2=0.786; SPOT-5=0.854; AVNIR-2 + SPOT-5=0.911) were achieved using texture parameters of all sensors. The performances were further improved and very promising performances (r2) were obtained using the ratio of texture parameters (RADARSAT-2=0.91; PALSAR=0.823; PALSAR two-date=0.921; AVNIR-2=0.899; SPOT-5=0.916; AVNIR-2 + SPOT-5=0.939). These performances suggest four main contributions arising from this research, namely (i) biomass estimation can be significantly improved by using texture parameters, (ii) further improvements can be obtained using the ratio of texture parameters, (iii) multisensor texture parameters and their ratios have more potential than texture from a single sensor, and (iv) biomass can be accurately estimated far beyond the previously perceived saturation levels of SAR and optical data using texture parameters or the ratios of texture parameters. A further important contribution is that the fusion of SAR and optical images produced accuracies (r2) of 0.706 and 0.77 from the simple fusion and from texture processing of the fused image, respectively. Although these performances were not as attractive as the performances obtained from the other four processing steps, the wavelet fusion procedure improved the saturation level of the optical (AVNIR-2) image very significantly after fusion with the SAR image. Keywords: biomass, climate change, SAR, optical, multisensors, RADARSAT-2, PALSAR, AVNIR-2, SPOT-5, texture measurement, ratio of texture parameters, wavelets, fusion, saturation

  4. Global Temperature Measurement of Supercooled Water under Icing Conditions using Two-Color Luminescent Images and Multi-Band Filter

    NASA Astrophysics Data System (ADS)

    Tanaka, Mio; Morita, Katsuaki; Kimura, Shigeo; Sakaue, Hirotaka

    2012-11-01

    Icing occurs when a supercooled-water droplet collides with a surface. It can be seen in any cold area, and great attention is paid to aircraft icing. To understand the icing process on an aircraft, it is necessary to obtain temperature information for the supercooled water. A conventional technique, such as a thermocouple, is not suitable because the probe itself becomes a collision surface that accumulates ice. We introduce dual-luminescent imaging to capture the global temperature distribution of supercooled water under icing conditions. It consists of two-color luminescent probes and a multi-band filter. One of the probes is sensitive to temperature and the other is independent of temperature. The latter is used to cancel temperature-independent variations in the luminescence of the temperature-dependent image caused by uneven illumination and camera location. The multi-band filter selects only the luminescent peaks of the probes to enhance the temperature sensitivity of the imaging system. By applying the system, time-resolved temperature information of a supercooled-water droplet is captured.

  5. Multi sensor satellite imagers for commercial remote sensing

    NASA Astrophysics Data System (ADS)

    Cronje, T.; Burger, H.; Du Plessis, J.; Du Toit, J. F.; Marais, L.; Strumpfer, F.

    2005-10-01

    This paper will discuss and compare recent refractive and catadioptric imager designs developed and manufactured at SunSpace for Multi Sensor Satellite Imagers with Panchromatic, Multi-spectral, Area and Hyperspectral sensors on a single Focal Plane Array (FPA). These satellite optical systems were designed with applications such as monitoring food supplies, crop yield and disasters in mind. The aim of these imagers is to achieve medium to high resolution (2.5m to 15m) spatial sampling, wide swaths (up to 45km) and noise equivalent reflectance (NER) values of less than 0.5%. State-of-the-art FPA designs are discussed and address the choice of detectors to achieve these performances. Special attention is given to thermal robustness and compactness, the use of folding prisms to place multiple detectors in a large FPA, and a specially developed process to customize the spectral selection while minimizing mass, power and cost. A refractive imager with up to 6 spectral bands (6.25m GSD) and a catadioptric imager with panchromatic (2.7m GSD), multi-spectral (6 bands, 4.6m GSD) and hyperspectral (400nm to 2.35μm, 200 bands, 15m GSD) sensors on the same FPA will be discussed. Both of these imagers are also equipped with real-time video view-finding capabilities. The electronic units can be subdivided into the Front-End Electronics and Control Electronics with analogue and digital signal processing. A dedicated Analogue Front-End is used for Correlated Double Sampling (CDS), black level correction, variable gain, up to 12-bit digitizing and a high-speed LVDS data link to a mass memory unit.

  6. The fusion of satellite and UAV data: simulation of high spatial resolution band

    NASA Astrophysics Data System (ADS)

    Jenerowicz, Agnieszka; Siok, Katarzyna; Woroszkiewicz, Malgorzata; Orych, Agata

    2017-10-01

    Remote sensing techniques used in precision agriculture and farming that apply imagery data obtained with sensors mounted on UAV platforms have become more popular in the last few years due to the availability of low-cost UAV platforms and low-cost sensors. Data obtained from low altitudes with low-cost sensors can be characterised by high spatial and radiometric resolution but quite low spectral resolution; therefore, the application of imagery data obtained with such technology is quite limited and can be used only for basic land cover classification. To enrich the spectral resolution of imagery data acquired with low-cost sensors from low altitudes, the authors proposed the fusion of RGB data obtained with a UAV platform with multispectral satellite imagery. The fusion is based on the pansharpening process, which aims to integrate the spatial details of the high-resolution panchromatic image with the spectral information of lower-resolution multispectral or hyperspectral imagery to obtain multispectral or hyperspectral images with high spatial resolution. The key to pansharpening is to properly estimate the missing spatial details of multispectral images while preserving their spectral properties. In the research, the authors presented the fusion of RGB images (with high spatial resolution) obtained with sensors mounted on low-cost UAV platforms and multispectral satellite imagery acquired with satellite sensors, i.e., Landsat 8 OLI. To perform the fusion of UAV data with satellite imagery, the simulation of panchromatic bands from the RGB data, based on a linear combination of the spectral channels, was conducted. Next, for the simulated bands and multispectral satellite images, the Gram-Schmidt pansharpening method was applied. As a result of the fusion, the authors obtained several multispectral images with very high spatial resolution and then analysed the spatial and spectral accuracies of the processed images.
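
    The simulation of a panchromatic band as a linear combination of the UAV RGB channels can be sketched in a few lines. The weights below are illustrative placeholders, not the coefficients derived in the paper, and the subsequent Gram-Schmidt pan-sharpening step is assumed to be done in standard remote sensing software.

```python
import numpy as np

def simulate_pan_from_rgb(rgb, weights=(0.30, 0.45, 0.25)):
    """Simulate a high-spatial-resolution panchromatic band as a weighted
    linear combination of the UAV R, G and B channels (rgb has shape
    (rows, cols, 3)). The weights are placeholders for illustration."""
    w = np.asarray(weights, float)
    pan = np.tensordot(rgb.astype(float), w, axes=([2], [0]))
    return pan / w.sum()            # keep the simulated band in the input range

# The simulated band is then paired with the resampled Landsat 8 OLI bands
# as the 'panchromatic' input to a standard Gram-Schmidt pan-sharpening tool.
```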

  7. Camouflage target detection via hyperspectral imaging plus information divergence measurement

    NASA Astrophysics Data System (ADS)

    Chen, Yuheng; Chen, Xinhua; Zhou, Jiankang; Ji, Yiqun; Shen, Weimin

    2016-01-01

    Target detection is one of the most important applications in remote sensing. Nowadays, accurate discrimination of camouflaged targets often relies on spectral imaging techniques, owing to their ability to acquire high-resolution spectral/spatial information and the many available data processing methods. In this paper, hyperspectral imaging together with a spectral information divergence measure is used to solve the camouflage target detection problem. A self-developed visible-band hyperspectral imaging device is used to collect data cubes of an experimental scene, and spectral information divergences are then computed to discriminate camouflaged targets and anomalies. Full-band information divergences are measured to evaluate the target detection performance visually and quantitatively. Information divergence measurement proves to be a low-cost and effective tool for the target detection task and can be further extended to other target detection applications beyond spectral imaging.
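
    Spectral information divergence has a compact closed form: each spectrum is normalized to a probability vector and the divergence is the sum of the two relative entropies. The sketch below follows that standard definition; the cube layout and the use of numpy's apply_along_axis for a divergence map are assumptions for illustration.

```python
import numpy as np

def spectral_information_divergence(x, y, eps=1e-12):
    """SID between two spectra: normalize each to a probability vector p, q
    and return D(p||q) + D(q||p), the symmetric relative entropy."""
    p = np.asarray(x, float) + eps
    q = np.asarray(y, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Hypothetical use on a cube of shape (bands, rows, cols) against a known
# reference spectrum; small divergences indicate spectra similar to the target.
# sid_map = np.apply_along_axis(spectral_information_divergence, 0, cube, reference)
```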

  8. The vision guidance and image processing of AGV

    NASA Astrophysics Data System (ADS)

    Feng, Tongqing; Jiao, Bin

    2017-08-01

    First, the principle of AGV vision guidance is introduced, and the deviation and deflection angle are measured in the image coordinate system. The visual guidance image processing platform is then described. Because the AGV guidance image contains considerable noise, it is first smoothed with a statistical sorting filter. The guidance images acquired during AGV operation have different optimal segmentation thresholds, so two-dimensional maximum-entropy image segmentation is used to address this problem. The foreground in the target band is extracted by a contour-area calculation, and the centre line is obtained with a least-squares fitting algorithm. With the mapping between image and physical coordinates, the guidance information can be obtained.
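
    The centre-line fit and the derived guidance quantities can be illustrated with a short sketch. The snippet below assumes a binary foreground mask has already been produced by the segmentation step; the function name, the row-to-column line model, and the pixel-to-millimetre scale factor are assumptions made for illustration.

```python
import numpy as np

def guidance_from_mask(mask, pixel_to_mm=1.0):
    """Fit the guide line by least squares from a binary foreground mask and
    derive the lateral deviation (at the bottom image row) and the deflection
    angle of the line relative to the image axis."""
    rows, cols = np.nonzero(mask)
    # Model column = a*row + b so a near-vertical guide line is handled well.
    a, b = np.polyfit(rows.astype(float), cols.astype(float), deg=1)
    img_h, img_w = mask.shape
    line_col = a * (img_h - 1) + b                   # line position at bottom row
    deviation = (line_col - (img_w - 1) / 2.0) * pixel_to_mm
    deflection_deg = np.degrees(np.arctan(a))
    return deviation, deflection_deg
```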

  9. Portable hyperspectral fluorescence imaging system for detection of biofilms on stainless steel surfaces

    NASA Astrophysics Data System (ADS)

    Jun, Won; Lee, Kangjin; Millner, Patricia; Sharma, Manan; Chao, Kuanglin; Kim, Moon S.

    2008-04-01

    A rapid nondestructive technology is needed to detect bacterial contamination on the surfaces of food processing equipment to reduce public health risks. A portable hyperspectral fluorescence imaging system was used to evaluate potential detection of microbial biofilms on stainless steel typically used in the manufacture of food processing equipment. Stainless steel coupons were immersed in bacterial cultures, such as E. coli, Pseudomonas pertucinogena, Erwinia chrysanthemi, and Listeria innocua. Following a 1-week exposure, biofilm formations were assessed using fluorescence imaging. In addition, the effects on biofilm formation of both tryptic soy broth (TSB) and M9 medium with casamino acids (M9C) were examined. TSB-grown cells enhanced biofilm production compared with M9C-grown cells. Hyperspectral fluorescence images of the biofilm samples, in response to ultraviolet-A (320 to 400 nm) excitation, were acquired from approximately 416 to 700 nm. Visual evaluation of individual images at emission peak wavelengths in the blue revealed the greatest contrast between biofilms and stainless steel coupons. Two-band ratio images, compared with the single-band images, increased the contrast between the biofilm-forming areas and stainless steel coupon surfaces. The 444/588 nm ratio images exhibited the greatest contrast between the biofilm formations and stainless steel coupon surfaces.
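
    A two-band ratio image of the kind used here (444 nm over 588 nm) is straightforward to compute from a hyperspectral cube. The sketch below picks the nearest available bands to the requested wavelengths; the cube layout and wavelength vector are assumptions for illustration.

```python
import numpy as np

def band_ratio_image(cube, wavelengths, num_nm=444.0, den_nm=588.0, eps=1e-6):
    """Ratio of the band nearest num_nm to the band nearest den_nm for a
    fluorescence cube of shape (bands, rows, cols); wavelengths holds the
    band-centre wavelengths in nanometres."""
    wl = np.asarray(wavelengths, float)
    i_num = int(np.argmin(np.abs(wl - num_nm)))
    i_den = int(np.argmin(np.abs(wl - den_nm)))
    return cube[i_num].astype(float) / (cube[i_den].astype(float) + eps)
```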

  10. Mapping biomass for a northern forest ecosystem using multi-frequency SAR data

    NASA Technical Reports Server (NTRS)

    Ranson, K. J.; Sun, Guoqing

    1992-01-01

    Image processing methods for mapping standing biomass for a forest in Maine, using NASA/JPL airborne synthetic aperture radar (AIRSAR) polarimeter data, are presented. By examining the dependence of backscattering on standing biomass, it is determined that the ratio of HV backscattering from a longer wavelength (P- or L-band) to a shorter wavelength (C) is a good combination for mapping total biomass. This ratio enhances the correlation of the image signature to the standing biomass and compensates for a major part of the variations in backscattering attributed to radar incidence angle. The image processing methods used include image calibration, ratioing, filtering, and segmentation. The image segmentation algorithm uses both means and variances of the image, and it is combined with the image filtering process. Preliminary assessment of the resultant biomass maps suggests that this is a promising method.

  11. Improved target detection by IR dual-band image fusion

    NASA Astrophysics Data System (ADS)

    Adomeit, U.; Ebert, R.

    2009-09-01

    Dual-band thermal imagers acquire information simultaneously in both the 8-12 μm (long-wave infrared, LWIR) and the 3-5 μm (mid-wave infrared, MWIR) spectral range. Compared to single-band thermal imagers they are expected to have several advantages in military applications. These advantages include the opportunity to use the best band for given atmospheric conditions (e.g., cold climate: LWIR; hot and humid climate: MWIR), the potential to better detect camouflaged targets and an improved discrimination between targets and decoys. Most of these advantages have not yet been verified and/or quantified. It is expected that image fusion allows better exploitation of the information content available with dual-band imagers, especially with respect to detection of targets. We have developed a method for dual-band image fusion based on the apparent temperature differences in the two bands. This method showed promising results in laboratory tests. In order to evaluate its performance under operational conditions we conducted a field trial in an area with high thermal clutter. In such areas, targets are hard to detect in single-band images because they vanish in the clutter structure. The image data collected in this field trial were used for a perception experiment. This perception experiment showed an enhanced target detection range and a reduced false alarm rate for the fused images compared to the single-band images.

  12. Acousto-optic RF signal acquisition system

    NASA Astrophysics Data System (ADS)

    Bloxham, Laurence H.

    1990-09-01

    This paper describes the architecture and performance of a prototype Acousto-Optic RF Signal Acquisition System designed to intercept, automatically identify, and track communication signals in the VHF band. The system covers 28.0 to 92.0 MHz with five manually selectable, dual-conversion, 12.8-MHz-bandwidth front ends. An acousto-optic spectrum analyzer (AOSA) implemented using a tellurium dioxide (TeO2) Bragg cell is used to channelize the 12.8 MHz pass band into 512 25-kHz channels. Polarization switching is used to suppress optical noise. Excellent isolation and dynamic range are achieved by using a linear array of 512 custom 40/50 micron fiber optic cables to collect the light at the focal plane of the AOSA and route the light to individual photodetectors. The photodetectors are operated in the photovoltaic mode to compress the greater than 60 dB input optical dynamic range into an easily processed electrical signal. The 512 signals are multiplexed and processed as a line in a video image by a customized digital image processing system. The image processor simultaneously analyzes the channelized signal data and produces a classical waterfall display.

  13. On the bandwidth of the plenoptic function.

    PubMed

    Do, Minh N; Marchand-Maillet, Davy; Vetterli, Martin

    2012-02-01

    The plenoptic function (POF) provides a powerful conceptual tool for describing a number of problems in image/video processing, vision, and graphics. For example, image-based rendering is shown as sampling and interpolation of the POF. In such applications, it is important to characterize the bandwidth of the POF. We study a simple but representative model of the scene where band-limited signals (e.g., texture images) are "painted" on smooth surfaces (e.g., of objects or walls). We show that, in general, the POF is not band limited unless the surfaces are flat. We then derive simple rules to estimate the essential bandwidth of the POF for this model. Our analysis reveals that, in addition to the maximum and minimum depths and the maximum frequency of painted signals, the bandwidth of the POF also depends on the maximum surface slope. With a unifying formalism based on multidimensional signal processing, we can verify several key results in POF processing, such as induced filtering in space and depth-corrected interpolation, and quantify the necessary sampling rates. © 2011 IEEE

  14. Initial Lithologic Mapping Results Using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Data

    NASA Astrophysics Data System (ADS)

    Rowan, L. C.; Mars, J. C.

    2001-05-01

    Initial analysis of ASTER data of selected areas in the Western United States shows that many important lithologic units can be mapped on the basis of spectral reflectance and spectral emittance. ASTER's most important attributes are 9 bands which record reflected-solar energy with 15 meter- and 30 meter-resolution; 5 bands of emitted energy at 90 meter- resolution; 15 meter-resolution stereoscopic images; and repetitive coverage. Particularly useful 'on-demand' ASTER data products include surface reflectance and surface emissivity images, and digital elevation models (DEM). In the solar-reflected wavelength region (0.4 to 2.5 micrometers), clays, carbonates, hydrous sulphate, and iron-oxide minerals exhibit diagnostic absorption features, whereas the emitted wavelength region (8 to 14 micrometers) provides critical information about anhydrous rock-forming minerals, such as quartz and feldspars, which lack diagnostic absorption features in the solar-reflected region. The Mountain Pass, Calf., Goldfield, Nev., and Virginia Range, Nev. study areas comprise a wide range of lithologic types for evaluating ASTER data. Calibration of the 3 bands recorded in the 0.52 to 0.86 micrometer wavelength region and the 6 bands in the 1.60 to 2.43 micrometer region was improved beyond the 'on-demand' surface reflectance standard product by using in situ spectral reflectance measurements of homogeneous field sites. Validation of this calibration was based on comparisons with spectra from calibrated AVIRIS data, and with additional field measurements. Lithologic mapping based on ASTER bands 1-9 was conducted by using endmember spectra from the image as reference spectra in matched-filter processing. The results were thresholded to display the pixels with the best match for each endmember. The results in these study areas show that Muscovite Group minerals (muscovite, illite, kaolinite) can be mapped over broad reasonably well exposed areas, and that the most intense absorption features occur in hydrothermally altered rocks. In the Mountain Pass area a few exposures containing Fe-muscovite were distinguished from the more extensive Al-mucovite-bearing rocks and soils. Advanced-argillic alteration minerals (alunite, dickite) were detected in the Goldfield mining district and in the Virginia Range. Carbonate Group minerals (calcite, dolomite) were mapped in extensive exposures in the thrust belt of the Mountain Pass area, and well exposed dolomite was distinguished from limestone in several areas. Although skarn deposits consist mainly of calcite and dolomite, their spectral shape in ASTER bands 1-9 is significantly different than typical limestone and dolomite spectra because of the presence of epidote, garnet and chrysotile in the skarn deposits. Mg-OH-bearing minerals (chlorite, biotite, hornblende) proved to be more difficult to map, although generally they were not confused with minerals of the Carbonate Group. Ferric-iron Group minerals were mapped by using a band2/band1 ratio image. Analysis of the surface emissivity standard image products relied on identification of endmember-image spectra by using the pixel-purity index procedure in the ENVI software package, and matched-filter processing. Silica-rich rocks and silica-poor rocks were recognized readily in decorrelation-stretch images, as well as matched-filter endmember images, and 2 intermediate categories were distinguished in most areas.
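
    Matched-filter processing against an image-derived endmember spectrum has a standard mean-centred formulation, sketched below. This is a generic implementation for illustration; the matched filter actually used in the study (via the ENVI software) may differ in detail, and the regularisation term is an assumption added for numerical stability.

```python
import numpy as np

def matched_filter_scores(pixels, target):
    """Matched-filter abundance scores for a target (endmember) spectrum:
    score(x) = (x - m)^T C^-1 (t - m) / (t - m)^T C^-1 (t - m),
    where m and C are the scene mean and covariance."""
    X = pixels.astype(float)                         # (n_pixels, n_bands)
    m = X.mean(axis=0)
    Xc = X - m
    C = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    d = np.linalg.solve(C, np.asarray(target, float) - m)
    return Xc @ d / float((np.asarray(target, float) - m) @ d)

# pixels: a reflectance cube reshaped to (rows*cols, 9) for ASTER bands 1-9.
# Scores near 1 indicate pixels matching the endmember; thresholding the
# scores yields the mineral-group maps described above.
```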

  15. Photographic techniques for enhancing ERTS MSS data for geologic information

    NASA Technical Reports Server (NTRS)

    Yost, E.; Geluso, W.; Anderson, R.

    1974-01-01

    Satellite multispectral black-and-white photographic negatives of Luna County, New Mexico, obtained by ERTS on 15 August and 2 September 1973, were precisely reprocessed into positive images and analyzed in an additive color viewer. In addition, an isoluminous (uniform brightness) color rendition of the image was constructed. The isoluminous technique emphasizes subtle differences between multispectral bands by greatly enhancing the color of the superimposed composite of all bands and eliminating the effects of brightness caused by sloping terrain. Basaltic lava flows were more accurately displayed in the precision processed multispectral additive color ERTS renditions than on existing state geological maps. Malpais lava flows and small basaltic occurrences not appearing on existing geological maps were identified in ERTS multispectral color images.

  16. Radar data processing and analysis

    NASA Technical Reports Server (NTRS)

    Ausherman, D.; Larson, R.; Liskow, C.

    1976-01-01

    Digitized four-channel radar images corresponding to particular areas from the Phoenix and Huntington test sites were generated in conjunction with prior experiments performed to collect X- and L-band synthetic aperture radar imagery of these two areas. The methods for generating this imagery are documented. A secondary objective was the investigation of digital processing techniques for extraction of information from the multiband radar image data. Following the digitization, the remaining resources permitted a preliminary machine analysis to be performed on portions of the radar image data. The results, although necessarily limited, are reported.

  17. Feature Transformation Detection Method with Best Spectral Band Selection Process for Hyper-spectral Imaging

    NASA Astrophysics Data System (ADS)

    Chen, Hai-Wen; McGurr, Mike; Brickhouse, Mark

    2015-11-01

    We present a newly developed feature transformation (FT) detection method for hyper-spectral imagery (HSI) sensors. In essence, the FT method, by transforming the original features (spectral bands) to a different feature domain, may considerably increase the statistical separation between the target and background probability density functions, and thus may significantly improve the target detection and identification performance, as evidenced by the test results in this paper. We show that by differentiating the original spectra, one can completely separate targets from the background using a single spectral band, leading to perfect detection results. In addition, we have proposed an automated best spectral band selection process with a double-threshold scheme that can rank the available spectral bands from the best to the worst for target detection. Finally, we have also proposed an automated cross-spectrum fusion process to further improve the detection performance in the lower spectral range (<1000 nm) by selecting the best spectral band pair with multivariate analysis. Promising detection performance was achieved using a small background material signature library for proof of concept, and was then further evaluated and verified using a real background HSI scene collected by a HYDICE sensor.
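
    The flavour of the feature transformation and band-ranking steps can be conveyed with a brief sketch. The first-difference derivative below is one simple instance of transforming the original bands into a new feature domain, and the ranking score is a plain two-class separation measure; the paper's double-threshold scheme is more elaborate, so both functions are illustrative stand-ins.

```python
import numpy as np

def first_difference_spectra(cube):
    """Derivative-like transform along the spectral axis of a cube with
    shape (bands, rows, cols): one example of mapping the original bands
    into a transformed feature domain."""
    return np.diff(cube.astype(float), axis=0)

def rank_bands_by_separation(target_pix, background_pix):
    """Rank transformed bands by a simple separation score (difference of
    class means over the pooled standard deviation). target_pix and
    background_pix have shape (n_samples, n_bands)."""
    mt, mb = target_pix.mean(axis=0), background_pix.mean(axis=0)
    s = np.sqrt(0.5 * (target_pix.var(axis=0) + background_pix.var(axis=0))) + 1e-12
    score = np.abs(mt - mb) / s
    return np.argsort(score)[::-1], score
```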

  18. Data Quality Evaluation and Application Potential Analysis of TIANGONG-2 Wide-Band Imaging Spectrometer

    NASA Astrophysics Data System (ADS)

    Qin, B.; Li, L.; Li, S.

    2018-04-01

    Tiangong-2, launched on September 15, 2016, is China's first space laboratory. The Wide-band Imaging Spectrometer is a medium-resolution multispectral imager on Tiangong-2. In this paper, the authors introduce the specifications and parameters of the Wide-band Imaging Spectrometer, objectively evaluate its data quality in terms of radiometric quality, image sharpness, and information content, and compare the evaluation results with those of Landsat-8. Although the data quality of the Wide-band Imaging Spectrometer falls somewhat short of Landsat-8 OLI data in terms of signal-to-noise ratio, clarity, and entropy, it offers more bands, narrower bandwidths, and a wider swath than OLI, which make it a useful remote sensing data source for the classification and identification of large- and medium-scale ground objects. In the future, Wide-band Imaging Spectrometer data will be widely applied in land cover classification, ecological environment assessment, marine and coastal zone monitoring, crop identification and classification, and other related areas.

  19. Image-plane processing of visual information

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.
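
    As a loose illustration of the band-pass, edge-enhancing response described above, the sketch below applies a difference-of-Gaussians center-surround filter to a noisy step edge. The DoG filter is only a stand-in for the information-theoretically optimized response in the paper, and the sigma values are arbitrary.

```python
# Hedged sketch: a difference-of-Gaussians band-pass filter standing in for the
# paper's optimized edge-enhancement response (center-surround, vision-like).
import numpy as np
from scipy.ndimage import gaussian_filter

def bandpass_enhance(image, sigma_center=1.0, sigma_surround=3.0):
    """Center-surround (band-pass) filtering of a 2-D image."""
    return gaussian_filter(image, sigma_center) - gaussian_filter(image, sigma_surround)

rng = np.random.default_rng(2)
img = np.zeros((64, 64))
img[:, 32:] = 1.0                          # a step edge
img += rng.normal(0, 0.02, img.shape)
edges = bandpass_enhance(img)
print("input range:", round(float(np.ptp(img)), 2),
      "filtered range:", round(float(np.ptp(edges)), 2))
```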

  20. Grid-Independent Compressive Imaging and Fourier Phase Retrieval

    ERIC Educational Resources Information Center

    Liao, Wenjing

    2013-01-01

    This dissertation is composed of two parts. In the first part techniques of band exclusion(BE) and local optimization(LO) are proposed to solve linear continuum inverse problems independently of the grid spacing. The second part is devoted to the Fourier phase retrieval problem. Many situations in optics, medical imaging and signal processing call…

  1. Language-motor interference reflected in MEG beta oscillations.

    PubMed

    Klepp, Anne; Niccolai, Valentina; Buccino, Giovanni; Schnitzler, Alfons; Biermann-Ruben, Katja

    2015-04-01

    The involvement of the brain's motor system in action-related language processing can lead to overt interference with simultaneous action execution. The aim of the current study was to find evidence for this behavioural interference effect and to investigate its neurophysiological correlates using oscillatory MEG analysis. Subjects performed a semantic decision task on single action verbs, describing actions executed with the hands or the feet, and abstract verbs. Right hand button press responses were given for concrete verbs only. Therefore, longer response latencies for hand compared to foot verbs should reflect interference. We found interference effects to depend on verb imageability: overall response latencies for hand verbs did not differ significantly from foot verbs. However, imageability interacted with effector: while response latencies to hand and foot verbs with low imageability were equally fast, those for highly imageable hand verbs were longer than for highly imageable foot verbs. The difference is reflected in motor-related MEG beta band power suppression, which was weaker for highly imageable hand verbs compared with highly imageable foot verbs. This provides a putative neuronal mechanism for language-motor interference where the involvement of cortical hand motor areas in hand verb processing interacts with the typical beta suppression seen before movements. We found that the facilitatory effect of higher imageability on action verb processing time is perturbed when verb and motor response relate to the same body part. Importantly, this effect is accompanied by neurophysiological effects in beta band oscillations. The attenuated power suppression around the time of movement, reflecting decreased cortical excitability, seems to result from motor simulation during action-related language processing. This is in line with embodied cognition theories. Copyright © 2015. Published by Elsevier Inc.

  2. Hi-fidelity multi-scale local processing for visually optimized far-infrared Herschel images

    NASA Astrophysics Data System (ADS)

    Li Causi, G.; Schisano, E.; Liu, S. J.; Molinari, S.; Di Giorgio, A.

    2016-07-01

    In the context of the "Hi-Gal" multi-band full-plane mapping program for the Galactic Plane, as imaged by the Herschel far-infrared satellite, we have developed a semi-automatic tool which produces high definition, high quality color maps optimized for visual perception of extended features, like bubbles and filaments, against the high background variations. We project the map tiles of three selected bands onto a 3-channel panorama, which spans the central 130 degrees of galactic longitude times 2.8 degrees of galactic latitude, at the pixel scale of 3.2", in Cartesian galactic coordinates. Then we process this image piecewise, applying a custom multi-scale local stretching algorithm, reinforced by a local multi-scale color balance. Finally, we apply an edge-preserving contrast enhancement to perform artifact-free detail sharpening. Thanks to this tool, we have produced a stunning giga-pixel color image of the far-infrared Galactic Plane that we made publicly available with the recent release of the Hi-Gal mosaics and compact source catalog.

  3. Space Radar Image of Long Valley, California - 3-D view

    NASA Image and Video Library

    1999-05-01

    This is a three-dimensional perspective view of Long Valley, California, by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This view was constructed by overlaying a color composite SIR-C image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process in which radar data acquired on different passes of the space shuttle are compared to obtain elevation information. The data were acquired on April 13, 1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR radar instrument. The color composite radar image was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is off the image to the left. http://photojournal.jpl.nasa.gov/catalog/PIA01757

  4. Use of discrete chromatic space to tune the image tone in a color image mosaic

    NASA Astrophysics Data System (ADS)

    Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Zheng, Li

    2003-09-01

    Color image processing is an important problem. The prevailing approach is to transform the RGB color space into another color space, such as HIS (hue, intensity, saturation), YIQ, or LUV. However, processing color airborne images in a single color space may not be valid, because the electromagnetic wave is physically altered in every wave band, whereas the color image is perceived according to psychological vision. It is therefore necessary to propose an approach consistent with both the physical transformation and psychological perception. An analysis of how to use the relevant color spaces to process color airborne photographs is discussed, and an application to tuning the image tone in a color airborne image mosaic is introduced. As a practical example, a complete approach to performing mosaics of color airborne images that takes full advantage of the relevant color spaces is presented in the application.

  5. Space Radar Image of Kliuchevskoi Volcano, Russia

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is an image of the Kliuchevskoi volcano, Kamchatka, Russia, which began to erupt on September 30, 1994. Kliuchevskoi is the bright white peak surrounded by red slopes in the lower left portion of the image. The image was acquired by the Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar aboard the space shuttle Endeavour on its 25th orbit on October 1, 1994. The image shows an area approximately 30 kilometers by 60 kilometers (18.5 miles by 37 miles) that is centered at 56.18 degrees north latitude and 160.78 degrees east longitude. North is toward the top of the image. The Kamchatka volcanoes are among the most active volcanoes in the world. The volcanic zone sits above a tectonic plate boundary, where the Pacific plate is sinking beneath the northeast edge of the Eurasian plate. The Endeavour crew obtained dramatic video and photographic images of this region during the eruption, which will assist scientists in analyzing the dynamics of the current activity. The colors in this image were obtained using the following radar channels: red represents the L-band (horizontally transmitted and received); green represents the L-band (horizontally transmitted and vertically received); blue represents the C-band (horizontally transmitted and vertically received). The Kamchatka River runs from left to right across the image. An older, dormant volcanic region appears in green on the north side of the river. The current eruption included massive ejections of gas, vapor and ash, which reached altitudes of 20,000 meters (65,000 feet). New lava flows are visible on the flanks of Kliuchevskoi, appearing yellow/green in the image, superimposed on the red surfaces in the lower center. Melting snow triggered mudflows on the north flank of the volcano, which may threaten agricultural zones and other settlements in the valley to the north. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrte.v. (DLR), the major partner in science, operations and data processing of X-SAR.

  6. Space Radar Image of West Texas - SAR scan

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This radar image of the Midland/Odessa region of West Texas, demonstrates an experimental technique, called ScanSAR, that allows scientists to rapidly image large areas of the Earth's surface. The large image covers an area 245 kilometers by 225 kilometers (152 miles by 139 miles). It was obtained by the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) flying aboard the space shuttle Endeavour on October 5, 1994. The smaller inset image is a standard SIR-C image showing a portion of the same area, 100 kilometers by 57 kilometers (62 miles by 35 miles) and was taken during the first flight of SIR-C on April 14, 1994. The bright spots on the right side of the image are the cities of Odessa (left) and Midland (right), Texas. The Pecos River runs from the top center to the bottom center of the image. Along the left side of the image are, from top to bottom, parts of the Guadalupe, Davis and Santiago Mountains. North is toward the upper right. Unlike conventional radar imaging, in which a radar continuously illuminates a single ground swath as the space shuttle passes over the terrain, a Scansar radar illuminates several adjacent ground swaths almost simultaneously, by 'scanning' the radar beam across a large area in a rapid sequence. The adjacent swaths, typically about 50 km (31 miles) wide, are then merged during ground processing to produce a single large scene. Illumination for this L-band scene is from the top of the image. The beams were scanned from the top of the scene to the bottom, as the shuttle flew from left to right. This scene was acquired in about 30 seconds. A normal SIR-C image is acquired in about 13 seconds. The ScanSAR mode will likely be used on future radar sensors to construct regional and possibly global radar images and topographic maps. The ScanSAR processor is being designed for 1996 implementation at NASA's Alaska SAR Facility, located at the University of Alaska Fairbanks, and will produce digital images from the forthcoming Canadian RADARSAT satellite. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.v.(DLR), the major partner in science, operations, and data processing of X-SAR.

  7. On-Orbit Line Spread Function Estimation of the SNPP VIIRS Imaging System from Lake Pontchartrain Causeway Bridge Images

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Wolfe, Robert E.; Lin, Guoqing

    2017-01-01

    The visible infrared imaging radiometer suite (VIIRS) instrument was launched 28 October 2011 onboard the Suomi National Polar-orbiting Partnership (SNPP) satellite. The VIIRS instrument is a whiskbroom system with 22 spectral and thermal bands split between 16 moderate resolution bands (M-bands), five imagery resolution bands (I-bands) and a day-night band. In this study we estimate the along-scan line spread function (LSF) of the I-bands and M-bands based on measurements performed on images of the Lake Pontchartrain Causeway Bridge. In doing so we develop a model for the LSF that closely matches the prelaunch laboratory measurements. We utilize VIIRS images co-geolocated with a Landsat TM image to precisely locate the bridge linear feature in the VIIRS images as a best-fit straight line. We then use non-linear optimization to fit the VIIRS image measurements in the vicinity of the bridge to the developed model equation. From the resulting parameterization of the model equation we derive the full-width at half-maximum (FWHM) as an approximation of the sensor field of view (FOV) for all bands, and compare these on-orbit measured values with prelaunch laboratory results.
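
    The sketch below shows only the generic fitting step: estimating a line spread function's FWHM by least-squares fitting a model to along-scan samples near a linear feature. A Gaussian model, the synthetic samples, and the initial guesses are illustrative assumptions; the paper develops its own LSF model matched to the prelaunch measurements.

```python
# Hedged sketch: fit a simple LSF model to along-scan samples and derive the FWHM.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, x0, sigma, offset):
    """Stand-in LSF model (the actual VIIRS LSF model differs)."""
    return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2) + offset

# Synthetic along-scan samples of a bright bridge against darker water.
x = np.arange(-10, 11, dtype=float)
true = gaussian(x, amp=1.0, x0=0.3, sigma=1.8, offset=0.1)
rng = np.random.default_rng(3)
samples = true + rng.normal(0, 0.01, x.size)

popt, _ = curve_fit(gaussian, x, samples, p0=[1.0, 0.0, 1.0, 0.0])
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])   # Gaussian FWHM from sigma
print("estimated FWHM (pixels):", round(float(fwhm), 2))
```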

  8. Bands selection and classification of hyperspectral images based on hybrid kernels SVM by evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Yan-Yan; Li, Dong-Sheng

    2016-01-01

    Hyperspectral images (HSI) consist of many closely spaced bands that carry most of the object information. However, owing to their high dimensionality and large data volume, it is hard to obtain satisfactory classification performance. To reduce HSI data dimensionality in preparation for high classification accuracy, it is proposed to combine a band selection method based on artificial immune systems (AIS) with a hybrid-kernel support vector machine (SVM-HK) algorithm. After comparing different kernels for hyperspectral analysis, the approach mixes the radial basis function kernel (RBF-K) with the sigmoid kernel (Sig-K) and applies the optimized hybrid kernels in SVM classifiers. The SVM-HK algorithm is then used to guide the band selection of an improved version of AIS. The AIS is composed of clonal selection and elite antibody mutation, with an evaluation process based on the optimum index factor (OIF). Experimental classification was performed on a San Diego Naval Base scene acquired by AVIRIS; the hyperspectral dataset shows that the method efficiently removes band redundancy while outperforming the traditional SVM classifier.
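
    A minimal sketch of a hybrid-kernel SVM in the spirit of this record is given below: a weighted mix of an RBF kernel and a sigmoid kernel is passed to scikit-learn's SVC as a custom kernel. The mixing weight, gamma values, and synthetic "pixel" data are placeholders, not the parameters or data used in the paper, and the AIS band-selection stage is not shown.

```python
# Hedged sketch: SVM with a hybrid RBF + sigmoid kernel (illustrative parameters).
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, sigmoid_kernel
from sklearn.model_selection import train_test_split

def hybrid_kernel(X, Y, w=0.7, gamma_rbf=0.5, gamma_sig=0.01, coef0=1.0):
    """Convex combination of RBF and sigmoid Gram matrices."""
    return (w * rbf_kernel(X, Y, gamma=gamma_rbf)
            + (1 - w) * sigmoid_kernel(X, Y, gamma=gamma_sig, coef0=coef0))

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 20))                 # 300 synthetic pixels, 20 selected bands
y = (X[:, :5].sum(axis=1) > 0).astype(int)     # synthetic two-class labels
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel=hybrid_kernel).fit(Xtr, ytr)
print("held-out accuracy:", round(clf.score(Xte, yte), 3))
```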

  9. Daylight coloring for monochrome infrared imagery

    NASA Astrophysics Data System (ADS)

    Gabura, James

    2015-05-01

    The effectiveness of infrared imagery in poor visibility situations is well established and the range of applications is expanding as we enter a new era of inexpensive thermal imagers for mobile phones. However, there is a problem: the counterintuitive reflectance characteristics of various common scene elements can cause slowed reaction times and impaired situational awareness, consequences that can be especially detrimental in emergency situations. While multiband infrared sensors can be used, they are inherently more costly. Here we propose a technique for adding a daylight color appearance to single-band infrared images, using the normally overlooked property of local image texture. The simple method described here is illustrated with colorized images from the visible red and long wave infrared bands. Our colorizing process not only imparts a natural daylight appearance to infrared images but also enhances the contrast and visibility of otherwise obscure detail. We anticipate that this colorizing method will lead to a better user experience, faster reaction times and improved situational awareness for a growing community of infrared camera users. A natural extension of our process could expand upon its texture discerning feature by adding specialized filters for discriminating specific targets.

  10. Sub-band denoising and spline curve fitting method for hemodynamic measurement in perfusion MRI

    NASA Astrophysics Data System (ADS)

    Lin, Hong-Dun; Huang, Hsiao-Ling; Hsu, Yuan-Yu; Chen, Chi-Chen; Chen, Ing-Yi; Wu, Liang-Chi; Liu, Ren-Shyan; Lin, Kang-Ping

    2003-05-01

    In clinical research, non-invasive MR perfusion imaging is capable of investigating brain perfusion phenomena via various hemodynamic measurements, such as cerebral blood volume (CBV), cerebral blood flow (CBF), and mean transit time (MTT). These hemodynamic parameters are useful in diagnosing brain disorders such as stroke, infarction and peri-infarct ischemia by further semi-quantitative analysis. However, the accuracy of quantitative analysis is usually affected by poor signal-to-noise ratio image quality. In this paper, we propose a hemodynamic measurement method based upon sub-band denoising and spline curve fitting processes to improve image quality for better hemodynamic quantitative analysis results. Ten sets of perfusion MRI data and corresponding PET images were used to validate the performance. For quantitative comparison, we evaluate the gray/white matter CBF ratio. As a result, the semi-quantitative mean gray-to-white-matter CBF ratio is 2.10 +/- 0.34. The ratio evaluated from perfusion MRI is comparable to that from the PET technique, with less than 1% difference on average. Furthermore, the method features excellent noise reduction and boundary preservation in image processing, as well as a short hemodynamic measurement time.
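
    The sketch below illustrates the two processing ideas named in this record in generic form: wavelet sub-band denoising of a noisy signal followed by smoothing-spline fitting. PyWavelets and SciPy are used as stand-ins; the paper's actual sub-band scheme, threshold choices, and hemodynamic model are not reproduced, and the synthetic bolus curve is purely illustrative.

```python
# Generic sketch: wavelet sub-band denoising + spline fitting of a noisy curve.
import numpy as np
import pywt
from scipy.interpolate import UnivariateSpline

def subband_denoise(signal, wavelet="db4", level=3, thresh=0.05):
    """Soft-threshold the detail sub-bands and reconstruct the signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Synthetic concentration-time curve with noise (gamma-variate-like bolus shape).
t = np.linspace(0, 60, 120)
curve = 5.0 * (t / 12.0) ** 2 * np.exp(-t / 6.0)
rng = np.random.default_rng(5)
noisy = curve + rng.normal(0, 0.3, t.size)

denoised = subband_denoise(noisy)
spline = UnivariateSpline(t, denoised, s=1.0)     # smoothing-spline fit
print("peak of fitted curve:", round(float(spline(t).max()), 2))
```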

  11. Accurate band-to-band registration of AOTF imaging spectrometer using motion detection technology

    NASA Astrophysics Data System (ADS)

    Zhou, Pengwei; Zhao, Huijie; Jin, Shangzhong; Li, Ningchuan

    2016-05-01

    This paper concerns the problem of platform-vibration-induced band-to-band misregistration with an acousto-optic imaging spectrometer in spaceborne applications. Registering images of different bands formed at different times or positions is difficult, especially for hyperspectral images from an acousto-optic tunable filter (AOTF) imaging spectrometer. In this study, a motion detection method is presented using the polychromatic undiffracted beam of the AOTF. The factors affecting motion detection accuracy are analyzed theoretically, and calculations show that optical distortion is an easily overlooked factor in achieving accurate band-to-band registration. Hence, a reflective dual-path optical system has been proposed for the first time, with reduction of distortion and chromatic aberration, indicating the potential of higher registration accuracy. Consequently, a spectral restoration experiment using an additional motion detection channel is presented for the first time, which demonstrates the accurate spectral image registration capability of this technique.

  12. Deformation band clusters on Mars and implications for subsurface fluid flow

    USGS Publications Warehouse

    Okubo, C.H.; Schultz, R.A.; Chan, M.A.; Komatsu, G.

    2009-01-01

    High-resolution imagery reveals unprecedented lines of evidence for the presence of deformation band clusters in layered sedimentary deposits in the equatorial region of Mars. Deformation bands are a class of geologic structural discontinuity that is a precursor to faults in clastic rocks and soils. Clusters of deformation bands, consisting of many hundreds of individual subparallel bands, can act as important structural controls on subsurface fluid flow in terrestrial reservoirs, and evidence of diagenetic processes is often preserved along them. Deformation band clusters are identified on Mars based on characteristic meter-scale architectures and geologic context as observed in data from the High-Resolution Imaging Science Experiment (HiRISE) camera. The identification of deformation band clusters on Mars is a key to investigating the migration of fluids between surface and subsurface reservoirs in the planet's vast sedimentary deposits. Similar to terrestrial examples, evidence of diagenesis in the form of light- and dark-toned discoloration and wall-rock induration is recorded along many of the deformation band clusters on Mars. Therefore, these structures are important sites for future exploration and investigations into the geologic history of water and water-related processes on Mars. © 2008 Geological Society of America.

  13. LUGOL'S IODINE CHROMOENDOSCOPY VERSUS NARROW BAND IMAGE ENHANCED ENDOSCOPY FOR THE DETECTION OF ESOPHAGEAL CANCER IN PATIENTS WITH STENOSIS SECONDARY TO CAUSTIC/CORROSIVE AGENT INGESTION.

    PubMed

    Pennachi, Caterina Maria Pia Simoni; Moura, Diogo Turiani Hourneaux de; Amorim, Renato Bastos Pimenta; Guedes, Hugo Gonçalo; Kumbhari, Vivek; Moura, Eduardo Guimarães Hourneaux de

    2017-01-01

    The diagnosis of corrosion cancer should be suspected in patients with corrosive ingestion if, after a latent period of negligible symptoms, there is development of dysphagia or poor response to dilatation, or if respiratory symptoms develop in an otherwise stable patient with esophageal stenosis. Narrow Band Imaging detects superficial squamous cell carcinoma more frequently than white-light imaging, and has significantly higher sensitivity and accuracy compared with white-light. To determine the clinical applicability of Narrow Band Imaging versus Lugol's solution chromoendoscopy for detection of early esophageal cancer in patients with caustic/corrosive agent stenosis. Thirty-eight patients, aged between 28 and 84 years, were enrolled and examined by both Narrow Band Imaging and Lugol's solution chromoendoscopy. A 4.9 mm diameter endoscope was used, facilitating examination of a stenotic area without dilation. Narrow Band Imaging was performed and any lesion detected was marked for later biopsy. Then, Lugol's solution chromoendoscopy was performed and biopsies were taken at suspicious areas. Patients who had abnormal findings at the routine, Narrow Band Imaging, or Lugol's solution chromoscopy exam had their stenotic ring biopsied. We detected nine suspicious lesions with Narrow Band Imaging and 14 with Lugol's solution chromoendoscopy. The sensitivity and specificity of Narrow Band Imaging were 100% and 80.6%, and those of Lugol's chromoscopy were 100% and 66.67%, respectively. Five (13%) suspicious lesions were detected with both Narrow Band Imaging and Lugol's chromoscopy; two (40%) of these lesions were confirmed as carcinoma on histopathological examination. Narrow Band Imaging is a practical option for detecting and evaluating cancer in patients with caustic/corrosive stenosis compared with Lugol's solution chromoscopy.

  14. Application of narrow-band television to industrial and commercial communications

    NASA Technical Reports Server (NTRS)

    Embrey, B. C., Jr.; Southworth, G. R.

    1974-01-01

    The development of narrow-band systems for use in space systems is presented. Applications of the technology to future spacecraft requirements are discussed along with narrow-band television's influence in stimulating development within the industry. The transferral of the technology into industrial and commercial communications is described. Major areas included are: (1) medicine; (2) education; (3) remote sensing for traffic control; and (4) weather observation. Applications in data processing, image enhancement, and information retrieval are provided by the combination of the TV camera and the computer.

  15. Mapping hydrothermally altered rocks at Cuprite, Nevada, using the advanced spaceborne thermal emission and reflection radiometer (Aster), a new satellite-imaging system

    USGS Publications Warehouse

    Rowan, L.C.; Hook, S.J.; Abrams, M.J.; Mars, J.C.

    2003-01-01

    The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is a 14-band multispectral instrument on board the Earth Observing System (EOS), TERRA. The three bands between 0.52 and 0.86 μm and the six bands from 1.60 to 2.43 μm, which have 15- and 30-m spatial resolution, respectively, were selected primarily for making remote mineralogical determinations. The Cuprite, Nevada, mining district comprises two hydrothermal alteration centers where Tertiary volcanic rocks have been hydrothermally altered mainly to bleached silicified rocks and opalized rocks, with a marginal zone of limonitic argillized rocks. Country rocks are mainly Cambrian phyllitic siltstone and limestone. Evaluation of an ASTER image of the Cuprite district shows that spectral reflectance differences in the nine bands in the 0.52 to 2.43 μm region provide a basis for identifying and mapping mineralogical components which characterize the main hydrothermal alteration zones: opal is the spectrally dominant mineral in the silicified zone, whereas alunite and kaolinite are dominant in the opalized zone. In addition, the distribution of unaltered country rocks was mapped because of the presence of spectrally dominant muscovite in the siltstone and calcite in limestone, and the tuffaceous rocks and playa deposits were distinguishable due to their relatively flat spectra and weak absorption features at 2.33 and 2.20 μm, respectively. An Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) image of the study area was processed using a methodology similar to that used with the ASTER data. Comparison of the ASTER and AVIRIS results shows that the results are generally similar, but the higher spectral resolution of AVIRIS (224 bands) permits identification of more individual minerals, including certain polymorphs. However, ASTER has recorded images of more than 90 percent of the Earth's land surface with less than 20 percent cloud cover, and these data are available at nominal or no cost. Landsat TM images have a similar spatial resolution to ASTER images, but TM has fewer bands, which limits its usefulness for making mineral determinations.

  16. Lithological discrimination of accretionary complex (Sivas, northern Turkey) using novel hybrid color composites and field data

    NASA Astrophysics Data System (ADS)

    Özkan, Mutlu; Çelik, Ömer Faruk; Özyavaş, Aziz

    2018-02-01

    One of the most appropriate approaches to better understand and interpret the geologic evolution of an accretionary complex is to make a detailed geologic map. The fact that ophiolite sequences consist of various rock types may require a unique image processing method to map each ophiolite body. The accretionary complex in the study area is composed mainly of ophiolitic and metamorphic rocks along with epi-ophiolitic sedimentary rocks. This paper attempts to map the Late Cretaceous accretionary complex in detail in northern Sivas (within the İzmir-Ankara-Erzincan Suture Zone in Turkey) by the analysis of all of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) bands and field study. The two new hybrid color composite images yield satisfactory results in delineating peridotite, gabbro, basalt, and epi-ophiolitic sedimentary rocks of the accretionary complex in the study area. While the first hybrid color composite image consists of one principal component (PC) and two band ratios (PC1, 3/4, 4/6 in the RGB), the PC5, the original ASTER band 4, and the 3/4 band ratio images were assigned to the RGB colors to generate the second hybrid color composite image. In addition, the spectral indices derived from the ASTER thermal infrared (TIR) bands clearly discriminate ultramafic, siliceous, and carbonate rocks from adjacent lithologies at a regional scale. Peridotites with varying degrees of serpentinization, shown as a single color, were best identified in the spectral index map. Furthermore, the boundaries of ophiolitic rocks based on fieldwork were outlined in detail in some parts of the study area by superimposing the resultant ASTER maps on Google Earth images of finer spatial resolution. Ultimately, the geologic map generated by the image analysis of ASTER data correlates strongly with lithological boundaries from the field survey.
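
    The sketch below assembles a "hybrid" RGB composite from a principal component and two band ratios, following the first recipe quoted in the record (PC1, band 3/4, band 4/6 in the RGB). The percentile stretch, the synthetic reflectance cube, and the band indexing convention (band 1 at index 0) are illustrative assumptions, not the authors' processing chain.

```python
# Hedged sketch: build an RGB composite from PC1 and two band ratios.
import numpy as np

def stretch(channel, lo=2, hi=98):
    """Percentile-stretch a 2-D array to the 0-1 range for display."""
    a, b = np.percentile(channel, [lo, hi])
    return np.clip((channel - a) / (b - a + 1e-12), 0, 1)

def hybrid_composite(cube):
    """cube: (rows, cols, bands) reflectance array, band 1 at index 0."""
    rows, cols, nb = cube.shape
    flat = cube.reshape(-1, nb)
    flat_c = flat - flat.mean(axis=0)
    # First principal component via SVD of the centered pixel matrix.
    _, _, vt = np.linalg.svd(flat_c, full_matrices=False)
    pc1 = (flat_c @ vt[0]).reshape(rows, cols)
    r34 = cube[..., 2] / (cube[..., 3] + 1e-12)     # band 3 / band 4
    r46 = cube[..., 3] / (cube[..., 5] + 1e-12)     # band 4 / band 6
    return np.dstack([stretch(pc1), stretch(r34), stretch(r46)])

rng = np.random.default_rng(6)
rgb = hybrid_composite(rng.uniform(0.05, 0.6, size=(80, 80, 9)))
print("composite shape:", rgb.shape,
      "value range:", round(float(rgb.min()), 2), "-", round(float(rgb.max()), 2))
```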

  17. Breaking the limits of structural and mechanical imaging of the heterogeneous structure of coal macerals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, L.; Tselev, A.; Jesse, S.

    The correlation between local mechanical (elasto-plastic) and structural (composition) properties of coal presents significant fundamental and practical interest for coal processing and the development of rheological models of coal to coke transformations and for advancing novel approaches. Here, we explore the relationship between the local structural, chemical composition and mechanical properties of coal using a combination of confocal micro-Raman imaging and band excitation atomic force acoustic microscopy (BE-AFAM) for a bituminous coal. This allows high resolution imaging (10s of nm) of mechanical properties of the heterogeneous (banded) architecture of coal and correlating them to the optical gap, average crystallite size, the bond-bending disorder of sp2 aromatic double bonds and the defect density. This methodology hence allows the structural and mechanical properties of coal components (lithotypes, microlithotypes, and macerals) to be understood, and related to local chemical structure, potentially allowing for knowledge-based modelling and optimization of coal utilization processes.

  18. Multispectral image analysis for object recognition and classification

    NASA Astrophysics Data System (ADS)

    Viau, C. R.; Payeur, P.; Cretu, A.-M.

    2016-05-01

    Computer and machine vision applications are used in numerous fields to analyze static and dynamic imagery in order to assist or automate decision-making processes. Advancements in sensor technologies now make it possible to capture and visualize imagery at various wavelengths (or bands) of the electromagnetic spectrum. Multispectral imaging has countless applications in various fields including (but not limited to) security, defense, space, medical, manufacturing and archeology. The development of advanced algorithms to process and extract salient information from the imagery is a critical component of the overall system performance. The fundamental objective of this research project was to investigate the benefits of combining imagery from the visual and thermal bands of the electromagnetic spectrum to improve the recognition rates and accuracy of commonly found objects in an office setting. A multispectral dataset (visual and thermal) was captured and features from the visual and thermal images were extracted and used to train support vector machine (SVM) classifiers. The SVM's class prediction ability was evaluated separately on the visual, thermal and multispectral testing datasets.

  19. Landsat-8 TIRS thermal radiometric calibration status

    USGS Publications Warehouse

    Barsi, Julia A.; Markham, Brian L.; Montanaro, Matthew; Gerace, Aaron; Hook, Simon; Schott, John R.; Raqueno, Nina G.; Morfitt, Ron

    2017-01-01

    The Thermal Infrared Sensor (TIRS) instrument is the thermal-band imager on the Landsat-8 platform. The initial on-orbit calibration estimates of the two TIRS spectral bands indicated large average radiometric calibration errors, -0.29 and -0.51 W/m2 sr μm or -2.1K and -4.4K at 300K in Bands 10 and 11, respectively, as well as high variability in the errors, 0.87K and 1.67K (1-σ), respectively. The average error was corrected in operational processing in January 2014, though this adjustment did not improve the variability. The source of the variability was determined to be stray light from far outside the field of view of the telescope. An algorithm for modeling the stray light effect was developed and implemented in the Landsat-8 processing system in February 2017. The new process has improved the overall calibration of the two TIRS bands, reducing the residual variability in the calibration from 0.87K to 0.51K at 300K for Band 10 and from 1.67K to 0.84K at 300K for Band 11. There are residual average lifetime bias errors in each band: 0.04 W/m2 sr μm (0.30K) and -0.04 W/m2 sr μm (-0.29K), for Bands 10 and 11, respectively.
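
    As a small illustration of the average-bias step described above, the sketch below removes a constant per-band radiance bias from synthetic TIRS-like radiance. The sign convention (error defined as reference minus observed) is an assumption for illustration only, the radiance values are synthetic, and the scene-dependent stray-light model itself is not shown.

```python
# Illustrative sketch only: apply an average per-band radiance bias correction.
import numpy as np

def apply_average_bias_correction(radiance, avg_error):
    """corrected = observed + avg_error, assuming avg_error = reference minus observed.
    The sign convention is assumed here; verify against the product documentation."""
    return radiance + avg_error

rng = np.random.default_rng(7)
l_b10 = rng.normal(8.5, 0.2, size=(50, 50))      # synthetic Band 10 radiance, W/(m^2 sr um)
corrected = apply_average_bias_correction(l_b10, avg_error=-0.29)  # value quoted in the record
print("mean before:", round(float(l_b10.mean()), 3),
      "after:", round(float(corrected.mean()), 3))
```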

  20. Distribution of hydrothermally altered rocks in the Reko Diq, Pakistan mineralized area based on spectral analysis of ASTER data

    USGS Publications Warehouse

    Rowan, L.C.; Schmidt, R.G.; Mars, J.C.

    2006-01-01

    The Reko Diq, Pakistan mineralized study area, approximately 10 km in diameter, is underlain by a central zone of hydrothermally altered rocks associated with Cu-Au mineralization. The surrounding country rocks are a variable mixture of unaltered volcanic rocks, fluvial deposits, and eolian quartz sand. Analysis of 15-band Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data of the study area, aided by laboratory spectral reflectance and spectral emittance measurements of field samples, shows that phyllically altered rocks are laterally extensive, and contain localized areas of argillically altered rocks. In the visible through shortwave-infrared (VNIR + SWIR) region, phyllically altered rocks are characterized by Al-OH absorption in ASTER band 6 because of molecular vibrations in muscovite, whereas argillically altered rocks have an absorption feature in band 5 resulting from alunite. Propylitically altered rocks form a peripheral zone and are present in scattered exposures within the main altered area. Chlorite and muscovite cause distinctive absorption features at 2.33 and 2.20 μm, respectively, although less intense 2.33 μm absorption is also present in image spectra of country rocks. Important complementary lithologic information was derived by analysis of the spectral emittance data in the 5 thermal-infrared (TIR) bands. Silicified rocks were not distinguished in the 9 VNIR + SWIR bands because of the lack of diagnostic spectral absorption features in quartz in this wavelength region. Quartz-bearing surficial deposits, as well as hydrothermally silicified rocks, were mapped in the TIR bands by using a band 13/band 12 ratio image, which is sensitive to the intensity of the quartz reststrahlen feature. Improved distinction between the quartzose surficial deposits and silicified bedrock was achieved by using matched-filter processing with TIR image spectra for reference. © 2006 Elsevier Inc. All rights reserved.
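
    The sketch below shows the generic matched-filter step referred to in this record: each pixel spectrum is scored against a reference (target) spectrum using the background mean and covariance. The spectra, the implanted target patch, and the regularization term are synthetic illustrations; a real run would use ASTER image or library spectra as the reference.

```python
# Hedged sketch of matched-filter scoring for a spectral image cube.
import numpy as np

def matched_filter(cube, target):
    """cube: (rows, cols, bands); target: (bands,). Returns an MF score image."""
    rows, cols, nb = cube.shape
    X = cube.reshape(-1, nb)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(nb)   # regularized background covariance
    w = np.linalg.solve(cov, target - mu)
    w = w / np.dot(target - mu, w)                       # normalize so a pure target scores ~1
    return ((X - mu) @ w).reshape(rows, cols)

rng = np.random.default_rng(8)
cube = rng.normal(0.3, 0.02, size=(60, 60, 9))
target = np.full(9, 0.3)
target[5] -= 0.08                                        # synthetic absorption in one band
cube[10:15, 10:15] = 0.7 * cube[10:15, 10:15] + 0.3 * target   # implant a sub-pixel target patch
scores = matched_filter(cube, target)
print("max score inside the implanted patch:", round(float(scores[10:15, 10:15].max()), 2))
```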

  1. BOREAS Level-3p Landsat TM Imagery: Geocoded and Scaled At-sensor Radiance

    NASA Technical Reports Server (NTRS)

    Nickeson, Jaime; Knapp, David; Newcomer, Jeffrey A.; Hall, Forrest G. (Editor); Cihlar, Josef

    2000-01-01

    For BOReal Ecosystem-Atmosphere Study (BOREAS), the level-3p Landsat Thematic Mapper (TM) data were used to supplement the level-3s Landsat TM products. Along with the other remotely sensed images, the Landsat TM images were collected in order to provide spatially extensive information over the primary study areas. This information includes radiant energy, detailed land cover, and biophysical parameter maps such as Fraction of Photosynthetically Active Radiation (FPAR) and Leaf Area Index (LAI). Although very similar to the level-3s Landsat TM products, the level-3p images were processed with ground control information, which improved the accuracy of the geographic coordinates provided. Geographically, the level-3p images cover the BOREAS Northern Study Area (NSA) and Southern Study Area (SSA). Temporally, the four images cover the period of 20-Aug-1988 to 07-Jun-1994. Except for the 07-Jun-1994 image, which contains seven bands, the other three contain only three bands.

  2. Space Radar Image of the Silk route in Niya, Taklamak, China

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This composite image is of an area thought to contain the ruins of the ancient settlement of Niya. It is located in the southwest corner of the Taklamakan Desert in China's Sinjiang Province. This region was part of some of China's earliest dynasties and from the third century BC on was traversed by the famous Silk Road. The Silk Road, passing east-west through this image, was an ancient trade route that led across Central Asia's desert to Persia, Byzantium and Rome. The multi-frequency, multi-polarized radar imagery was acquired on orbit 106 of the space shuttle Endeavour on April 16, 1994 by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar. The image is centered at 37.78 degrees north latitude and 82.41 degrees east longitude. The area shown is approximately 35 kilometers by 83 kilometers (22 miles by 51 miles). The image is a composite of an image from an Earth-orbiting satellite called Systeme Probatoire d'Observation de la Terre (SPOT)and a SIR-C multi-frequency, multi-polarized radar image. The false-color radar image was created by displaying the C-band (horizontally transmitted and received) return in red, the L-band (horizontally transmitted and received) return in green, and the L-band (horizontally transmitted and vertically received) return in blue. The prominent east/west pink formation at the bottom of the image is most likely a ridge of loosely consolidated sedimentary rock. The Niya River -- the black feature in the lower right of the French satellite image -- meanders north-northeast until it clears the sedimentary ridge, at which point it abruptly turns northwest. Sediment and evaporite deposits left by the river over millennia dominate the center and upper right of the radar image (in light pink). High ground, ridges and dunes are seen among the riverbed meanderings as mottled blue. Through image enhancement and analysis, a new feature probably representing a man-made canal has been discovered and mapped. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: the L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.v.(DLR), the major partner in science, operations and data processing of X-SAR.

  3. Space Radar Image of Houston, Texas

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This image of Houston, Texas, shows the amount of detail that is possible to obtain using spaceborne radar imaging. Images such as this, obtained by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) flying aboard the space shuttle Endeavour last fall, can become an effective tool for urban planners who map and monitor land use patterns in urban, agricultural and wetland areas. Central Houston appears pink and white in the upper portion of the image, outlined and crisscrossed by freeways. The image was obtained on October 10, 1994, during the space shuttle's 167th orbit. The area shown is 100 kilometers by 60 kilometers (62 miles by 38 miles) and is centered at 29.38 degrees north latitude, 95.1 degrees west longitude. North is toward the upper left. The pink areas designate urban development while the green- and blue-patterned areas are agricultural fields. Black areas are bodies of water, including Galveston Bay along the right edge and the Gulf of Mexico at the bottom of the image. Interstate 45 runs from top to bottom through the image. The narrow island at the bottom of the image is Galveston Island, with the city of Galveston at its northeast (right) end. The dark cross in the upper center of the image is Hobby Airport. Ellington Air Force Base is visible below Hobby on the other side of Interstate 45. Clear Lake is the dark body of water in the middle right of the image. The green square just north of Clear Lake is Johnson Space Center, home of Mission Control and the astronaut training facilities. The black rectangle with a white center that appears to the left of the city center is the Houston Astrodome. The colors in this image were obtained using the following radar channels: red represents the L-band (horizontally transmitted, vertically received); green represents the C-band (horizontally transmitted, vertically received); blue represents the C-band (horizontally transmitted and received). Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V. (DLR), the major partner in science, operations and data processing of X-SAR.

  4. On the Relation Between Facular Bright Points and the Magnetic Field

    NASA Astrophysics Data System (ADS)

    Berger, Thomas; Shine, Richard; Tarbell, Theodore; Title, Alan; Scharmer, Goran

    1994-12-01

    Multi-spectral images of magnetic structures in the solar photosphere are presented. The images were obtained in the summers of 1993 and 1994 at the Swedish Solar Telescope on La Palma using the tunable birefringent Solar Optical Universal Polarimeter (SOUP filter), a 10 Angstroms wide interference filter tuned to 4304 Angstroms in the band head of the CH radical (the Fraunhofer G-band), and a 3 Angstroms wide interference filter centered on the Ca II K absorption line. Three large format CCD cameras with shuttered exposures on the order of 10 msec and frame rates of up to 7 frames per second were used to create time series of both quiet and active region evolution. The full field of view is 60 × 80 arcseconds (44 × 58 Mm). With the best seeing, structures as small as 0.22 arcseconds (160 km) in diameter are clearly resolved. Post-processing of the images results in rigid coalignment of the image sets to an accuracy comparable to the spatial resolution. Facular bright points with mean diameters of 0.35 arcseconds (250 km) and elongated filaments with lengths on the order of arcseconds (10^3 km) are imaged with contrast values of up to 60% by the G-band filter. Overlay of these images on contemporaneous Fe I 6302 Angstroms magnetograms and Ca II K images reveals that the bright points occur, without exception, on sites of magnetic flux through the photosphere. However, instances of concentrated and diffuse magnetic flux and Ca II K emission without associated bright points are common, leading to the conclusion that the presence of magnetic flux is a necessary but not sufficient condition for the occurrence of resolvable facular bright points. Comparison of the G-band and continuum images shows a complex relation between structures in the two bandwidths: bright points exceeding 350 km in extent correspond to distinct bright structures in the continuum; smaller bright points show no clear relation to continuum structures. Size and contrast statistical cross-comparisons compiled from measurements of over two-thousand bright point structures are presented. Preliminary analysis of the time evolution of bright points in the G-band reveals that the dominant mode of bright point evolution is fission of larger structures into smaller ones and fusion of small structures into conglomerate structures. The characteristic time scale for the fission/fusion process is on the order of minutes.

  5. GTC/CanariCam Mid-IR Imaging of the Fullerene-rich Planetary Nebula IC 418: Searching for the Spatial Distribution of Fullerene-like Molecules

    NASA Astrophysics Data System (ADS)

    Díaz-Luis, J. J.; García-Hernández, D. A.; Manchado, A.; García-Lario, P.; Villaver, E.; García-Segura, G.

    2018-03-01

    We present seeing-limited narrow-band mid-IR GTC/CanariCam images of the spatially extended fullerene-containing planetary nebula (PN) IC 418. The narrow-band images cover the C60 fullerene band at 17.4 μm, the polycyclic aromatic hydrocarbon like (PAH-like) feature at 11.3 μm, the broad 9–13 μm feature, and their adjacent continua at 9.8 and 20.5 μm. We study the relative spatial distribution of these complex species, all detected in the Spitzer and Infrared Space Observatory spectra of IC 418, with the aim of getting observational constraints to the formation process of fullerenes in H-rich circumstellar environments. A similar ring-like extended structure is seen in all narrow-band filters, except in the dust continuum emission at 9.8 μm, which peaks closer to the central star. The continuum-subtracted images display a clear ring-like extended structure for the carrier of the broad 9–13 μm emission, while the spatial distribution of the (PAH-like) 11.3 μm emission is not so well defined. Interestingly, a residual C60 17.4 μm emission (at about 4σ from the sky background) is seen when subtracting the dust continuum emission at 20.5 μm. This residual C60 emission, if real, might have several interpretations, the most exciting being perhaps that other fullerene-based species like hydrogenated fullerenes with very low H-content may contribute to the observed 17.4 μm emission. We conclude that higher sensitivity mid-IR images and spatially resolved spectroscopic observations (especially in the Q-band) are necessary to get some clues about fullerene formation in PNe.

  6. SSTL UK-DMC SLIM-6 data quality assessment

    USGS Publications Warehouse

    Chander, G.; Saunier, S.; Choate, M.J.; Scaramuzza, P.L.

    2009-01-01

    Satellite data from the Surrey Satellite Technology Limited (SSTL) United Kingdom (UK) Disaster Monitoring Constellation (DMC) were assessed for geometric and radiometric quality. The UK-DMC Surrey Linear Imager 6 (SLIM-6) sensor has a 32-m spatial resolution and a ground swath width of 640 km. The UK-DMC SLIM-6 design consists of a three-band imager with green, red, and near-infrared bands that are set to similar bandpass as Landsat bands 2, 3, and 4. The UK-DMC data consisted of imagery registered to Landsat orthorectified imagery produced from the GeoCover program. Relief displacements within the UK-DMC SLIM-6 imagery were accounted for by using global 1-km digital elevation models available through the Global Land One-km Base Elevation (GLOBE) Project. Positional accuracy and relative band-to-band accuracy were measured. Positional accuracy of the UK-DMC SLIM-6 imagery was assessed by measuring the imagery against digital orthophoto quadrangles (DOQs), which are designed to meet national map accuracy standards at 1 : 24 000 scales; this corresponds to a horizontal root-mean-square accuracy of about 6 m. The UK-DMC SLIM-6 images were typically registered to within 1.0-1.5 pixels to the DOQ mosaic images. Several radiometric artifacts like striping, coherent noise, and flat detector were discovered and studied. Indications are that the SSTL UK-DMC SLIM-6 data have few artifacts and calibration challenges, and these can be adjusted or corrected via calibration and processing algorithms. The cross-calibration of the UK-DMC SLIM-6 and Landsat 7 Enhanced Thematic Mapper Plus was performed using image statistics derived from large common areas observed by the two sensors.

  7. Feasibility study and quality assessment of unmanned aircraft system-derived multispectral images

    NASA Astrophysics Data System (ADS)

    Chang, Kuo-Jen

    2017-04-01

    The purpose of this study is to explore the precision and the applicability of UAS-derived multispectral images. In this study, the Micro-MCA6 multispectral camera was mounted on a quadcopter. The Micro-MCA6 shoots synchronized images of each single band. By means of geotagged images and control points, the orthomosaic images of each single band were first generated at 14 cm resolution. The complete multispectral image was then merged from the 6 bands. In order to improve the spatial resolution, the 6-band image was fused with a 9 cm resolution image taken from an RGB camera. The quality of each single band was evaluated by using control points and check points. The standard deviations of errors are within 1 to 2 pixels of each band's resolution. The quality of the multispectral image is also compared with a 3 cm resolution orthomosaic RGB image gathered from the UAV in the same mission. The standard deviations of errors are within 2 to 3 pixels. The results show that the errors arise from blurring and from band dislocation at object edges. Finally, the normalized difference vegetation index (NDVI) was extracted from the image to explore the condition of vegetation and the nature of the environment. This study demonstrates the feasibility and the capability of high resolution multispectral images.
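
    A minimal sketch of the NDVI step mentioned above is given below. It assumes co-registered red and near-infrared reflectance bands from the orthomosaic; the arrays here are synthetic and the band assignment is a placeholder.

```python
# Minimal sketch: NDVI = (NIR - Red) / (NIR + Red), computed per pixel.
import numpy as np

def ndvi(red, nir):
    """Return the NDVI image, guarding against division by zero."""
    denom = nir + red
    return np.divide(nir - red, denom, out=np.zeros_like(denom), where=denom != 0)

rng = np.random.default_rng(9)
red = rng.uniform(0.03, 0.15, size=(100, 100))   # synthetic red reflectance
nir = rng.uniform(0.2, 0.6, size=(100, 100))     # synthetic near-infrared reflectance
v = ndvi(red, nir)
print("NDVI range:", round(float(v.min()), 2), "to", round(float(v.max()), 2))
```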

  8. Space Radar Image of the Yucatan Impact Crater Site

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a radar image of the southwest portion of the buried Chicxulub impact crater in the Yucatan Peninsula, Mexico. The radar image was acquired on orbit 81 of space shuttle Endeavour on April 14, 1994 by the Spaceborne Imaging Radar C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR). The image is centered at 20 degrees north latitude and 90 degrees west longitude. Scientists believe the crater was formed by an asteroid or comet which slammed into the Earth more than 65 million years ago. It is this impact crater that has been linked to a major biological catastrophe where more than 50 percent of the Earth's species, including the dinosaurs, became extinct. The 180- to 300-kilometer-diameter (110- to 180-mile) crater is buried by 300 to 1,000 meters (1,000 to 3,000 feet) of limestone. The exact size of the crater is currently being debated by scientists. This is a total power radar image with L-band in red, C-band in green, and the difference between the C-band and L-band in blue. The 10-kilometer-wide (6-mile) band of yellow and pink with blue patches along the top left (northwestern side) of the image is a mangrove swamp. The blue patches are islands of tropical forests created by freshwater springs that emerge through fractures in the limestone bedrock and are most abundant in the vicinity of the buried crater rim. The fracture patterns and wetland hydrology in this region are controlled by the structure of the buried crater. Scientists are using the SIR-C/X-SAR imagery to study wetland ecology and help determine the exact size of the impact crater. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V. (DLR), the major partner in science, operations, and data processing of X-SAR. Research on the biological effects of the Chicxulub impact is supported by the NASA Exobiology Program.

  9. Landsat Data Continuity Mission Calibration and Validation

    NASA Technical Reports Server (NTRS)

    Markham, Brian L.; Dabney, Philip W.; Storey, James C.; Morfitt, Ron; Knight, Ed; Kvaran, Geir; Lee, Kenton

    2008-01-01

    The primary payload for the Landsat Data Continuity Mission (LDCM) is the Operational Land Imager (OLI), being built by Ball Aerospace and Technologies, under contract to NASA. The OLI has spectral bands similar to the Landsat-7 ETM+, minus the thermal band and with two new bands, a 443 nm band and a 1375 nm cirrus detection band. On-board calibration systems include two solar diffusers (routine and pristine), a shutter and three sets of internal lamps (routine, backup and pristine). Being a pushbroom design, as opposed to the whiskbroom design of ETM+, the system poses new challenges for characterization and calibration, chief among them being the large focal plane with 75000+ detectors. A comprehensive characterization and calibration plan is in place for the instrument and the data throughout the mission including Ball, NASA and the United States Geological Survey, which will take over operations of LDCM after on-orbit commissioning. Driving radiometric calibration requirements for OLI data include radiance calibration to 5% uncertainty (1σ); reflectance calibration to 3% uncertainty (1σ); and relative (detector-to-detector) calibration to 0.5% (1σ). Driving geometric calibration requirements for OLI include band-to-band registration of 4.5 meters (90% confidence), absolute geodetic accuracy of 65 meters (90% CE) and relative geodetic accuracy of 25 meters (90% CE). Key spectral, spatial and radiometric characterization of the OLI will occur in thermal vacuum at Ball Aerospace. During commissioning the OLI will be characterized and calibrated using celestial (sun, moon, stars) sources and terrestrial sources. The USGS EROS ground processing system will incorporate an image assessment system similar to Landsat-7 for characterization and calibration. This system will have the added benefit that characterization data will be extracted as part of the normal image data processing, so that the characterization data available will be significantly larger than for Landsat-7 ETM+.

  10. Development of an imaging system for the detection of alumina on turbine blades

    NASA Astrophysics Data System (ADS)

    Greenwell, S. J.; Kell, J.; Day, J. C. C.

    2014-03-01

    An imaging system capable of detecting alumina on turbine blades by acquiring LED-induced fluorescence images has been developed. Acquiring fluorescence images at adjacent spectral bands allows the system to distinguish alumina from fluorescent surface contaminants. Repair and overhaul processes require that alumina is entirely removed from the blades by grit blasting and chemical stripping. The capability of the system to detect alumina has been investigated with two series of turbine blades provided by Rolls-Royce plc. The results illustrate that the system provides a superior inspection method to visual assessment when ascertaining whether alumina is present on turbine blades during repair and overhaul processes.

  11. A Preliminary Investigation of Systematic Noise in Data Acquired with the Airborne Imaging Spectrometer

    NASA Technical Reports Server (NTRS)

    Masuoka, E.

    1985-01-01

    Systematic noise is present in Airborne Imaging Spectrometer (AIS) data collected on October 26, 1983 and May 5, 1984 in grating position 0 (1.2 to 1.5 microns). In the October data set the noise occurs as 135 scan lines of low DN's every 270 scan lines. The noise is particularly bad in bands nine through thirty, restricting effective analysis to, at best, ten of the 32 bands. In the May data the regions of severe noise have been eliminated, but systematic noise is present with three frequencies (3, 106 and 200 scan lines) in all thirty-two bands. The periodic nature of the noise in both data sets suggests that it could be removed as part of routine processing. This removal is necessary before classification routines or statistical analyses are applied to these data.
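
    The abstract notes that the periodic nature of the striping suggests it could be removed in routine processing. Purely as an illustration (the AIS processing chain itself is not described here, and the function and parameters below are hypothetical), one way to notch out noise with a known along-track period using numpy is:

    ```python
    import numpy as np

    def suppress_periodic_line_noise(band, period, width=1):
        """Suppress noise with a known along-track period (in scan lines) by
        notching its fundamental frequency in a 1-D FFT taken along the
        scan-line axis of a single band (harmonics could be notched the same way)."""
        rows, cols = band.shape
        spectrum = np.fft.rfft(band, axis=0)          # FFT along the scan-line axis
        freqs = np.fft.rfftfreq(rows, d=1.0)          # cycles per scan line
        target = 1.0 / period                         # e.g. 1/270 cycles per line
        notch = np.abs(freqs - target) < width / rows
        spectrum[notch, :] = 0.0                      # zero the offending bins
        return np.fft.irfft(spectrum, n=rows, axis=0)

    # Example: synthetic band with a 270-line periodic drop in DN values
    band = np.random.normal(100, 5, size=(1080, 64))
    band[np.arange(1080) % 270 < 135] -= 20           # low-DN stripes
    cleaned = suppress_periodic_line_noise(band, period=270.0)
    ```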

  12. Landsat-8 Thermal Infrared Sensor (TIRS) Vicarious Radiometric Calibration

    NASA Technical Reports Server (NTRS)

    Barsi, Julia A.; Shott, John R.; Raqueno, Nina G.; Markham, Brian L.; Radocinski, Robert G.

    2014-01-01

    Launched in February 2013, Landsat-8 carries on board the Thermal Infrared Sensor (TIRS), a two-band thermal pushbroom imager, to maintain the thermal imaging capability of the Landsat program. The TIRS bands are centered at roughly 10.9 and 12 micrometers (Bands 10 and 11, respectively). They have 100 m spatial resolution and image coincidentally with the Operational Land Imager (OLI), also on board Landsat-8. The TIRS instrument has an internal calibration system consisting of a variable-temperature blackbody and a special viewport through which it can see deep space; a two-point calibration can be performed twice an orbit. Immediately after launch, a rigorous vicarious calibration program was started to validate the absolute calibration of the system. The two vicarious calibration teams, NASA/Jet Propulsion Laboratory (JPL) and the Rochester Institute of Technology (RIT), both make use of buoys deployed on large water bodies as the primary monitoring technique. RIT took advantage of a cross-calibration opportunity soon after launch, when Landsat-8 and Landsat-7 were imaging the same targets within a few minutes of each other, to perform a validation of the absolute calibration. Terra MODIS is also being used for regular monitoring of the TIRS absolute calibration. The initial buoy results showed a large error in both bands, 0.29 and 0.51 W/(m²·sr·µm), or -2.1 K and -4.4 K at 300 K, in Bands 10 and 11 respectively, with the TIRS data too hot. A calibration update was recommended for both bands to correct for a bias error and was implemented on 3 February 2014 in the USGS/EROS processing system, but the residual variability is still larger than desired for both bands (0.12 and 0.2 W/(m²·sr·µm), or 0.87 and 1.67 K at 300 K). Additional work has uncovered the source of the calibration error: out-of-field stray light. While analysis continues to characterize the stray light contribution, the vicarious calibration work proceeds. The additional data have not changed the statistical assessment but indicate that the correction (particularly in Band 11) is probably only valid for a subset of data. While the stray light effect is small enough in Band 10 to make the data useful across a wide array of applications, the effect in Band 11 is larger and the vicarious results suggest that Band 11 data should not be used where absolute calibration is required.
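
    The radiance biases quoted above map to kelvin-at-300-K figures through the slope of the Planck curve. As a hedged sketch (not part of the TIRS processing system; the 10.9 µm band centre is taken from the abstract), the conversion can be reproduced numerically:

    ```python
    import numpy as np

    H = 6.626e-34   # Planck constant, J*s
    C = 2.998e8     # speed of light, m/s
    K = 1.381e-23   # Boltzmann constant, J/K

    def planck_radiance(wavelength_um, temp_k):
        """Spectral radiance in W / (m^2 sr um) at the given wavelength."""
        lam = wavelength_um * 1e-6
        b = 2 * H * C**2 / lam**5 / (np.exp(H * C / (lam * K * temp_k)) - 1.0)
        return b * 1e-6   # per metre of wavelength -> per micrometre

    def radiance_bias_to_temp_bias(delta_l, wavelength_um, temp_k=300.0):
        """Convert a small radiance bias into an equivalent brightness
        temperature bias using a numerical derivative of the Planck curve."""
        dT = 0.01
        dBdT = (planck_radiance(wavelength_um, temp_k + dT) -
                planck_radiance(wavelength_um, temp_k - dT)) / (2 * dT)
        return delta_l / dBdT

    # Band 10 bias reported in the abstract: ~0.29 W/(m^2 sr um) at 300 K
    print(radiance_bias_to_temp_bias(0.29, 10.9))   # roughly 2 K
    ```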

  13. Optical design and system calibration for three-band spectral imaging system with interchangeable filters

    USDA-ARS?s Scientific Manuscript database

    The design and calibration of a three-band image acquisition system was reported. The prototype system developed in this research was a three-band spectral imaging system that acquired two visible (510 and 568 nm) images and a near-infrared (NIR) (800 nm) image simultaneously. The system was proto...

  14. Pre-slip and Localized Strain Band - A Study Based on Large Sample Experiment and DIC

    NASA Astrophysics Data System (ADS)

    Ji, Y.; Zhuo, Y. Q.; Liu, L.; Ma, J.

    2017-12-01

    The meta-instability stage (MIS) is the stage that occurs between a fault reaching its peak differential stress and the onset of the final stress drop. It is the crucial stage during which a fault transitions from "stick" to "slip". Therefore, if one can quantitatively analyze the spatial and temporal characteristics of the deformation field of a fault at the MIS, it will be of great significance both to fault mechanics and to earthquake prediction studies. To do so, a series of stick-slip experiments was conducted using a biaxial servo-controlled pressure machine. Digital images of the sample surfaces were captured by a high-speed camera and processed using a digital image correlation (DIC) method. If images of a rock sample are acquired before and after deformation, DIC can be used to infer the displacement and strain fields. In our study, sample images were captured at a rate of 1000 frames per second at a resolution of 2048 by 2048 pixels. The displacement field, strain field and fault displacement were calculated from the captured images. Our data show that (1) pre-sliding can be a three-stage process, with a relatively long and slow first stage at a slipping rate of 7.9 nm/s, a relatively short and fast second stage at 3 µm/s, and a final stage that lasted only 0.2 s but in which the slipping rate reached as high as 220 µm/s. (2) Localized strain bands were observed nearly perpendicular to the fault. A possible mechanism is that the pre-sliding is distributed heterogeneously along the fault: segments that have slid adequately and segments that have slid less become the constraint conditions for deformation of the adjacent subregions, and the localized deformation bands tend to radiate from the points where sliding is discontinuous. While the adequately sliding segments compete with the less-slid ones, the strain bands evolve accordingly.
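
    The displacement fields behind these observations come from digital image correlation. The paper's full DIC pipeline (subset matching, subpixel refinement, strain computation) is not reproduced here; the minimal numpy sketch below only illustrates the core correlation step that recovers a subset's integer-pixel displacement between two frames:

    ```python
    import numpy as np

    def subset_displacement(ref, cur):
        """Integer-pixel displacement of a deformed subset relative to a
        reference subset, taken from the peak of the FFT-based
        cross-correlation (the core operation behind DIC)."""
        ref0 = ref - ref.mean()
        cur0 = cur - cur.mean()
        corr = np.fft.ifft2(np.fft.fft2(cur0) * np.conj(np.fft.fft2(ref0))).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        dims = np.array(ref.shape)
        shift = np.array(peak, dtype=float)
        shift[shift > dims / 2] -= dims[shift > dims / 2]   # wrap to signed shifts
        return shift   # (row, col) displacement in pixels

    # Example: a 64x64 subset shifted by (3, -2) pixels
    rng = np.random.default_rng(0)
    ref = rng.normal(size=(64, 64))
    cur = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)
    print(subset_displacement(ref, cur))   # approximately [ 3. -2.]
    ```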

  15. Space Radar Image of Niya ruins, Taklamakan desert

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This radar image is of an area thought to contain the ruins of the ancient settlement of Niya. It is located in the southwestern corner of the Taklamakan Desert in China's Sinjiang Province. This oasis was part of the famous Silk Road, an ancient trade route from one of China's earliest capitals, Xian, to the West. The image shows a white linear feature trending diagonally from the upper left to the lower right. Scientists believe this newly discovered feature is a man-made canal which presumably diverted river waters toward the settlement of Niya for irrigation purposes. The image was acquired by the Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the space shuttle Endeavour on its 106th orbit on April 16, 1994, and is centered at 37.78 degrees north latitude and 82.41 degrees east longitude. The false-color radar image was created by displaying the C-band (horizontally transmitted and received) return in red, the L-band (horizontally transmitted and received) return in green, and the L-band (horizontally transmitted and vertically received) return in blue. Areas in mottled white and purple are low-lying floodplains of the Niya River. Dark green and black areas between river courses are higher ridges or dunes confining the water flow. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: the L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V. (DLR), the major partner in science, operations and data processing of X-SAR.

  16. Directional analysis and filtering for dust storm detection in NOAA-AVHRR imagery

    NASA Astrophysics Data System (ADS)

    Janugani, S.; Jayaram, V.; Cabrera, S. D.; Rosiles, J. G.; Gill, T. E.; Rivera Rivera, N.

    2009-05-01

    In this paper, we propose spatio-spectral processing techniques for the detection of dust storms and automatically finding its transport direction in 5-band NOAA-AVHRR imagery. Previous methods that use simple band math analysis have produced promising results but have drawbacks in producing consistent results when low signal to noise ratio (SNR) images are used. Moreover, in seeking to automate the dust storm detection, the presence of clouds in the vicinity of the dust storm creates a challenge in being able to distinguish these two types of image texture. This paper not only addresses the detection of the dust storm in the imagery, it also attempts to find the transport direction and the location of the sources of the dust storm. We propose a spatio-spectral processing approach with two components: visualization and automation. Both approaches are based on digital image processing techniques including directional analysis and filtering. The visualization technique is intended to enhance the image in order to locate the dust sources. The automation technique is proposed to detect the transport direction of the dust storm. These techniques can be used in a system to provide timely warnings of dust storms or hazard assessments for transportation, aviation, environmental safety, and public health.
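
    The paper's own band math is not spelled out in the abstract; for context, a common "simple band math" dust test for AVHRR is the split-window brightness temperature difference, which tends to go negative over mineral dust. A hedged numpy sketch (channel naming and the threshold are illustrative, not taken from the paper):

    ```python
    import numpy as np

    def dust_mask_btd(bt_ch4, bt_ch5, threshold=-0.5):
        """Flag likely dust pixels from the AVHRR split-window brightness
        temperature difference BTD = BT(ch4, ~11 um) - BT(ch5, ~12 um).
        Mineral dust typically drives the BTD negative, whereas clear sky
        and most clouds keep it near zero or positive."""
        btd = bt_ch4 - bt_ch5
        return btd < threshold

    # Example with synthetic brightness temperatures (kelvin)
    bt4 = np.full((256, 256), 295.0)
    bt5 = bt4 - 0.5                       # clear sky: BTD slightly positive
    bt4[100:150, 100:150] = 288.0         # dust plume depresses the 11 um channel
    bt5[100:150, 100:150] = 289.0         # ... less so at 12 um -> negative BTD
    mask = dust_mask_btd(bt4, bt5)
    print(mask.sum(), "pixels flagged as dust")
    ```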

  17. Space Radar Image of the Lost City of Ubar

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a radar image of the region around the site of the lost city of Ubar in southern Oman, on the Arabian Peninsula. The ancient city was discovered in 1992 with the aid of remote sensing data. Archeologists believe Ubar existed from about 2800 B.C. to about 300 A.D. and was a remote desert outpost where caravans were assembled for the transport of frankincense across the desert. This image was acquired on orbit 65 of space shuttle Endeavour on April 13, 1994 by the Spaceborne Imaging Radar C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR). The SIR-C image shown is centered at 18.4 degrees north latitude and 53.6 degrees east longitude. The image covers an area about 50 by 100 kilometers (31 miles by 62 miles). The image is constructed from three of the available SIR-C channels and displays L-band, HH (horizontal transmit and receive) data as red, C-band HH as blue, and L-band HV (horizontal transmit, vertical receive) as green. The prominent magenta colored area is a region of large sand dunes, which are bright reflectors at both L- and C-band. The prominent green areas (L-HV) are rough limestone rocks, which form a rocky desert floor. A major wadi, or dry stream bed, runs across the middle of the image and is shown largely in white due to strong radar scattering in all channels displayed (L and C HH, L-HV). The actual site of the fortress of the lost city of Ubar, currently under excavation, is near the wadi close to the center of the image. The fortress is too small to be detected in this image. However, tracks leading to the site, and surrounding tracks, appear as prominent, but diffuse, reddish streaks. These tracks have been used in modern times, but field investigations show many of these tracks were in use in ancient times as well. Mapping of these tracks on regional remote sensing images was a key to recognizing the site as Ubar in 1992. This image, and ongoing field investigations, will help shed light on a little-known early civilization. Spaceborne Imaging Radar-C and X-Band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V. (DLR), the major partner in science, operations, and data processing of X-SAR.

  18. Space Radar Image of Central Sumatra, Indonesia

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a radar image of the central part of the island of Sumatra in Indonesia that shows how the tropical rainforest typical of this country is being impacted by human activity. Native forest appears in green in this image, while prominent pink areas represent places where the native forest has been cleared. The large rectangular areas have been cleared for palm oil plantations. The bright pink zones are areas that have been cleared since 1989, while the dark pink zones are areas that were cleared before 1989. These radar data were processed as part of an effort to assist oil and gas companies working in the area to assess the environmental impact of both their drilling operations and the activities of the local population. Radar images are useful in these areas because heavy cloud cover and the persistent smoke and haze associated with deforestation have prevented usable visible-light imagery from being acquired since 1989. The dark shapes in the upper right (northeast) corner of the image are a chain of lakes in flat coastal marshes. This image was acquired in October 1994 by the Spaceborne Imaging Radar C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) onboard the space shuttle Endeavour. Environmental changes can be easily documented by comparing this image with visible-light data that were acquired in previous years by the Landsat satellite. The image is centered at 0.9 degrees north latitude and 101.3 degrees east longitude. The area shown is 50 kilometers by 100 kilometers (31 miles by 62 miles). The colors in the image are assigned to different frequencies and polarizations of the radar as follows: red is L-band horizontally transmitted, horizontally received; green is L-band horizontally transmitted, vertically received; blue is L-band vertically transmitted, vertically received. SIR-C/X-SAR, a joint mission of the German, Italian and United States space agencies, is part of NASA's Mission to Planet Earth program.

  19. Space Radar Image of Central Sumatra, Indonesia

    NASA Image and Video Library

    1999-04-15

    This is a radar image of the central part of the island of Sumatra in Indonesia that shows how the tropical rainforest typical of this country is being impacted by human activity. Native forest appears in green in this image, while prominent pink areas represent places where the native forest has been cleared. The large rectangular areas have been cleared for palm oil plantations. The bright pink zones are areas that have been cleared since 1989, while the dark pink zones are areas that were cleared before 1989. These radar data were processed as part of an effort to assist oil and gas companies working in the area to assess the environmental impact of both their drilling operations and the activities of the local population. Radar images are useful in these areas because heavy cloud cover and the persistent smoke and haze associated with deforestation have prevented usable visible-light imagery from being acquired since 1989. The dark shapes in the upper right (northeast) corner of the image are a chain of lakes in flat coastal marshes. This image was acquired in October 1994 by the Spaceborne Imaging Radar C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) onboard the space shuttle Endeavour. Environmental changes can be easily documented by comparing this image with visible-light data that were acquired in previous years by the Landsat satellite. The image is centered at 0.9 degrees north latitude and 101.3 degrees east longitude. The area shown is 50 kilometers by 100 kilometers (31 miles by 62 miles). The colors in the image are assigned to different frequencies and polarizations of the radar as follows: red is L-band horizontally transmitted, horizontally received; green is L-band horizontally transmitted, vertically received; blue is L-band vertically transmitted, vertically received. SIR-C/X-SAR, a joint mission of the German, Italian and United States space agencies, is part of NASA's Mission to Planet Earth program. http://photojournal.jpl.nasa.gov/catalog/PIA01797

  20. Space Radar Image of Central African Gorilla Habitat

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a false-color radar image of Central Africa, showing the Virunga Volcano chain along the borders of Rwanda, Zaire and Uganda. This area is home to the endangered mountain gorillas. This C-band and L-band image was acquired on April 12, 1994, on orbit 58 of space shuttle Endeavour by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR). The area is centered at about 1.75 degrees south latitude and 29.5 degrees east longitude. The image covers an area 58 kilometers by 178 kilometers (48 miles by 178 miles). The false-color composite is created by displaying the L-band HH return in red, the L-band HV return in green and the C-band HH return in blue. The dark area in the bottom of the image is Lake Kivu, which forms the border between Zaire (to the left) and Rwanda (to the right). The airport at Goma, Zaire is shown as a dark line just above the lake in the bottom left corner of the image. Volcanic flows from the 1977 eruption of Mt. Nyiragongo are shown just north of the airport. Mt. Nyiragongo is not visible in this image because it is located just to the left of the image swath. Very fluid lava flows from the 1977 eruption killed 70 people. Mt. Nyiragongo is currently erupting (August 1994) and will be a target of observation during the second flight of SIR-C/X-SAR. The large volcano in the center of the image is Mt. Karisimbi (4,500 meters or 14,800 feet). This radar image highlights subtle differences in the vegetation and volcanic flows of the region. The faint lines shown in the purple regions are believed to be the result of agricultural terracing by the people who live in the region. The vegetation types are an important factor in the habitat of the endangered mountain gorillas. Researchers at Rutgers University in New Jersey and the Dian Fossey Gorilla Fund in London will use these data to produce vegetation maps of the area to aid in their study of the remaining 650 gorillas in the region. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V. (DLR), the major partner in science, operations and data processing of X-SAR.

  1. Space Radar Image of San Francisco, California

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a radar image of San Francisco, California, taken on October 3, 1994. The image is about 40 kilometers by 55 kilometers (25 miles by 34 miles) with north toward the upper right. Downtown San Francisco is visible in the center of the image with the city of Oakland east (to the right) across San Francisco Bay. Also visible in the image are the Golden Gate Bridge (left center) and the Bay Bridge connecting San Francisco and Oakland. North of the Bay Bridge is Treasure Island. Alcatraz Island appears as a small dot northwest of Treasure Island. This image was acquired by the Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the space shuttle Endeavour on orbit 56. The image is centered at 37 degrees north latitude, 122 degrees west longitude. This single-frequency SIR-C image was obtained by the L-band (24 cm) radar channel, horizontally transmitted and received. Portions of the Pacific Ocean visible in this image appear very dark, as do other smooth surfaces such as airport runways. Suburban areas, with the low-density housing and tree-lined streets that are typical of San Francisco, appear as lighter gray. Areas with high-rise buildings, such as those seen in the downtown areas, appear in very bright white, showing a higher density of housing and streets which run parallel to the radar flight track. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: the L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V. (DLR), the major partner in science, operations and data processing of X-SAR.

  2. Space Radar Image of Mammoth, California

    NASA Technical Reports Server (NTRS)

    1999-01-01

    These two images were created using data from the Spaceborne Imaging Radar C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR). The image on the left is a false-color composite of the Mammoth Mountain area in California's Sierra Nevada Mountains, centered at 37.6 degrees north, 119.0 degrees west. It was acquired on board the space shuttle Endeavour on its 67th orbit on April 13, 1994. In the image on the left, red is C-band HV-polarization, green is C-band HH-polarization and blue is the ratio of C-band VV-polarization to C-band HV-polarization. On the right is a classification map of the surface features, which was developed by SIR-C/X-SAR science team members at the University of California, Santa Barbara. The area is about 23 by 46 kilometers (14 by 29 miles). In the classification image, the colors represent the following surfaces: white, snow; red, frozen lake covered by snow; brown, bare ground; blue, lake (open water); yellow, short vegetation (mainly brush); green, sparse forest; and dark green, dense forest. Maps like this one are helpful to scientists studying snow wetness and snow water equivalent in the snow pack. Across the globe, over major portions of the middle and high latitudes, and at high elevations in the tropical latitudes, snow and alpine glaciers are the largest contributors to run-off in rivers and to ground-water recharge. Snow hydrologists are using radar in an attempt to estimate both the quantity of water held by seasonal snow packs and the timing of snow melt. Snow and ice also play important roles in regional climates; understanding the processes in seasonal snow cover is also important for studies of the chemical balance of alpine drainage basins. SIR-C/X-SAR is a powerful tool because it is sensitive to most snow pack conditions and is less influenced by weather conditions than other remote sensing instruments, such as the Landsat satellite. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V. (DLR), the major partner in science, operations and data processing of X-SAR.

  3. Compression of hyper-spectral images using an accelerated nonnegative tensor decomposition

    NASA Astrophysics Data System (ADS)

    Li, Jin; Liu, Zilong

    2017-12-01

    Nonnegative tensor Tucker decomposition (NTD) in a transform domain (e.g., the 2D-DWT) has been used in the compression of hyper-spectral images because it can remove redundancies between spectral bands and also exploit the spatial correlations of each band. However, an NTD has a very high computational cost. In this paper, we propose a low-complexity NTD-based compression method for hyper-spectral images. The method is based on a pair-wise multilevel grouping approach for the NTD that overcomes its high computational cost. The proposed method has low complexity with only a slight decrease in coding performance compared to the conventional NTD. We confirm the method experimentally: it requires less processing time and maintains better coding performance than the case in which the NTD is not used. The proposed approach has potential application in the lossy compression of hyper-spectral or multi-spectral images.
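
    The pair-wise multilevel grouping NTD itself is not reproduced here. As a simplified, unconstrained stand-in, a truncated higher-order SVD of a hyperspectral cube shows the core-plus-factor-matrices representation that Tucker-style compression schemes store (numpy only; the ranks below are illustrative):

    ```python
    import numpy as np

    def truncated_hosvd(cube, ranks):
        """Compress a hyperspectral cube (rows x cols x bands) into a small
        core tensor and one factor matrix per mode via truncated HOSVD,
        the unconstrained cousin of the nonnegative Tucker model."""
        factors = []
        for mode, r in enumerate(ranks):
            unfolded = np.moveaxis(cube, mode, 0).reshape(cube.shape[mode], -1)
            u, _, _ = np.linalg.svd(unfolded, full_matrices=False)
            factors.append(u[:, :r])
        core = cube
        for mode, u in enumerate(factors):   # core = cube x_n U_n^T for each mode
            core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
        return core, factors

    def reconstruct(core, factors):
        out = core
        for mode, u in enumerate(factors):
            out = np.moveaxis(np.tensordot(u, np.moveaxis(out, mode, 0), axes=1), 0, mode)
        return out

    cube = np.random.rand(64, 64, 32)                 # toy hyperspectral cube
    core, factors = truncated_hosvd(cube, ranks=(32, 32, 8))
    approx = reconstruct(core, factors)
    print(np.linalg.norm(cube - approx) / np.linalg.norm(cube))   # relative error
    ```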

  4. Phase Diversity Applied to Sunspot Observations

    NASA Astrophysics Data System (ADS)

    Tritschler, A.; Schmidt, W.; Knolker, M.

    We present preliminary results of a multi-colour phase diversity experiment carried out with the Multichannel Filter System of the Vacuum Tower Telescope at the Observatorio del Teide on Tenerife. We apply phase-diversity imaging to a time sequence of sunspot filtergrams taken in three continuum bands and correct for the seeing influence in each image. A newly developed phase diversity device, allowing the projection of both the focused and the defocused image onto a single CCD chip, was used in one of the wavelength channels. With the information about the wavefront obtained by the image reconstruction algorithm, the restoration of the other two bands can be performed as well. The processed and restored data set will then be used to derive the temperature and proper motion of the umbral dots. Data analysis is still under way, and final results will be given in a forthcoming article.

  5. Reversible integer wavelet transform for blind image hiding method

    PubMed Central

    Bibi, Nargis; Mahmood, Zahid; Akram, Tallha; Naqvi, Syed Rameez

    2017-01-01

    In this article, a blind, reversible data-hiding methodology for embedding secret data into a cover image is proposed. The key advantage of this work is that it addresses the privacy and secrecy issues raised during data transmission over the Internet. First, the data are decomposed into sub-bands using integer wavelets. For decomposition, the Fresnelet transform is utilized, which encrypts the secret data by choosing a unique key parameter to construct a dummy pattern. The dummy pattern is then embedded into an approximation sub-band of the cover image. The proposed method exhibits high capacity and good imperceptibility of the embedded secret data. With the use of a family of integer wavelets, the proposed approach becomes more efficient for the hiding and retrieval process. The secret hidden data are retrieved blindly, without requiring the original cover image. PMID:28498855
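
    The Fresnelet-based embedding is not reproduced here, but the reversibility that such schemes rely on comes from integer (lifting) wavelets, which map integers to integers and invert exactly. A minimal sketch of a one-level integer Haar (S-transform) step along image rows, showing the lossless round trip:

    ```python
    import numpy as np

    def int_haar_rows(img):
        """One-level integer Haar (S-transform) along rows: returns integer
        approximation and detail sub-bands; exactly invertible."""
        a = img[:, 0::2].astype(np.int64)
        b = img[:, 1::2].astype(np.int64)
        detail = a - b
        approx = b + np.floor_divide(detail, 2)
        return approx, detail

    def int_haar_rows_inverse(approx, detail):
        b = approx - np.floor_divide(detail, 2)
        a = detail + b
        out = np.empty((approx.shape[0], approx.shape[1] * 2), dtype=np.int64)
        out[:, 0::2], out[:, 1::2] = a, b
        return out

    cover = np.random.randint(0, 256, size=(8, 8))
    approx, detail = int_haar_rows(cover)
    restored = int_haar_rows_inverse(approx, detail)
    assert np.array_equal(cover, restored)     # lossless round trip
    ```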

  6. Parameter Estimation and Image Reconstruction of Rotating Targets with Vibrating Interference in the Terahertz Band

    NASA Astrophysics Data System (ADS)

    Yang, Qi; Deng, Bin; Wang, Hongqiang; Qin, Yuliang

    2017-07-01

    Rotation is one of the typical micro-motions of radar targets. In many cases, rotation of a target is accompanied by vibrating interference, which significantly affects parameter estimation and imaging, especially in the terahertz band. In this paper, we propose a parameter estimation method and an image reconstruction method based on the inverse Radon transform, time-frequency analysis, and its inverse. The method can separate and estimate the rotating Doppler and the vibrating Doppler simultaneously and can obtain high-quality reconstructed images after vibration compensation. In addition, a 322-GHz radar system and a 25-GHz commercial radar are introduced, and experiments on rotating corner reflectors are carried out. The results of the simulations and experiments verify the validity of the methods, which lay a foundation for the practical processing of terahertz radar.

  7. Deep Keck u-Band Imaging of the Hubble Ultra Deep Field: A Catalog of z ~ 3 Lyman Break Galaxies

    NASA Astrophysics Data System (ADS)

    Rafelski, Marc; Wolfe, Arthur M.; Cooke, Jeff; Chen, Hsiao-Wen; Armandroff, Taft E.; Wirth, Gregory D.

    2009-10-01

    We present a sample of 407 z ~ 3 Lyman break galaxies (LBGs) to a limiting isophotal u-band magnitude of 27.6 mag in the Hubble Ultra Deep Field. The LBGs are selected using a combination of photometric redshifts and the u-band drop-out technique enabled by the introduction of an extremely deep u-band image obtained with the Keck I telescope and the blue channel of the Low Resolution Imaging Spectrometer. The Keck u-band image, totaling 9 hr of integration time, has a 1σ depth of 30.7 mag arcsec^-2, making it one of the most sensitive u-band images ever obtained. The u-band image also substantially improves the accuracy of photometric redshift measurements of ~50% of the z ~ 3 LBGs, significantly reducing the traditional degeneracy of colors between z ~ 3 and z ~ 0.2 galaxies. This sample provides the most sensitive, high-resolution multi-filter imaging of reliably identified z ~ 3 LBGs for morphological studies of galaxy formation and evolution and the star formation efficiency of gas at high redshift.

  8. DISCRIMINATION OF GRANITOIDS AND MINERALIZED GRANITOIDS IN THE MIDYAN REGION, NORTHWESTERN ARABIAN SHIELD, SAUDI ARABIA, BY LANDSAT MSS DATA-ANALYSIS.

    USGS Publications Warehouse

    Davis, Philip A.; Grolier, Maurice J.

    1984-01-01

    Landsat multispectral scanner (MSS) band and band-ratio databases of two scenes covering the Midyan region of northwestern Saudi Arabia were examined quantitatively and qualitatively to determine which databases best discriminate the geologic units of this semi-arid and arid region. Unsupervised, linear-discriminant cluster-analysis was performed on these two band-ratio combinations and on the MSS bands for both scenes. The results for granitoid-rock discrimination indicated that the classification images using the MSS bands are superior to the band-ratio classification images for two reasons, discussed in the paper. Yet, the effects of topography and material type (including desert varnish) on the MSS-band data produced ambiguities in the MSS-band classification results. However, these ambiguities were clarified by using a simulated natural-color image in conjunction with the MSS-band classification image.

  9. The influence of processor focus on speckle correlation statistics for a Shuttle imaging radar scene of Hurricane Josephine

    NASA Technical Reports Server (NTRS)

    Tilley, David G.

    1988-01-01

    The surface wave field produced by Hurricane Josephine was imaged by the L-band SAR aboard the Challenger on October 12, 1984. Exponential trends found in the two-dimensional autocorrelations of speckled image data support an equilibrium theory model of sea surface hydrodynamics. The notions of correlated specular reflection, surface coherence, optimal Doppler parameterization and spatial resolution are discussed within the context of a Poisson-Rayleigh statistical model of the SAR imaging process.

  10. Automatic building identification under bomb damage conditions

    NASA Astrophysics Data System (ADS)

    Woodley, Robert; Noll, Warren; Barker, Joseph; Wunsch, Donald C., II

    2009-05-01

    Given the vast amount of image intelligence utilized in support of planning and executing military operations, a passive automated image processing capability for target identification is urgently required. Furthermore, transmitting large image streams from remote locations would quickly consume the available bandwidth (BW), creating the need for processing to occur at the sensor location. This paper addresses the problem of automatic target recognition for battle damage assessment (BDA). We utilize an Adaptive Resonance Theory approach to cluster templates of target buildings. The results show that the network successfully distinguishes targets from non-targets in a virtual test-bed environment.

  11. Observation of the human body thermoregulation and extraction of its vein signature using NIR and MWIR imaging

    NASA Astrophysics Data System (ADS)

    Bouzida, Nabila; Bendada, Abdelhakim; Maldague, Xavier P.

    2009-05-01

    The article first presents a new study on the thermal regulatory response of the human skin surface while exposed to a cold environment. Our work has shown that when a cold stress is applied to the left hand, thermal infrared imaging (MWIR spectral band: 3-5 μm) allows a clear observation of a temperature rise on the right hand. Moreover, a frequency analysis was also carried out on selected vein pixels of the images monitored during the same cold-stress experiment. The objective was to identify the specific frequencies that could be linked to some physiological mechanisms of the human body. This kind of study could be very useful for the characterization of possible thermo-physiological pathologies. Besides thermoregulation, we also present in this article some results on the extraction of the hand vein pattern. First, we show some vein extraction results obtained after image processing of the thermal images recorded in the thermal band (MWIR); we then compare this vein pattern to the signature obtained with a camera operating in the NIR spectral band (0.85-1.7 μm). This method could be used as a complement to fingerprint signatures in biometrics.

  12. Using hyperspectral remote sensing for land cover classification

    NASA Astrophysics Data System (ADS)

    Zhang, Wendy W.; Sriharan, Shobha

    2005-01-01

    This project used a hyperspectral data set to classify land cover using remote sensing techniques. Many different earth-sensing satellites, with diverse sensors mounted on sophisticated platforms, are currently in earth orbit. These sensors are designed to cover a wide range of the electromagnetic spectrum and are generating enormous amounts of data that must be processed, stored, and made available to the user community. The Airborne Visible-Infrared Imaging Spectrometer (AVIRIS) collects data in 224 contiguous bands, each approximately 9.6 nm wide, between 0.40 and 2.45 µm. Hyperspectral sensors acquire images in many, very narrow, contiguous spectral bands throughout the visible, near-IR, and thermal IR portions of the spectrum. The unsupervised image classification procedure automatically categorizes the pixels in an image into land cover classes or themes. Experiments on using hyperspectral remote sensing for land cover classification were conducted during the 2003 and 2004 NASA Summer Faculty Fellowship Program at Stennis Space Center. Research Systems Inc.'s (RSI) ENVI software package was used in this application framework. In this application, emphasis was placed on: (1) spectrally oriented classification procedures for land cover mapping, particularly supervised surface classification using AVIRIS data; and (2) identifying data endmembers.

  13. Landsat 7 thermal-IR image sharpening using an artificial neural network and sensor model

    USGS Publications Warehouse

    Lemeshewsky, G.P.; Schowengerdt, R.A.; ,

    2001-01-01

    The Enhanced Thematic Mapper Plus (ETM+) instrument on Landsat 7 shares the same basic design as the TM sensors on Landsats 4 and 5, with some significant improvements. In common are six multispectral bands with a 30-m ground-projected instantaneous field of view (GIFOV). However, the thermal-IR (TIR) band now has a 60-m GIFOV instead of 120 m. Also, a 15-m panchromatic band has been added. The artificial neural network (NN) image sharpening method described here uses data from the higher spatial resolution ETM+ bands to enhance (sharpen) the spatial resolution of the TIR imagery. It is based on an assumed correlation, over multiple scales of resolution, between image edge contrast patterns in the TIR band and several other spectral bands. A multilayer, feedforward NN is trained to approximate TIR data at 60 m, given degraded (from 30-m to 60-m) spatial resolution input from spectral bands 7, 5, and 2. After training, the NN output for full-resolution input generates an approximation of a TIR image at 30-m resolution. Two methods are used to degrade the spatial resolution of the imagery used for NN training, and the corresponding sharpening results are compared. One degradation method uses a published sensor transfer function (TF) for Landsat 5 to simulate coarser resolution sensor imagery from higher resolution imagery. For comparison, the second degradation method is simply Gaussian low-pass filtering and subsampling, wherein the Gaussian filter approximates the full width at half maximum amplitude characteristics of the TF-based spatial filter. Two fixed-size NNs (that is, with the same number of weights and processing elements) were trained separately with the degraded resolution data, and the sharpening results were compared. The comparison evaluates the relative influence of the degradation technique employed and whether or not it is desirable to incorporate a sensor TF model. Preliminary results indicate some improvements for the sensor model-based technique. Further evaluation using a higher resolution reference image and strict application of the sensor model to the data is recommended.
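
    A schematic sketch of the train-coarse/apply-fine idea described above, using scikit-learn's MLPRegressor. The degradation filter, network size and band weights below are placeholders rather than the paper's sensor-model configuration:

    ```python
    import numpy as np
    from scipy.ndimage import zoom
    from sklearn.neural_network import MLPRegressor

    def degrade(band, factor=2):
        """Crude stand-in for the sensor transfer function: block-average,
        then upsample back to the original grid so shapes stay aligned."""
        coarse = band.reshape(band.shape[0] // factor, factor,
                              band.shape[1] // factor, factor).mean(axis=(1, 3))
        return zoom(coarse, factor, order=1)

    # b7, b5, b2 and tir are toy stand-ins for the ETM+ bands on a common grid
    rng = np.random.default_rng(1)
    b7, b5, b2 = (rng.random((64, 64)) for _ in range(3))
    tir = 0.5 * b7 + 0.3 * b5 + 0.2 * b2 + 0.05 * rng.random((64, 64))

    # Train on degraded inputs so the network learns the coarse-scale mapping,
    # then apply it to full-resolution inputs to "sharpen" the TIR band.
    X_train = np.stack([degrade(b) for b in (b7, b5, b2)], axis=-1).reshape(-1, 3)
    y_train = degrade(tir).ravel()
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(X_train, y_train)

    X_full = np.stack([b7, b5, b2], axis=-1).reshape(-1, 3)
    tir_sharpened = net.predict(X_full).reshape(64, 64)
    ```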

  14. Improving multispectral satellite image compression using onboard subpixel registration

    NASA Astrophysics Data System (ADS)

    Albinet, Mathieu; Camarero, Roberto; Isnard, Maxime; Poulet, Christophe; Perret, Jokin

    2013-09-01

    Future CNES earth observation missions will have to deal with an ever-increasing telemetry data rate due to improvements in resolution and the addition of spectral bands. Current CNES image compressors implement a discrete wavelet transform (DWT) followed by bit plane encoding (BPE), but only on a mono-spectral basis, and do not profit from the multispectral redundancy of the observed scenes. Recent CNES studies have proven a substantial gain in the achievable compression ratio, +20% to +40% on selected scenarios, by implementing a multispectral compression scheme based on a Karhunen-Loeve transform (KLT) followed by the classical DWT+BPE. But such results can be achieved only on perfectly registered bands; a misregistration as small as 0.5 pixel removes all the benefits of multispectral compression. In this work, we first study the possibility of implementing multi-band subpixel onboard registration based on registration grids generated on-the-fly by the satellite attitude control system and simplified resampling and interpolation techniques. Indeed, band registration is usually performed on the ground using sophisticated techniques that are too computationally intensive for onboard use. This fully quantized algorithm is tuned to meet acceptable registration performance within stringent image quality criteria, with the objective of onboard real-time processing. In a second part, we describe an FPGA implementation developed to evaluate the design complexity and, by extrapolation, the data rate achievable on a space-qualified ASIC. Finally, we present the impact of this approach on the processing chain, not only onboard but also on the ground, and the impacts on the design of the instrument.
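
    The KLT stage that provides the multispectral gain is, in essence, a principal-component rotation across bands. A small numpy sketch of that spectral decorrelation (registration, quantization and the DWT+BPE stages are out of scope here, and the toy scene is invented):

    ```python
    import numpy as np

    def klt_forward(cube):
        """Decorrelate spectral bands with a Karhunen-Loeve transform.
        cube: (bands, rows, cols). Returns the transformed cube plus the
        eigenvectors and per-band means needed for the inverse."""
        bands, rows, cols = cube.shape
        X = cube.reshape(bands, -1).astype(np.float64)
        mean = X.mean(axis=1, keepdims=True)
        Xc = X - mean
        cov = Xc @ Xc.T / Xc.shape[1]
        eigval, eigvec = np.linalg.eigh(cov)          # ascending eigenvalues
        eigvec = eigvec[:, np.argsort(eigval)[::-1]]  # sort descending
        Y = eigvec.T @ Xc
        return Y.reshape(bands, rows, cols), eigvec, mean

    def klt_inverse(Y, eigvec, mean):
        bands, rows, cols = Y.shape
        X = eigvec @ Y.reshape(bands, -1) + mean
        return X.reshape(bands, rows, cols)

    # Toy 4-band scene with strong inter-band correlation
    rng = np.random.default_rng(0)
    base = rng.random((128, 128))
    cube = np.stack([base * g + 0.02 * rng.random((128, 128)) for g in (1.0, 0.9, 0.8, 0.7)])
    Y, vecs, mu = klt_forward(cube)
    print(np.round(Y.reshape(4, -1).var(axis=1), 5))   # energy compacts into component 0
    ```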

  15. Temperature-Dependent Photoluminescence Imaging and Characterization of a Multi-Crystalline Silicon Solar Cell Defect Area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, S.; Yan, F.; Li, J.

    2011-01-01

    Photoluminescence (PL) imaging is used to detect areas in multi-crystalline silicon that appear dark in band-to-band imaging due to high recombination. Steady-state PL intensity can be correlated to effective minority-carrier lifetime, and its temperature dependence can provide additional lifetime-limiting defect information. An area of high defect density has been laser cut from a multi-crystalline silicon solar cell. Both band-to-band and defect-band PL imaging have been collected as a function of temperature from ~85 to 350 K. Band-to-band luminescence is collected by an InGaAs camera using a 1200-nm short-pass filter, while defect band luminescence is collected using a 1350-nm long-pass filter. The defect band luminescence is characterized by cathodoluminescence. Small pieces from adjacent areas within the same wafer are measured by deep-level transient spectroscopy (DLTS). DLTS detects a minority-carrier electron trap level with an activation energy of 0.45 eV on the sample that contained defects as seen by imaging.

  16. Temperature-Dependent Photoluminescence Imaging and Characterization of a Multi-Crystalline Silicon Solar Cell Defect Area: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, S.; Yan, F.; Li, J.

    2011-07-01

    Photoluminescence (PL) imaging is used to detect areas in multi-crystalline silicon that appear dark in band-to-band imaging due to high recombination. Steady-state PL intensity can be correlated to effective minority-carrier lifetime, and its temperature dependence can provide additional lifetime-limiting defect information. An area of high defect density has been laser cut from a multi-crystalline silicon solar cell. Both band-to-band and defect-band PL imaging have been collected as a function of temperature from ~85 to 350 K. Band-to-band luminescence is collected by an InGaAs camera using a 1200-nm short-pass filter, while defect band luminescence is collected using a 1350-nm long-pass filter. The defect band luminescence is characterized by cathodoluminescence. Small pieces from adjacent areas within the same wafer are measured by deep-level transient spectroscopy (DLTS). DLTS detects a minority-carrier electron trap level with an activation energy of 0.45 eV on the sample that contained defects as seen by imaging.

  17. Space Radar Image of Mississippi Delta

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a radar image of the Mississippi River Delta where the river enters the Gulf of Mexico along the coast of Louisiana. This multi-frequency image demonstrates the capability of the radar to distinguish different types of wetland surfaces in river deltas. This image was acquired by the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the space shuttle Endeavour on October 2, 1994. The image is centered at 29.3 degrees north latitude and 89.28 degrees west longitude. The area shown is approximately 63 kilometers by 43 kilometers (39 miles by 26 miles). North is towards the upper right of the image. As the river enters the Gulf of Mexico, it loses energy and dumps its load of sediment that it has carried on its journey through the mid-continent. This pile of sediment, or mud, accumulates over the years, building up the delta front. As one part of the delta becomes clogged with sediment, the delta front will migrate in search of new areas to grow. The area shown on this image is the currently active delta front of the Mississippi. The migratory nature of the delta forms natural traps for oil, and the numerous bright spots along the outside of the delta are drilling platforms. Most of the land in the image consists of mud flats and marsh lands. There is little human settlement in this area due to the instability of the sediments. The main shipping channel of the Mississippi River is the broad red stripe running northwest to southeast down the left side of the image. The bright spots within the channel are ships. The colors in the image are assigned to different frequencies and polarizations of the radar as follows: red is L-band vertically transmitted, vertically received; green is C-band vertically transmitted, vertically received; blue is X-band vertically transmitted, vertically received. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V. (DLR), the major partner in science, operations, and data processing of X-SAR.

  18. Infrared and Visible Image Fusion Based on Different Constraints in the Non-Subsampled Shearlet Transform Domain.

    PubMed

    Huang, Yan; Bi, Duyan; Wu, Dongpeng

    2018-04-11

    Many artificial parameters are involved when fusing infrared and visible images. To overcome the loss of detail in the fused image caused by artifacts, a novel fusion algorithm for infrared and visible images based on different constraints in the non-subsampled shearlet transform (NSST) domain is proposed. The images are decomposed by the NSST into high-frequency and low-frequency bands. After analyzing the characteristics of the bands, the high-frequency bands are fused under a gradient constraint, so that the fused image retains more detail, and the low-frequency bands are fused under a saliency constraint, so that the targets are more salient. Before the inverse NSST, a Nash equilibrium is used to update the coefficients. The fused images and the quantitative results demonstrate that our method is more effective in preserving details and highlighting targets than other state-of-the-art methods.

  19. Infrared and Visible Image Fusion Based on Different Constraints in the Non-Subsampled Shearlet Transform Domain

    PubMed Central

    Huang, Yan; Bi, Duyan; Wu, Dongpeng

    2018-01-01

    Many artificial parameters are involved when fusing infrared and visible images. To overcome the loss of detail in the fused image caused by artifacts, a novel fusion algorithm for infrared and visible images based on different constraints in the non-subsampled shearlet transform (NSST) domain is proposed. The images are decomposed by the NSST into high-frequency and low-frequency bands. After analyzing the characteristics of the bands, the high-frequency bands are fused under a gradient constraint, so that the fused image retains more detail, and the low-frequency bands are fused under a saliency constraint, so that the targets are more salient. Before the inverse NSST, a Nash equilibrium is used to update the coefficients. The fused images and the quantitative results demonstrate that our method is more effective in preserving details and highlighting targets than other state-of-the-art methods. PMID:29641505
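
    NSST implementations are not commonly available in Python packages, so the sketch below substitutes an ordinary discrete wavelet transform (PyWavelets) and uses deliberately simplified fusion rules, largest-magnitude detail coefficients and averaged approximation coefficients, rather than the paper's gradient and saliency constraints; it only illustrates the decompose-fuse-reconstruct structure:

    ```python
    import numpy as np
    import pywt

    def fuse_dwt(ir, vis, wavelet="db2", level=2):
        """Toy infrared/visible fusion in the DWT domain: keep the larger-
        magnitude detail coefficient (preserves edges) and average the
        approximation coefficients (preserves overall brightness)."""
        c_ir = pywt.wavedec2(ir, wavelet, level=level)
        c_vis = pywt.wavedec2(vis, wavelet, level=level)
        fused = [(c_ir[0] + c_vis[0]) / 2.0]                  # approximation band
        for d_ir, d_vis in zip(c_ir[1:], c_vis[1:]):          # detail bands
            fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                               for a, b in zip(d_ir, d_vis)))
        return pywt.waverec2(fused, wavelet)

    ir = np.random.rand(128, 128)
    vis = np.random.rand(128, 128)
    fused = fuse_dwt(ir, vis)
    print(fused.shape)
    ```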

  20. Multiband super-resolution imaging of graded-index photonic crystal flat lens

    NASA Astrophysics Data System (ADS)

    Xie, Jianlan; Wang, Junzhong; Ge, Rui; Yan, Bei; Liu, Exian; Tan, Wei; Liu, Jianjun

    2018-05-01

    Multiband super-resolution imaging of a point source is achieved by a graded-index photonic crystal flat lens. From calculations of six bands in a common photonic crystal (CPC) constructed with scatterers of different refractive indices, it is found that super-resolution imaging of a point source can be realized by different physical mechanisms in three different bands. In the first band, the imaging of the point source is based on the far-field condition of the spherical wave, while in the second band it is based on a negative effective refractive index and exhibits higher imaging quality than that of the CPC. In the fifth band, however, the imaging of the point source is mainly based on negative refraction from anisotropic equi-frequency surfaces. This method of employing different physical mechanisms to achieve multiband super-resolution imaging of a point source is highly meaningful for the field of imaging.

  1. HMM for hyperspectral spectrum representation and classification with endmember entropy vectors

    NASA Astrophysics Data System (ADS)

    Arabi, Samir Y. W.; Fernandes, David; Pizarro, Marco A.

    2015-10-01

    Hyperspectral images, owing to their good spectral resolution, are extensively used for classification, but their high number of bands requires a higher bandwidth for data transmission, higher data storage capacity and higher computational capability in processing systems. This work presents a new methodology for hyperspectral data classification that can work with a reduced number of spectral bands and achieve good results, comparable with processing methods that require all hyperspectral bands. The proposed method for hyperspectral spectrum classification is based on the Hidden Markov Model (HMM) associated with each endmember (EM) of a scene and the conditional probabilities of each EM belonging to each other EM. The EM conditional probabilities are transformed into EM entropy vectors, and those vectors are used as reference vectors for the classes in the scene. The conditional probabilities of a spectrum to be classified are also transformed into an entropy vector, which is assigned to a class by the minimum Euclidean distance (ED) between it and the EM entropy vectors. The methodology was tested, with good results, using AVIRIS spectra of a scene with 13 EMs, considering the full 209 bands and reduced sets of 128, 64 and 32 spectral bands. For the test area it is shown that only 32 spectral bands can be used instead of the original 209 bands without significant loss in the classification process.
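
    The HMM likelihood computation itself is not reproduced here. Assuming the per-endmember conditional probabilities are already available, the entropy-vector construction and minimum-Euclidean-distance assignment described above reduce to a few lines of numpy (the toy probabilities below are invented for illustration):

    ```python
    import numpy as np

    def entropy_vector(prob):
        """Turn a vector of conditional probabilities (one entry per
        endmember) into an entropy vector, elementwise -p*log(p)."""
        p = np.clip(np.asarray(prob, dtype=float), 1e-12, 1.0)
        return -p * np.log(p)

    def classify(spectrum_probs, em_reference_probs):
        """Assign the spectrum to the endmember whose reference entropy
        vector is nearest (Euclidean distance) to the spectrum's."""
        e = entropy_vector(spectrum_probs)
        refs = np.array([entropy_vector(r) for r in em_reference_probs])
        return int(np.argmin(np.linalg.norm(refs - e, axis=1)))

    # Toy example with 3 endmembers (conditional probabilities assumed to
    # come from per-endmember HMMs, which are not reproduced here)
    em_refs = [[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.15, 0.15, 0.7]]
    print(classify([0.78, 0.12, 0.10], em_refs))   # -> 0
    ```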

  2. Synthesis of Multispectral Bands from Hyperspectral Data: Validation Based on Images Acquired by AVIRIS, Hyperion, ALI, and ETM+

    NASA Technical Reports Server (NTRS)

    Blonksi, Slawomir; Gasser, Gerald; Russell, Jeffrey; Ryan, Robert; Terrie, Greg; Zanoni, Vicki

    2001-01-01

    Multispectral data requirements for Earth science applications are not always rigorously studied before a new remote sensing system is designed. A study of the spatial resolution, spectral bandpasses, and radiometric sensitivity requirements of real-world applications would focus the design on providing maximum benefits to the end-user community. To support systematic studies of multispectral data requirements, the Applications Research Toolbox (ART) has been developed at NASA's Stennis Space Center. The ART software allows users to create and assess simulated datasets while varying a wide range of system parameters. The simulations are based on data acquired by existing multispectral and hyperspectral instruments. The produced datasets can be further evaluated for specific end-user applications. Spectral synthesis of multispectral images from hyperspectral data is a key part of the ART software. In this process, hyperspectral image cubes are transformed into multispectral imagery without changes in spatial sampling and resolution. The transformation algorithm takes into account the spectral responses of both the synthesized, broad, multispectral bands and the utilized, narrow, hyperspectral bands. To validate the spectral synthesis algorithm, simulated multispectral images are compared with images collected near-coincidentally by the Landsat 7 ETM+ and the EO-1 ALI instruments. Hyperspectral images acquired with the airborne AVIRIS instrument and with the Hyperion instrument onboard the EO-1 satellite were used as input data for the presented simulations.
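
    The ART toolbox itself is not reproduced here; the following generic numpy sketch only illustrates the band-synthesis idea the abstract describes, weighting each narrow hyperspectral band by the broad band's relative spectral response sampled at its centre wavelength (the boxcar response and band centres are illustrative):

    ```python
    import numpy as np

    def synthesize_band(hyper_cube, hyper_centers_nm, srf_centers_nm, srf_values):
        """Synthesize one broad multispectral band from a hyperspectral cube
        (rows x cols x bands) by weighting each narrow band with the broad
        band's relative spectral response at that band centre."""
        weights = np.interp(hyper_centers_nm, srf_centers_nm, srf_values,
                            left=0.0, right=0.0)
        weights = weights / weights.sum()
        return np.tensordot(hyper_cube, weights, axes=([2], [0]))

    # Toy example: 20 narrow bands from 450-640 nm, synthesized into a
    # "green" band with a boxcar response between 525 and 605 nm
    cube = np.random.rand(64, 64, 20)
    centers = np.linspace(450, 640, 20)
    srf_wl = np.array([520.0, 525.0, 605.0, 610.0])
    srf = np.array([0.0, 1.0, 1.0, 0.0])
    green = synthesize_band(cube, centers, srf_wl, srf)
    print(green.shape)   # (64, 64)
    ```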

  3. Space Radar Image of Long Valley, California in 3-D

    NASA Image and Video Library

    1999-05-01

    This three-dimensional perspective view of Long Valley, California was created from data taken by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This image was constructed by overlaying a color composite SIR-C radar image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process by which radar data are acquired on different passes of the space shuttle. The two data passes are compared to obtain elevation information. The interferometry data were acquired on April 13, 1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR instrument. The color composite radar image was taken in October and was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is the large dark feature in the foreground. http://photojournal.jpl.nasa.gov/catalog/PIA01769

  4. Thematic mapper: detailed radiometric and geometric characteristics

    USGS Publications Warehouse

    Kieffer, Hugh

    1983-01-01

    Those radiometric characteristics of the Landsat 4 Thematic Mapper (TM) that can be established without absolute calibration of spectral data have been examined. Subscenes of radiometrically raw data (B-data) were examined on an individual-detector basis: areas of uniform radiance were used to characterize subtle radiometric differences and noise problems. A variety of anomalies have been discovered with magnitudes of a few digital levels or less; the only problem not addressable by ground processing is the irregular width of the digital levels. Essentially all of this non-ideal performance is incorporated in the fully processed (P-type) images, but disguised by the geometric resampling procedure. The overall performance of the Thematic Mapper is a great improvement over previous Landsat scanners. The effective resolution in radiance is degraded by about a factor of two by the irregular width of the digital levels. Several detectors have a change of gain with a period of several scans; the largest effect is about 4%. These detectors appear to switch between two response levels during scan-direction reversal; there is no apparent periodicity to these changes. This can cause small apparent differences between forward and reverse scans for portions of an image. The high-frequency noise level of each detector was characterized by the standard deviation of the first derivative in the sample direction across a flat field. Coherent sinusoidal noise patterns were determined using one-dimensional Fourier transforms. A "stitching" pattern in Band 1 has a period of 13.8 samples with a peak-to-peak amplitude ranging from 1 to 5 DN. Noise with a period of 3.24 samples is pronounced for most detectors in band 1, present to a lesser extent in bands 2, 3, and 4, and below background noise levels in bands 5, 6, and 7. The geometric fidelity of the GSFC film writer used for Thematic Mapper (TM) images was assessed by measurement, with accuracy better than three micrometers, of a test grid. A set of 55 control points with known UTM coordinates was measured on a digital display of part of band 5 of the TM image of the Washington, D.C. area, and the image was fitted to the control points. The standard error of the fit of the TM image to the control points is 37 meters, or 1.3 pixels, with no consistent distortion. These tests indicate that the geometric fidelity of TM images is likely to be higher than the ability of film recorders to reproduce the images.

  5. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    PubMed Central

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A Hyperspectral (HS) image provides observational power beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to “residual”-based approaches using a video coder, for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, because the spectral and spatial (shape) characteristics of HS images differ from those of traditional videos. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) within the latest video coding standard, High Efficiency Video Coding (HEVC), is proposed for HS images. An HS image presents a wealth of data in which every pixel is considered a vector across the spectral bands. By quantitative comparison and analysis of the pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors across bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band, together with the immediately previous band, when applying HEVC. Every spectral band of an HS image is treated as an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. Experiments on three types of HS datasets with different wavelength ranges show that the proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance for HS image compression. PMID:27695102
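
    The core of the RPM idea, predicting a spectral band from previously coded bands, can be illustrated with a small sketch. The snippet below is only a minimal stand-in for the paper's method: it fits a joint Gaussian mixture to (previous band, current band) pixel pairs and predicts the next band as the conditional mean. The function names, component count, and library choices (scikit-learn, SciPy) are assumptions, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.stats import norm
    from sklearn.mixture import GaussianMixture

    def fit_joint_gmm(prev_band, cur_band, n_components=3):
        """Fit a 2-D Gaussian mixture to (previous band, current band) pixel pairs."""
        pairs = np.column_stack([prev_band.ravel(), cur_band.ravel()]).astype(float)
        return GaussianMixture(n_components=n_components, covariance_type="full",
                               random_state=0).fit(pairs)

    def predict_band(gmm, prev_band):
        """Predict the next band as E[cur | prev] under the fitted joint mixture."""
        x = prev_band.ravel().astype(float)
        w, mu, cov = gmm.weights_, gmm.means_, gmm.covariances_
        # responsibilities of each component given only the previous-band value
        resp = np.stack([wk * norm.pdf(x, m[0], np.sqrt(c[0, 0]))
                         for wk, m, c in zip(w, mu, cov)], axis=1)
        resp /= resp.sum(axis=1, keepdims=True) + 1e-12
        # per-component conditional mean of the current band given the previous band
        cond = np.stack([m[1] + c[1, 0] / c[0, 0] * (x - m[0])
                         for m, c in zip(mu, cov)], axis=1)
        return (resp * cond).sum(axis=1).reshape(prev_band.shape)

    # usage on a hypothetical cube of shape (rows, cols, bands):
    # gmm = fit_joint_gmm(cube[..., k - 2], cube[..., k - 1])
    # reference_band = predict_band(gmm, cube[..., k - 1])   # extra reference for coding band k
    ```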

  6. Space Radar Image of Kilauea, Hawaii

    NASA Image and Video Library

    1999-01-27

    This color composite C-band and L-band image of the Kilauea volcano on the Big Island of Hawaii was acquired by NASA's Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) flying on the space shuttle Endeavour.

  7. Measurement of the Band-to-Band Registration of the SNPP VIIRS Imaging System from On-Orbit Data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Lin, Guoqing; Tan, Bin

    2016-01-01

    The Visible Infrared Imaging Radiometer Suite (VIIRS) instrument was launched 28 October 2011 onboard the Suomi National Polar-orbiting Partnership (SNPP) satellite. The VIIRS instrument is a whiskbroom system with 22 spectral and thermal bands split between 16 moderate resolution bands (M-bands), five imagery resolution bands (I-bands) and a day-night band. In this study we measure the along-scan and along-track band-to-band registration between the I-bands and M-bands from on-orbit data. This measurement is performed by computing the Normalized Mutual Information (NMI) between shifted image band pairs and finding the amount of shift required (if any) to produce the peak in NMI value. Subpixel accuracy is obtained by utilizing bicubic interpolation. Registration shifts are found to be similar to pre-launch measurements and stable (within measurement error) over the instrument's first four years in orbit.
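
    The NMI-based shift search described above can be sketched in a few lines. This is only an illustrative integer-pixel version (the study refines shifts to sub-pixel accuracy with bicubic interpolation); the histogram bin count, the search range, and the use of a wrap-around shift are assumptions.

    ```python
    import numpy as np

    def normalized_mutual_information(a, b, bins=64):
        """NMI = (H(A) + H(B)) / H(A, B), estimated from a joint histogram."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))
        return (h(px) + h(py)) / h(pxy)

    def best_along_scan_shift(ref_band, test_band, max_shift=3):
        """Find the along-scan (column) shift that maximizes NMI between two bands."""
        scores = {s: normalized_mutual_information(ref_band, np.roll(test_band, s, axis=1))
                  for s in range(-max_shift, max_shift + 1)}
        return max(scores, key=scores.get), scores
    ```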

  8. A Framework of Hyperspectral Image Compression using Neural Networks

    DOE PAGES

    Masalmah, Yahya M.; Martínez Nieves, Christian; Rivera Soto, Rafael; ...

    2015-01-01

    Hyperspectral image analysis has gained great attention due to its wide range of applications. Hyperspectral images provide a vast amount of information about underlying objects in an image by using a large range of the electromagnetic spectrum for each pixel. However, since the same image is taken multiple times using distinct electromagnetic bands, the size of such images tends to be significant, which leads to greater processing requirements. The aim of this paper is to present a proposed framework for image compression and to study the possible effects of spatial compression on the quality of unmixing results. Image compression allows us to reduce the dimensionality of an image while still preserving most of the original information, which could lead to faster image processing. Lastly, this paper presents preliminary results of different training techniques used in an Artificial Neural Network (ANN) based compression algorithm.
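
    As one simple instance of ANN-based compression (not the specific framework or training techniques studied in the paper), a per-pixel spectral autoencoder can squeeze each pixel's spectrum through a small bottleneck and reconstruct it. The layer sizes, optimizer, and the use of Keras are assumptions for illustration.

    ```python
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    def build_spectral_autoencoder(n_bands, code_size=8):
        """Single-hidden-layer autoencoder: compress each spectrum to `code_size` values."""
        inp = keras.Input(shape=(n_bands,))
        code = layers.Dense(code_size, activation="relu")(inp)
        out = layers.Dense(n_bands, activation="linear")(code)
        autoencoder = keras.Model(inp, out)
        encoder = keras.Model(inp, code)
        autoencoder.compile(optimizer="adam", loss="mse")
        return autoencoder, encoder

    # training on pixel spectra from a hypothetical cube of shape (rows, cols, n_bands):
    # spectra = cube.reshape(-1, cube.shape[-1]).astype("float32")
    # autoencoder, encoder = build_spectral_autoencoder(spectra.shape[1])
    # autoencoder.fit(spectra, spectra, epochs=20, batch_size=1024)
    # codes = encoder.predict(spectra)   # the compressed representation
    ```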

  9. Defocusing effects of lensless ghost imaging and ghost diffraction with partially coherent sources

    NASA Astrophysics Data System (ADS)

    Zhou, Shuang-Xi; Sheng, Wei; Bi, Yu-Bo; Luo, Chun-Ling

    2018-04-01

    The defocusing effect is inevitable and degrades the image quality in the conventional optical imaging process significantly due to the close confinement of the imaging lens. Based on classical optical coherent theory and linear algebra, we develop a unified formula to describe the defocusing effects of both lensless ghost imaging (LGI) and lensless ghost diffraction (LGD) systems with a partially coherent source. Numerical examples are given to illustrate the influence of defocusing length on the quality of LGI and LGD. We find that the defocusing effects of the test and reference paths in the LGI or LGD systems are entirely different, while the LGD system is more robust against defocusing than the LGI system. Specifically, we find that the imaging process for LGD systems can be viewed as pinhole imaging, which may find applications in ultra-short-wave band imaging without imaging lenses, e.g. x-ray diffraction and γ-ray imaging.

  10. Space Radar Image of Mammoth Mountain, California

    NASA Image and Video Library

    1999-05-01

    These two false-color composite images of the Mammoth Mountain area in the Sierra Nevada Mountains, Calif., show significant seasonal changes in snow cover. The image at left was acquired by the Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar aboard the space shuttle Endeavour on its 67th orbit on April 13, 1994. The image is centered at 37.6 degrees north latitude and 119 degrees west longitude. The area is about 36 kilometers by 48 kilometers (22 miles by 29 miles). In this image, red is L-band (horizontally transmitted and vertically received) polarization data; green is C-band (horizontally transmitted and vertically received) polarization data; and blue is C-band (horizontally transmitted and received) polarization data. The image at right was acquired on October 3, 1994, on the space shuttle Endeavour's 67th orbit of the second radar mission. Crowley Lake appears dark at the center left of the image, just above or south of Long Valley. The Mammoth Mountain ski area is visible at the top right of the scene. The red areas correspond to forests, the dark blue areas are bare surfaces and the green areas are short vegetation, mainly brush. The changes in color tone at the higher elevations (e.g. the Mammoth Mountain ski area) from green-blue in April to purple in September reflect changes in snow cover between the two missions. The April mission occurred immediately following a moderate snow storm. During the mission the snow evolved from a dry, fine-grained snowpack with few distinct layers to a wet, coarse-grained pack with multiple ice inclusions. Since that mission, all snow in the area has melted except for small glaciers and permanent snowfields on the Silver Divide and near the headwaters of Rock Creek. On October 3, 1994, only discontinuous patches of snow cover were present at very high elevations following the first snow storm of the season on September 28, 1994. For investigations in hydrology and land-surface climatology, seasonal snow cover and alpine glaciers are critical to the radiation and water balances. SIR-C/X-SAR is a powerful tool because it is sensitive to most snowpack conditions and is less influenced by weather conditions than other remote sensing instruments, such as Landsat. In parallel with the operational SIR-C data processing, an experimental effort is being conducted to test SAR data processing using the Jet Propulsion Laboratory's massively parallel supercomputing facility, centered around the Cray Research T3D. These experiments will assess the abilities of large supercomputers to produce high throughput SAR processing in preparation for upcoming data-intensive SAR missions. The images released here were produced as part of this experimental effort. http://photojournal.jpl.nasa.gov/catalog/PIA01753

  11. Multispectral Digital Image Analysis of Varved Sediments in Thin Sections

    NASA Astrophysics Data System (ADS)

    Jäger, K.; Rein, B.; Dietrich, S.

    2006-12-01

    An update of the recently developed method COMPONENTS (Rein, 2003; Rein & Jäger, subm.) for the discrimination of sediment components in thin sections is presented here. COMPONENTS uses a 6-band (multispectral) image analysis. To derive six-band spectral information on the sediments, thin sections are scanned with a digital camera mounted on a polarizing microscope. The thin sections are scanned twice, under polarized and under unpolarized plain light. During each run RGB images are acquired, which are subsequently stacked into a six-band file. The first three bands (Blue=1, Green=2, Red=3) result from the spectral behaviour in the blue, green and red bands under unpolarized light conditions, and bands 4 to 6 (Blue=4, Green=5, Red=6) from the polarized light run. The next step is the discrimination of the sediment components by their transmission behaviour. Automatic classification algorithms broadly used in remote sensing applications cannot be used, due to unavoidable variations of sediment particle or thin section thickness that change the absolute grey values of the sediment components. Thus, we use an approach based on band ratios, also known as indices. By using band ratios, the grey values measured in different bands are normalized against each other and illumination variations (e.g. thickness variations) are eliminated. By combining specific ratios we are able to detect all seven major components in the investigated sediments (carbonates, diatoms, fine clastic material, plant remains, pyrite, quartz and resin). Then, the classification results (compositional maps) are validated. Although the automatic classification and the analogous classification show high concordance, some systematic errors could be identified. For example, the transition zone between the sediment and resin-filled cracks is classified as fine clastic material, and very coarse carbonates are partly classified as quartz because coarse carbonates can be very bright and their spectra are partly saturated (grey value 255). With reduced illumination intensity, "carbonate image pixels" become unsaturated and can be well distinguished from quartz grains. During the evaluation process we identify all falsely classified areas using neighbourhood matrices and reclassify them. Finally, we use filter techniques to derive downcore component frequencies from the classified thin section images for virtual samples of variable thickness. The filter conducts neighbourhood analyses. After filtering, each pixel of the filtered images carries information about the frequency of any given component in a defined neighbourhood around it (virtual sampling). References: Rein, B. (2003) In-situ Reflektionsspektroskopie und digitale Bildanalyse - Gewinnung hochauflösender Paläoumweltdaten mit fernerkundlichen Methoden, Habilitation Thesis, Univ. Mainz, 104 p. Jäger, K. and Rein, B. (2005): Identifying varve components using digital image analysis techniques. - in: Heidi Haas, Karl Ramseyer & Fritz Schlunegger (eds.): Sediment 2005 (18th-20th July 2005), Schriftenreihe der Deutschen Gesellschaft für Geowissenschaften, 38, p. 81. Rein, B. and Jäger, K. (subm.) COMPONENTS - Sediment component detection in thin sections by multispectral digital image analysis. Sedimentology.
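
    The band-ratio (index) idea used by COMPONENTS can be sketched as follows. The band pairs and thresholds below are placeholders, since the actual ratios used for each sediment component are not given in the abstract.

    ```python
    import numpy as np

    def band_ratio(stack, num, den, eps=1e-6):
        """Ratio of two bands of a 6-band stack (rows x cols x 6); dividing bands cancels
        multiplicative illumination/thickness variations, leaving compositional contrast."""
        return stack[..., num].astype(float) / (stack[..., den].astype(float) + eps)

    def component_mask(stack, ratio_a=(0, 3), ratio_b=(2, 5), t_a=1.2, t_b=0.8):
        """Hypothetical rule combining two band ratios to flag one component class."""
        return (band_ratio(stack, *ratio_a) > t_a) & (band_ratio(stack, *ratio_b) < t_b)
    ```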

  12. AVIRIS study of Death Valley evaporite deposits using least-squares band-fitting methods

    NASA Technical Reports Server (NTRS)

    Crowley, J. K.; Clark, R. N.

    1992-01-01

    Minerals found in playa evaporite deposits reflect the chemically diverse origins of ground waters in arid regions. Recently, it was discovered that many playa minerals exhibit diagnostic visible and near-infrared (0.4-2.5 micron) absorption bands that provide a remote sensing basis for observing important compositional details of desert ground water systems. The study of such systems is relevant to understanding solute acquisition, transport, and fractionation processes that are active in the subsurface. Observations of playa evaporites may also be useful for monitoring the hydrologic response of desert basins to changing climatic conditions on regional and global scales. Ongoing work using Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data to map evaporite minerals in the Death Valley salt pan is described. The AVIRIS data point to differences in inflow water chemistry in different parts of the Death Valley playa system and have led to the discovery of at least two new North American mineral occurrences. Seven segments of AVIRIS data were acquired over Death Valley on 31 July 1990, and were calibrated to reflectance by using the spectrum of a uniform area of alluvium near the salt pan. The calibrated data were subsequently analyzed by using least-squares spectral band-fitting methods, first described by Clark and others. In the band-fitting procedure, AVIRIS spectra are fit, over selected wavelength intervals, to a series of library reference spectra. Output images showing the degree of fit, band depth, and fit times band depth are generated for each reference spectrum. The reference spectra used in the study included laboratory data for 35 pure evaporite minerals as well as spectra extracted from the AVIRIS image cube. Additional details of the band-fitting technique are provided by Clark and others elsewhere in this volume.
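
    A minimal sketch of least-squares band fitting in the spirit of the Clark et al. approach is given below: both spectra are continuum-removed over a chosen wavelength interval, the reference absorption is scaled to the observation by least squares, and a fit (correlation), band depth, and fit times depth value are returned. Details such as the continuum model and weighting are simplifying assumptions.

    ```python
    import numpy as np

    def band_fit(obs, ref, idx):
        """Fit a library reference spectrum to an observed spectrum over the channels idx."""
        o, r = obs[idx].astype(float), ref[idx].astype(float)

        def continuum_removed(y):
            # straight-line continuum anchored at the interval endpoints
            return y / np.linspace(y[0], y[-1], y.size)

        oc, rc = continuum_removed(o), continuum_removed(r)
        # least-squares scaling of the reference absorption feature to the observation
        a = np.dot(rc - 1.0, oc - 1.0) / np.dot(rc - 1.0, rc - 1.0)
        fit = np.corrcoef(oc, a * (rc - 1.0) + 1.0)[0, 1]   # goodness of fit
        depth = 1.0 - oc.min()                              # band depth of the observation
        return fit, depth, fit * depth
    ```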

  13. Digital techniques for processing Landsat imagery

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1978-01-01

    An overview is presented of the basic techniques used to process Landsat images with a digital computer, and of the VICAR image processing software developed at JPL and available to users through the NASA-sponsored COSMIC computer program distribution center. Examples are given of subjective processing performed to improve the information display for the human observer, such as contrast enhancement, pseudocolor display and band ratioing, and of quantitative processing using mathematical models, such as classification based on multispectral signatures of different areas within a given scene and geometric transformation of imagery into standard mapping projections. The examples are illustrated by Landsat scenes of the Andes mountains and the Altyn-Tagh fault zone in China before and after contrast enhancement, and by classification of land use in Portland, Oregon. The VICAR image processing software system, which consists of a language translator that simplifies execution of image processing programs and provides a general-purpose format so that imagery from a variety of sources can be processed by the same basic set of general applications programs, is described.

  14. Fast isotropic banding-free bSSFP imaging using 3D dynamically phase-cycled radial bSSFP (3D DYPR-SSFP).

    PubMed

    Benkert, Thomas; Ehses, Philipp; Blaimer, Martin; Jakob, Peter M; Breuer, Felix A

    2016-03-01

    Dynamically phase-cycled radial balanced steady-state free precession (DYPR-SSFP) is a method for efficient banding artifact removal in bSSFP imaging. Based on a varying radiofrequency (RF) phase-increment in combination with a radial trajectory, DYPR-SSFP allows obtaining a banding-free image out of a single acquired k-space. The purpose of this work is to present an extension of this technique, enabling fast three-dimensional isotropic banding-free bSSFP imaging. While banding artifact removal with DYPR-SSFP relies on the applied dynamic phase-cycle, this aspect can lead to artifacts, at least when the number of acquired projections lies below a certain limit. However, by using a 3D radial trajectory with quasi-random view ordering for image acquisition, this problem is intrinsically solved, enabling 3D DYPR-SSFP imaging at or even below the Nyquist criterion. The approach is validated for brain and knee imaging at 3 Tesla. Volumetric, banding-free images were obtained in clinically acceptable scan times with an isotropic resolution up to 0.56mm. The combination of DYPR-SSFP with a 3D radial trajectory allows banding-free isotropic volumetric bSSFP imaging with no expense of scan time. Therefore, this is a promising candidate for clinical applications such as imaging of cranial nerves or articular cartilage. Copyright © 2015. Published by Elsevier GmbH.

  15. Enhancement of TIMS images for photointerpretation

    NASA Technical Reports Server (NTRS)

    Gillespie, A. R.

    1986-01-01

    The Thermal Infrared Multispectral Scanner (TIMS) images consist of six channels of data acquired in bands between 8 and 12 microns, thus they contain information about both temperature and emittance. Scene temperatures are controlled by reflectivity of the surface, but also by its geometry with respect to the Sun, time of day, and other factors unrelated to composition. Emittance is dependent upon composition alone. Thus the photointerpreter may wish to enhance emittance information selectively. Because thermal emittances in real scenes vary but little, image data tend to be highly correlated along channels. Special image processing is required to make this information available for the photointerpreter. Processing includes noise removal, construction of model emittance images, and construction of false-color pictures enhanced by decorrelation techniques.
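
    The decorrelation technique mentioned above can be illustrated with a standard decorrelation stretch: rotate the bands to principal components, equalize the component variances, and rotate back so that subtle emittance differences are exaggerated while the original band relationships are roughly preserved. This is a generic sketch, not the specific TIMS processing chain.

    ```python
    import numpy as np

    def decorrelation_stretch(cube):
        """Decorrelation stretch of a multi-band image (rows x cols x bands)."""
        h, w, nb = cube.shape
        X = cube.reshape(-1, nb).astype(float)
        mean = X.mean(axis=0)
        Xc = X - mean
        cov = np.cov(Xc, rowvar=False)
        evals, evecs = np.linalg.eigh(cov)
        # whiten along the principal axes, then restore the original per-band contrast
        whiten = evecs @ np.diag(1.0 / np.sqrt(evals + 1e-12)) @ evecs.T
        stretch = np.diag(np.sqrt(np.diag(cov))) @ whiten
        return (Xc @ stretch.T + mean).reshape(h, w, nb)
    ```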

  16. Mapping Mangrove Density from RapidEye Data in Central America

    NASA Astrophysics Data System (ADS)

    Son, Nguyen-Thanh; Chen, Chi-Farn; Chen, Cheng-Ru

    2017-06-01

    Mangrove forests provide a wide range of socioeconomic and ecological services for coastal communities. Extensive aquaculture development of mangrove waters in many developing countries has constantly ignored the services of mangrove ecosystems, leading to unintended environmental consequences. Monitoring the current status and distribution of mangrove forests is deemed important for evaluating forest management strategies. This study aims to delineate the density distribution of mangrove forests in the Gulf of Fonseca, Central America with RapidEye data using support vector machines (SVM). The data collected in 2012 for density classification of mangrove forests were processed based on four different band combination schemes: scheme-1 (bands 1-3, 5, excluding the red-edge band 4), scheme-2 (bands 1-5), scheme-3 (bands 1-3, 5, incorporating the normalized difference vegetation index, NDVI), and scheme-4 (bands 1-3, 5, incorporating the normalized difference red-edge index, NDRI). We also tested the hypothesis that the RapidEye red-edge band improves the classification results. Three main steps of data processing were employed: (1) data pre-processing, (2) image classification, and (3) accuracy assessment, to evaluate the contribution of the red-edge band to classification accuracy across the four schemes. Comparison of the classification maps with the ground reference data indicated slightly higher accuracy for schemes 2 and 4. The overall accuracies and Kappa coefficients were 97% and 0.95 for scheme-2 and 96.9% and 0.95 for scheme-4, respectively.
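
    A sketch of the scheme-4-style feature construction and SVM training follows. The band order, the index formula, and the SVM hyper-parameters are assumptions for illustration; the study's exact pre-processing and training setup are not reproduced here.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    # Assumed RapidEye band order (0-indexed): blue, green, red, red-edge, NIR.
    def scheme4_features(cube, eps=1e-6):
        blue, green, red, red_edge, nir = (cube[..., i].astype(float) for i in range(5))
        ndri = (nir - red_edge) / (nir + red_edge + eps)   # normalized difference red-edge index
        return np.dstack([blue, green, red, nir, ndri])    # bands 1-3, 5 plus NDRI

    def train_density_classifier(pixel_features, density_labels):
        """Train an RBF SVM on labelled pixel feature rows (N x 5)."""
        clf = SVC(kernel="rbf", C=10.0, gamma="scale")     # illustrative hyper-parameters
        return clf.fit(pixel_features, density_labels)
    ```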

  17. Space Radar Image of Kliuchevskoi Volcano, Russia

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is an image of the area of Kliuchevskoi volcano, Kamchatka, Russia, which began to erupt on September 30, 1994. Kliuchevskoi is the blue triangular peak in the center of the image, towards the left edge of the bright red area that delineates bare snow cover. The image was acquired by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the space shuttle Endeavour on its 88th orbit on October 5, 1994. The image shows an area approximately 75 kilometers by 100 kilometers (46 miles by 62 miles) that is centered at 56.07 degrees north latitude and 160.84 degrees east longitude. North is toward the bottom of the image. The radar illumination is from the top of the image. The Kamchatka volcanoes are among the most active volcanoes in the world. The volcanic zone sits above a tectonic plate boundary, where the Pacific plate is sinking beneath the northeast edge of the Eurasian plate. The Endeavour crew obtained dramatic video and photographic images of this region during the eruption, which will assist scientists in analyzing the dynamics of the recent activity. The colors in this image were obtained using the following radar channels: red represents the L-band (horizontally transmitted and received); green represents the L-band (horizontally transmitted and vertically received); blue represents the C-band (horizontally transmitted and vertically received). In addition to Kliuchevskoi, two other active volcanoes are visible in the image. Bezymianny, the circular crater above and to the right of Kliuchevskoi, contains a slowly growing lava dome. Tolbachik is the large volcano with a dark summit crater near the upper right edge of the red snow covered area. The Kamchatka River runs from right to left across the bottom of the image. The current eruption of Kliuchevskoi included massive ejections of gas, vapor and ash, which reached altitudes of 15,000 meters (50,000 feet). Melting snow mixed with volcanic ash triggered mud flows on the flanks of the volcano. Paths of these flows can be seen as thin lines in various shades of blue and green on the north flank in the center of the image. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.v.(DLR), the major partner in science, operations and data processing of X-SAR.

  18. A preliminary evaluation of LANDSAT-4 thematic mapper data for their geometric and radiometric accuracies

    NASA Technical Reports Server (NTRS)

    Podwysocki, M. H.; Bender, L. U.; Falcone, N.; Jones, O. D.

    1983-01-01

    Some LANDSAT thematic mapper data collected over the eastern United States were analyzed for their whole-scene geometric accuracy, band-to-band registration and radiometric accuracy. Band ratio images were created for a part of one scene in order to assess the capability of mapping geologic units with contrasting spectral properties. Systematic errors were found in the geometric accuracy of whole scenes, part of which was attributable to the film writing device used to record the images to film. Band-to-band registration showed that bands 1 through 4 were registered to within one pixel. Likewise, bands 5 and 7 also were registered to within one pixel. However, bands 5 and 7 were misregistered with bands 1 through 4 by 1 to 2 pixels. Band 6 was misregistered by 4 pixels relative to bands 1 through 4. Radiometric analysis indicated two kinds of banding: a modulo-16 striping and an alternating light-dark pattern in groups of 16 scanlines. A color ratio composite image consisting of TM band ratios 3/4, 5/2, and 5/7 showed limonitic clay-rich soils, limonitic clay-poor soils, and nonlimonitic materials as distinctly different colors on the image.

  19. 3.3 and 11.3 micron images of HD 44179 - Evidence for an optically thick polycyclic aromatic hydrocarbon disk

    NASA Technical Reports Server (NTRS)

    Bregman, Jesse D.; Rank, David; Temi, Pasquale; Hudgins, Doug; Kay, Laura

    1993-01-01

    Images of HD 44179 (the Red Rectangle) obtained in the 3.3 and 11.3 micron emission bands show two different spatial distributions. The 3.3 micron band image is centrally peaked and slightly extended N-S while the 11.3 micron image shows a N-S bipolar shape with no central peak. If the 3.3 micron band image shows the intrinsic emission of the 11.3 micron band, then the data suggest absorption of the 11.3 micron emission near the center of HD 44179 by a disk with an optical depth of about one, making HD 44179 the first object in which the IR emission bands have been observed to be optically thick. Since there is no evidence of absorption of the 3.3 micron emission band by the disk, the absorption cross section of the 3.3 micron band must be substantially less than for the 11.3 micron band. Since the 3.3 and 11.3 micron bands are thought to arise from different size PAHs, the similar N-S extents of the two images implies that the ratio of small to large PAHs does not change substantially with distance from the center.

  20. Convolution kernels for multi-wavelength imaging

    NASA Astrophysics Data System (ADS)

    Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.

    2016-12-01

    Astrophysical images issued from different instruments and/or spectral bands often need to be processed together, either for fitting or for comparison purposes. However, each image is affected by an instrumental response, also known as the point-spread function (PSF), that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains, up to two orders of magnitude, are obtained with respect to the use of kernels computed assuming Gaussian or circularised PSFs. Software to compute these kernels is available at https://github.com/aboucaud/pypher
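
    The Wiener-filter construction of a PSF-matching (homogenisation) kernel can be sketched as below, in the spirit of the released pypher tool; the regularisation form and normalisation are simplifying assumptions, and both PSFs are assumed to be centred and sampled on the same grid.

    ```python
    import numpy as np

    def psf_matching_kernel(psf_source, psf_target, reg=1e-4):
        """Kernel k such that psf_source convolved with k approximates psf_target."""
        S = np.fft.fft2(np.fft.ifftshift(psf_source))
        T = np.fft.fft2(np.fft.ifftshift(psf_target))
        # Wiener-like filter: damp frequencies where the source PSF carries little power
        K = T * np.conj(S) / (np.abs(S) ** 2 + reg * np.abs(S).max() ** 2)
        kernel = np.real(np.fft.fftshift(np.fft.ifft2(K)))
        return kernel / kernel.sum()
    ```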

  1. Recent CCD Images of Hubble's Variable Nebula (NGC 2261)

    NASA Astrophysics Data System (ADS)

    Meisel, D.; Dykstra, W.; Schulitz, F.

    1992-05-01

    Four CCD exposures of Hubble's Variable Nebula were taken with the RIT Kodak KF-4200 array using the 0.6-m Hawaii telescope at Mauna Kea. The field of view was 5' x 7' of arc with a resolution of 0.3" per pixel. The effective wavelengths were v 5300 A, r 6200 A, i 8000 A and ii 9000 A. Preliminary image processing has been done on Macintosh IIfxs and LCs using NIH Image 1.44b20 and has revealed considerable detail in the dust cloud, but no obvious obscuration features as seen at past epochs. Final image processing is continuing on Sun workstations using IRAF and SAO Image. Differences of structure between the wavelength bands and comparisons between images at other epochs, CO maps, and polarimetry will be discussed.

  2. Modelling high arctic percent vegetation cover using field digital images and high resolution satellite data

    NASA Astrophysics Data System (ADS)

    Liu, Nanfeng; Treitz, Paul

    2016-10-01

    In this study, digital images collected at a study site in the Canadian High Arctic were processed and classified to examine the spatial-temporal patterns of percent vegetation cover (PVC). To obtain the PVC of different plant functional groups (i.e., forbs, graminoids/sedges and mosses), field near-infrared-green-blue (NGB) digital images were classified using an object-based image analysis (OBIA) approach. The PVC analyses comparing different vegetation types confirmed: (i) the polar semi-desert exhibited the lowest PVC with a large proportion of bare soil/rock cover; (ii) the mesic tundra cover consisted of approximately 60% mosses; and (iii) the wet sedge consisted almost exclusively of graminoids and sedges. As expected, the PVC and the green normalized difference vegetation index (GNDVI; (R_NIR - R_Green)/(R_NIR + R_Green)), derived from the field NGB digital images, increased during the summer growing season for each vegetation type: ∼5% (0.01) for polar semi-desert, ∼10% (0.04) for mesic tundra, and ∼12% (0.03) for wet sedge, respectively. PVC derived from the field images was found to be strongly correlated with WorldView-2 derived normalized difference spectral indices (NDSI; (R_x - R_y)/(R_x + R_y)), where R_x is the reflectance of the red-edge (724.1 nm) or near-infrared (832.9 nm and 949.3 nm) bands and R_y is the reflectance of the yellow (607.7 nm) or red (658.8 nm) bands, with R² values ranging from 0.74 to 0.81. NDSIs that incorporated the yellow band (607.7 nm) performed slightly better than the NDSIs without it, indicating that this band may be more useful for investigating Arctic vegetation that often includes large proportions of senescent vegetation throughout the growing season.

  3. Reconfigurable mask for adaptive coded aperture imaging (ACAI) based on an addressable MOEMS microshutter array

    NASA Astrophysics Data System (ADS)

    McNie, Mark E.; Combes, David J.; Smith, Gilbert W.; Price, Nicola; Ridley, Kevin D.; Brunson, Kevin M.; Lewis, Keith L.; Slinger, Chris W.; Rogers, Stanley

    2007-09-01

    Coded aperture imaging has been used for astronomical applications for several years. Typical implementations use a fixed mask pattern and are designed to operate in the X-ray or gamma-ray bands. More recent applications have emerged in the visible and infrared bands for low-cost lens-less imaging systems. System studies have shown that considerable advantages in image resolution may accrue from the use of multiple different images of the same scene - requiring a reconfigurable mask. We report on work to develop a novel, reconfigurable mask based on micro-opto-electro-mechanical systems (MOEMS) technology employing interference effects to modulate incident light in the mid-IR band (3-5 μm). This is achieved by tuning a large array of asymmetric Fabry-Perot cavities by applying an electrostatic force to adjust the gap between a moveable upper polysilicon mirror plate supported on suspensions and underlying fixed (electrode) layers on a silicon substrate. A key advantage of the modulator technology developed is that it is transmissive and high speed (e.g. 100 kHz) - allowing simpler imaging system configurations. It is also realised using a modified standard polysilicon surface micromachining process (i.e. MUMPS-like) that is widely available and hence should have a low production cost in volume. We have developed designs capable of operating across the entire mid-IR band with peak transmissions approaching 100% and high contrast. By using a pixelated array of small mirrors, a large-area device comprising individually addressable elements may be realised that allows reconfiguring of the whole mask at speeds in excess of video frame rates.

  4. Data Processing of LAPAN-A3 Thermal Imager

    NASA Astrophysics Data System (ADS)

    Hartono, R.; Hakim, P. R.; Syafrudin, AH

    2018-04-01

    As an experimental microsatellite, the LAPAN-A3/IPB satellite carries an experimental thermal imager, called a micro-bolometer, to observe earth surface temperature and for horizon observation. The imager data are transmitted from the satellite to the ground station by S-band analog video transmission, and then processed by the ground station into a sequence of 8-bit contrast-enhanced images. Data processing for the LAPAN-A3/IPB thermal imager is more difficult than for a visual digital camera, especially for mosaicking and classification purposes. This research aims to describe a simple mosaicking and classification process for the LAPAN-A3/IPB thermal imager based on several videos produced by the imager. The results show that stitching using Adobe Photoshop produces excellent results but can only process a small area, while a manual approach using the ImageJ software can produce good results but requires a lot of work and is time consuming. The mosaicking process using image cross-correlation in Matlab offers an alternative solution, which can process a significantly larger area in a significantly shorter processing time. However, the quality produced is not as good as the mosaic images of the other two methods. The simple classification process that has been carried out shows that the thermal image can distinguish three distinct objects, i.e. clouds, sea, and land surface. However, the algorithm fails to classify other objects, which might be caused by distortions in the images. All of these results can be used as a reference for the development of the thermal imager on the LAPAN-A4 satellite.
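
    The cross-correlation mosaicking step can be illustrated with a phase-correlation offset estimate between two overlapping frames; this is a generic sketch rather than the Matlab routine used in the paper, and it assumes frames of equal size related by a dominant translational offset.

    ```python
    import numpy as np

    def phase_correlation_offset(frame_a, frame_b):
        """Estimate the integer (row, col) translation of frame_b relative to frame_a."""
        Fa, Fb = np.fft.fft2(frame_a), np.fft.fft2(frame_b)
        cross = Fa * np.conj(Fb)
        corr = np.real(np.fft.ifft2(cross / (np.abs(cross) + 1e-12)))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # map wrap-around peaks to negative shifts
        if dy > frame_a.shape[0] // 2:
            dy -= frame_a.shape[0]
        if dx > frame_a.shape[1] // 2:
            dx -= frame_a.shape[1]
        return dy, dx
    ```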

  5. Imaging Study of Multi-Crystalline Silicon Wafers Throughout the Manufacturing Process: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, S.; Yan, F.; Zaunbracher, K.

    2011-07-01

    Imaging techniques are applied to multi-crystalline silicon bricks, wafers at various process steps, and finished solar cells. Photoluminescence (PL) imaging is used to characterize defects and material quality on bricks and wafers. Defect regions within the wafers are influenced by brick position within an ingot and height within the brick. The defect areas in as-cut wafers are compared to imaging results from reverse-bias electroluminescence and dark lock-in thermography and cell parameters of near-neighbor finished cells. Defect areas are also characterized by defect band emissions. The defect areas measured by these techniques on as-cut wafers are shown to correlate to finished cell performance.

  6. Effects of spatial frequency bands on perceptual decision: it is not the stimuli but the comparison.

    PubMed

    Rotshtein, Pia; Schofield, Andrew; Funes, María J; Humphreys, Glyn W

    2010-08-24

    Observers performed three between- and two within-category perceptual decisions with hybrid stimuli comprising low and high spatial frequency (SF) images. We manipulated (a) attention to, and (b) congruency of information in the two SF bands. Processing difficulty of the different SF bands varied across different categorization tasks: house-flower, face-house, and valence decisions were easier when based on high SF bands, while flower-face and gender categorizations were easier when based on low SF bands. Larger interference also arose from response relevant distracters that were presented in the "preferred" SF range of the task. Low SF effects were facilitated by short exposure durations. The results demonstrate that decisions are affected by an interaction of task and SF range and that the information from the non-attended SF range interfered at the decision level. A further analysis revealed that overall differences in the statistics of image features, in particular differences of orientation information between two categories, were associated with decision difficulty. We concluded that the advantage of using information from one SF range over another depends on the specific task requirements that built on the differences of the statistical properties between the compared categories.
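
    The hybrid stimuli described above combine the low-SF content of one image with the high-SF content of another. A minimal construction is sketched below; the Gaussian cutoff sigma is an assumed parameter, not the value used in the experiments.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def hybrid_image(low_sf_source, high_sf_source, sigma=6.0):
        """Low-pass one image, high-pass the other, and sum them into a hybrid stimulus."""
        low = gaussian_filter(low_sf_source.astype(float), sigma)
        high = high_sf_source.astype(float) - gaussian_filter(high_sf_source.astype(float), sigma)
        return low + high
    ```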

  7. Wide Field Collimator 2 (WFC2) for GOES Imager and Sounder

    NASA Technical Reports Server (NTRS)

    Etemad, Shahriar; Bremer, James C.; Zukowski, Barbara J.; Pasquale, Bert A.; zukowski, Tmitri J.; Prince, Robert E.; O'Neill, Patrick A.; Ross, Robert W.

    2004-01-01

    Two of the GOES instruments, the Imager and the Sounder, perform scans of the Earth to provide a full disc picture of the Earth. To verify the entire scan process, an image of a target that covers an 18 deg. circular field-of-view is collimated and projected into the field of regard of each instrument. The Wide Field Collimator 2 (WFC2) has many advantages over its predecessor, WFC1, including lower thermal dissipation, higher far-field MTF, a smaller package, and a more intuitive (faster) focusing process. The illumination source is an LED array that emits in a narrow spectral band centered at 689 nm, within the visible spectral bands of the Imager and Sounder. The illumination level can be continuously adjusted electronically. Lower thermal dissipation eliminates the need for forced convection cooling and minimizes the time to reach thermal stability. The lens system has been optimized for the illumination source's spectral output and athermalized to remain in focus during bulk temperature changes within the laboratory environment. The MTF of the lens is higher than that of the WFC1 at the edge of the FOV. The target is focused in three orthogonal motions, controlled by an ergonomic system that saves substantial time and produces a sharper focus. Key words: Collimator, GOES, Imager, Sounder, Projector

  8. Comparison of spatial variability in visible and near-infrared spectral images

    USGS Publications Warehouse

    Chavez, P.S.

    1992-01-01

    The visible and near-infrared bands of the Landsat Thematic Mapper (TM) and the Satellite Pour l'Observation de la Terre (SPOT) were analyzed to determine which band contained more spatial variability. It is important for applications that require spatial information, such as those dealing with mapping linear features and automatic image-to-image correlation, to know which spectral band image should be used. Statistical and visual analyses were used in the project. The amount of variance in an 11 by 11 pixel spatial filter and in the first difference at the six spacings of 1, 5, 11, 23, 47, and 95 pixels was computed for the visible and near-infrared bands. The results indicate that the near-infrared band has more spatial variability than the visible band, especially in images covering densely vegetated areas. -Author
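
    The two variability measures used in the comparison, variance within an 11 by 11 pixel window and the first difference at several pixel spacings, can be computed as follows; this is an illustrative sketch, not the original analysis code.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_variance(band, size=11):
        """Variance inside a size x size moving window: E[x^2] - (E[x])^2."""
        b = band.astype(float)
        return uniform_filter(b ** 2, size) - uniform_filter(b, size) ** 2

    def first_difference_variance(band, lags=(1, 5, 11, 23, 47, 95)):
        """Variance of the first difference in the sample (column) direction at each lag."""
        b = band.astype(float)
        return {lag: np.var(b[:, lag:] - b[:, :-lag]) for lag in lags}
    ```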

  9. SP mountain data analysis

    NASA Technical Reports Server (NTRS)

    Rawson, R. F.; Hamilton, R. E.; Liskow, C. L.; Dias, A. R.; Jackson, P. L.

    1981-01-01

    An analysis of synthetic aperture radar data of SP Mountain was undertaken to demonstrate the use of digital image processing techniques to aid in geologic interpretation of SAR data. These data were collected with the ERIM X- and L-band airborne SAR using like- and cross-polarizations. The resulting signal films were used to produce computer compatible tapes, from which four-channel imagery was generated. Slant range-to-ground range and range-azimuth-scale corrections were made in order to facilitate image registration; intensity corrections were also made. Manual interpretation of the imagery showed that L-band represented the geology of the area better than X-band. Several differences between the various images were also noted. Further digital analysis of the corrected data was done for enhancement purposes. This analysis included application of an MSS differencing routine and development of a routine for removal of relief displacement. It was found that accurate registration of the SAR channels is critical to the effectiveness of the differencing routine. Use of the relief displacement algorithm on the SP Mountain data demonstrated the feasibility of the technique.

  10. Sharpening advanced land imager multispectral data using a sensor model

    USGS Publications Warehouse

    Lemeshewsky, G.P.; ,

    2005-01-01

    The Advanced Land Imager (ALI) instrument on NASA's Earth Observing One (EO-1) satellite provides for nine spectral bands at 30m ground sample distance (GSD) and a 10m GSD panchromatic band. This report describes an image sharpening technique where the higher spatial resolution information of the panchromatic band is used to increase the spatial resolution of ALI multispectral (MS) data. To preserve the spectral characteristics, this technique combines reported deconvolution deblurring methods for the MS data with highpass filter-based fusion methods for the Pan data. The deblurring process uses the point spread function (PSF) model of the ALI sensor. Information includes calculation of the PSF from pre-launch calibration data. Performance was evaluated using simulated ALI MS data generated by degrading the spatial resolution of high resolution IKONOS satellite MS data. A quantitative measure of performance was the error between sharpened MS data and high resolution reference. This report also compares performance with that of a reported method that includes PSF information. Preliminary results indicate improved sharpening with the method reported here.
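
    A highpass-filter fusion step of the kind combined with deblurring in this technique can be sketched as below: the 30 m MS band is resampled to the 10 m panchromatic grid and the pan high-frequency detail is injected. The window size, injection weight, and exact 3x coverage are illustrative assumptions; the report's PSF-based deconvolution stage is not included.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter, zoom

    def hpf_sharpen(ms_band, pan, scale=3, size=5, weight=0.5):
        """Upsample one MS band to the pan grid and add weighted pan high-pass detail."""
        ms_up = zoom(ms_band.astype(float), scale, order=3)          # 30 m -> 10 m grid
        detail = pan.astype(float) - uniform_filter(pan.astype(float), size)
        h = min(ms_up.shape[0], pan.shape[0])
        w = min(ms_up.shape[1], pan.shape[1])
        return ms_up[:h, :w] + weight * detail[:h, :w]
    ```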

  11. Classification and Recognition of Tomb Information in Hyperspectral Image

    NASA Astrophysics Data System (ADS)

    Gu, M.; Lyu, S.; Hou, M.; Ma, S.; Gao, Z.; Bai, S.; Zhou, P.

    2018-04-01

    There are a large number of materials with important historical information in ancient tombs. However, in many cases, these substances have become obscured and are indistinguishable to the naked eye or a true-colour camera. In order to classify and identify materials in ancient tombs effectively, this paper applied hyperspectral imaging technology to archaeological research on an ancient tomb in Shanxi province. Firstly, the feature bands containing the main information at the bottom of the ancient tomb are selected by Principal Component Analysis (PCA) transformation to reduce the data dimensionality. Then, image classification was performed using a Support Vector Machine (SVM) based on the feature bands. Finally, the material at the bottom of the ancient tomb is identified by spectral analysis and spectral matching. The results show that SVM based on feature bands can not only ensure classification accuracy, but also shorten the data processing time and improve classification efficiency. In the material identification, it was found that material appearing identical in visible light is actually two different substances. This research provides a new reference and research idea for archaeological work.

  12. Space Radar Image of Oil Slicks

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a radar image of an offshore drilling field about 150 km (93 miles) west of Bombay, India, in the Arabian Sea. The dark streaks are extensive oil slicks surrounding many of the drilling platforms, which appear as bright white spots. Radar images are useful for detecting and measuring the extent of oil seepages on the ocean surface, from both natural and industrial sources. The long, thin streaks extending from many of the platforms are spreading across the sea surface, pushed by local winds. The larger dark patches are dispersed slicks that were likely discharged earlier than the longer streaks, when the winds were probably from a different direction. The dispersed oil will eventually spread out over the more dense water and become a layer which is a single molecule thick. Many forms of oil, both from biological and from petroleum sources, smooth out the ocean surface, causing the area to appear dark in radar images. There are also two forms of ocean waves shown in this image. The dominant group of large waves (upper center) are called internal waves. These waves are formed below the ocean surface at the boundary between layers of warm and cold water and they appear in the radar image because of the way they change the ocean surface. Ocean swells, which are waves generated by winds, are shown throughout the image but are most distinct in the blue area adjacent to the internal waves. Identification of waves provide oceanographers with information about the smaller scale dynamic processes of the ocean. This image was acquired by the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the space shuttle Endeavour on October 9, 1994. The colors are assigned to different frequencies and polarizations of the radar as follows: Red is L-band vertically transmitted, vertically received; green is the average of L-band vertically transmitted, vertically received and C-band vertically transmitted, vertically received; blue is C-band vertically transmitted, vertically received. The image is located at 19.25 degrees north latitude and 71.34 degrees east longitude and covers an area 20 km by 45 km (12.4 miles by 27.9 miles). SIR-C/X-SAR, a joint mission of the German, Italian and United States space agencies, is part of NASA's Mission to Planet Earth.

  13. Space Radar Image of Taal Volcano, Philippines

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is an image of Taal volcano, near Manila on the island of Luzon in the Philippines. The black area in the center is Taal Lake, which nearly fills the 30-kilometer-diameter (18-mile) caldera. The caldera rim consists of deeply eroded hills and cliffs. The large island in Taal Lake, which itself contains a crater lake, is known as Volcano Island. The bright yellow patch on the southwest side of the island marks the site of an explosion crater that formed during a deadly eruption of Taal in 1965. The image was acquired by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the space shuttle Endeavour on its 78th orbit on October 5, 1994. The image shows an area approximately 56 kilometers by 112 kilometers (34 miles by 68 miles) that is centered at 14.0 degrees north latitude and 121.0 degrees east longitude. North is toward the upper right of the image. The colors in this image were obtained using the following radar channels: red represents the L-band (horizontally transmitted and received); green represents the L-band (horizontally transmitted and vertically received); blue represents the C-band (horizontally transmitted and vertically received). Since 1572, Taal has erupted at least 34 times. Since early 1991, the volcano has been restless, with swarms of earthquakes, new steaming areas, ground fracturing, and increases in water temperature of the lake. Volcanologists and other local authorities are carefully monitoring Taal to understand if the current activity may foretell an eruption. Taal is one of 15 'Decade Volcanoes' that have been identified by the volcanology community as presenting large potential hazards to population centers. The bright area in the upper right of the image is the densely populated city of Manila, only 50 kilometers (30 miles) north of the central crater. Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.v.(DLR), the major partner in science, operations and data processing of X-SAR.

  14. Sensor feature fusion for detecting buried objects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, G.A.; Sengupta, S.K.; Sherwood, R.J.

    1993-04-01

    Given multiple registered images of the earth's surface from dual-band sensors, our system fuses information from the sensors to reduce the effects of clutter and improve the ability to detect buried or surface target sites. The sensor suite currently includes two sensors (5 micron and 10 micron wavelengths) and one ground penetrating radar (GPR) of the wide-band pulsed synthetic aperture type. We use a supervised learning pattern recognition approach to detect metal and plastic land mines buried in soil. The overall process consists of four main parts: preprocessing, feature extraction, feature selection, and classification. These parts are used in a two-step process to classify a subimage. The first step, referred to as feature selection, determines the features of sub-images which result in the greatest separability among the classes. The second step, image labeling, uses the selected features and the decisions from a pattern classifier to label the regions in the image which are likely to correspond to buried mines. We extract features from the images, and use feature selection algorithms to select only the most important features according to their contribution to correct detections. This allows us to save computational complexity and determine which of the sensors add value to the detection system. The most important features from the various sensors are fused using supervised learning pattern classifiers (including neural networks). We present results of experiments to detect buried land mines from real data, and evaluate the usefulness of fusing feature information from multiple sensor types, including dual-band infrared and ground penetrating radar. The novelty of the work lies mostly in the combination of the algorithms and their application to the very important and currently unsolved operational problem of detecting buried land mines from an airborne standoff platform.

  15. Land mine detection using multispectral image fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, G.A.; Sengupta, S.K.; Aimonetti, W.D.

    1995-03-29

    Our system fuses information contained in registered images from multiple sensors to reduce the effects of clutter and improve the ability to detect surface and buried land mines. The sensor suite currently consists of a camera that acquires images in six bands (400 nm, 500 nm, 600 nm, 700 nm, 800 nm and 900 nm). Past research has shown that it is extremely difficult to distinguish land mines from background clutter in images obtained from a single sensor. It is hypothesized, however, that information fused from a suite of various sensors is likely to provide better detection reliability, because the suite of sensors detects a variety of physical properties that are more separable in feature space. The materials surrounding the mines can include natural materials (soil, rocks, foliage, water, etc.) and some artifacts. We use a supervised learning pattern recognition approach to detecting the metal and plastic land mines. The overall process consists of four main parts: preprocessing, feature extraction, feature selection, and classification. These parts are used in a two-step process to classify a subimage. We extract features from the images, and use feature selection algorithms to select only the most important features according to their contribution to correct detections. This allows us to save computational complexity and determine which of the spectral bands add value to the detection system. The most important features from the various sensors are fused using a supervised learning pattern classifier (the probabilistic neural network). We present results of experiments to detect land mines from real data collected from an airborne platform, and evaluate the usefulness of fusing feature information from multiple spectral bands.

  16. Processing challenges in the XMM-Newton slew survey

    NASA Astrophysics Data System (ADS)

    Saxton, Richard D.; Altieri, Bruno; Read, Andrew M.; Freyberg, Michael J.; Esquej, M. P.; Bermejo, Diego

    2005-08-01

    The great collecting area of the mirrors, coupled with the high quantum efficiency of the EPIC detectors, has made XMM-Newton the most sensitive X-ray observatory flown to date. This is particularly evident during slew exposures which, while giving only 15 seconds of on-source time, actually constitute a 2-10 keV survey ten times deeper than current "all-sky" catalogues. Here we report on progress towards making a catalogue of slew detections constructed from the full 0.2-12 keV energy band and discuss the challenges associated with processing the slew data. The fast (90 degrees per hour) slew speed results in images which are smeared by different amounts depending on the readout mode, effectively changing the form of the point spread function. The extremely low background in slew images changes the optimum source searching criteria, such that searching a single image using the full energy band is seen to be more sensitive than splitting the data into discrete energy bands. False detections due to optical loading by bright stars, the wings of the PSF in very bright sources and single-frame detector flashes are considered, and techniques for identifying and removing these spurious sources from the final catalogue are outlined. Finally, the attitude reconstruction of the satellite during the slewing maneuver is complex. We discuss the implications of this on the positional accuracy of the catalogue.

  17. Daytime Mud Detection for Unmanned Ground Vehicle Autonomous Navigation

    DTIC Science & Technology

    2008-12-01

    … disambiguate shadows from wet soil than shadows from dry soil. [Figure 5 panels: (a) Red band, (b) NIR band, (c) NDVI image, (d) Brightness image.] … spectral bands to segment wet soil. Red and NIR bands (Figures 5a and 5b) can be used to generate a Normalized Difference Vegetation Index (NDVI) … along the soil line image (Figure 5f) can be generated. The NDVI and normal distance to the soil line images can be used to segment soil from …

  18. VizieR Online Data Catalog: Galactic CHaMP. II. Dense gas clumps. (Ma+, 2013)

    NASA Astrophysics Data System (ADS)

    Ma, B.; Tan, J. C.; Barnes, P. J.

    2015-04-01

    A total of 303 dense gas clumps have been detected using the HCO+(1-0) line in the CHaMP survey (Paper I, Barnes et al. 2011, J/ApJS/196/12). In this article we have derived the SED for these clumps using Spitzer, MSX, and IRAS data. The Midcourse Space Experiment (MSX) was launched in 1996 April. It conducted a Galactic plane survey (0

  19. A near-infrared imaging survey of interacting galaxies - The disk-disk merger candidates subset

    NASA Technical Reports Server (NTRS)

    Stanford, S. A.; Bushouse, H. A.

    1991-01-01

    Near-infrared imaging obtained for systems believed to be advanced disk-disk mergers is presented and discussed. These systems were chosen from a sample of approximately 170 objects from the Arp Atlas of Peculiar Galaxies which have been imaged in the JHK bands as part of an investigation into the stellar component of interacting galaxies. Of the eight remnants which show optical signs of a disk-disk merger, the near-infrared surface brightness profiles are well fitted by an r^(1/4) law over all measured radii in four systems, and out to radii of about 3 kpc in three systems. These K-band profiles indicate that most of the remnants in the sample either have finished or are in the process of relaxing into a mass distribution like that of normal elliptical galaxies.

  20. Novel instrumentation of multispectral imaging technology for detecting tissue abnormity

    NASA Astrophysics Data System (ADS)

    Yi, Dingrong; Kong, Linghua

    2012-10-01

    Multispectral imaging is becoming a powerful tool in a wide range of biological and clinical studies by adding spectral, spatial and temporal dimensions to visualize tissue abnormality and the underlying biological processes. A conventional spectral imaging system includes two physically separated major components: a band-pass selection device (such as a liquid crystal tunable filter or a diffraction grating) and a scientific-grade monochromatic camera, and is expensive and bulky. Recently, a micro-arrayed narrow-band optical mosaic filter was invented and successfully fabricated to reduce the size and cost of multispectral imaging devices in order to meet the clinical requirements for medical diagnostic imaging applications. However, the challenging issue of how to integrate and place the micro filter mosaic chip on the target focal plane, i.e., the imaging sensor, of an off-the-shelf CMOS/CCD camera has not been reported anywhere. This paper presents the methods and results of integrating such a miniaturized filter with off-the-shelf CMOS imaging sensors to produce handheld real-time multispectral imaging devices for the application of early stage pressure ulcer (ESPU) detection. Unlike conventional multispectral imaging devices, which are bulky and expensive, the resulting handheld real-time multispectral ESPU detector can produce multiple images at different center wavelengths with a single shot, thereby eliminating the image registration procedure required by traditional multispectral imaging technologies.

  1. Tissues segmentation based on multi spectral medical images

    NASA Astrophysics Data System (ADS)

    Li, Ya; Wang, Ying

    2017-11-01

    In multispectral medical images, each band image captures the tissue feature that is most apparent at that band, owing to the different optical characteristics of different tissues in specific spectral bands. In this paper, the tissues were segmented using their spectral information in each band of the multispectral medical images. Four Local Binary Pattern descriptors were constructed to extract blood vessels based on the gray-level difference between the blood vessels and their neighbors. The segmented tissue from each band image was then merged into a single clear image.
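
    The record does not spell out the four custom descriptors, but a plain 8-neighbor Local Binary Pattern, which encodes the gray-level difference between a pixel and its neighbors, conveys the underlying idea. The Python sketch below (NumPy assumed) is a generic illustration, not the authors' descriptors.

      import numpy as np

      def lbp_8neighbor(img):
          """Basic 3x3 Local Binary Pattern codes for the interior pixels of img.

          Each of the 8 neighbors contributes one bit: 1 if the neighbor is
          greater than or equal to the center pixel, 0 otherwise.
          """
          img = img.astype(np.float64)
          center = img[1:-1, 1:-1]
          offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                     (1, 1), (1, 0), (1, -1), (0, -1)]
          codes = np.zeros(center.shape, dtype=np.int32)
          for bit, (dy, dx) in enumerate(offsets):
              neighbor = img[1 + dy:img.shape[0] - 1 + dy,
                             1 + dx:img.shape[1] - 1 + dx]
              codes += (neighbor >= center).astype(np.int32) * (1 << bit)
          return codes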

  2. Space Radar Image of Colima Volcano, Jalisco, Mexico

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is an image of the Colima volcano in Jalisco, Mexico, a vigorously active volcano that erupted as recently as July 1994. The eruption partially destroyed a lava dome at the summit and deposited a new layer of ash on the volcano's southern slopes. Surrounding communities face a continuing threat of ash falls and volcanic mudflows from the volcano, which has been designated one of 15 high-risk volcanoes for scientific study during the next decade. This image was acquired by the Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the space shuttle Endeavour on its 24th orbit on October 1, 1994. The image is centered at 19.4 degrees north latitude, 103.7 degrees west longitude. The area shown is approximately 35.7 kilometers by 37.5 kilometers (22 miles by 23 miles). This single-frequency, multi-polarized SIR-C image shows: red as L-band horizontally transmitted and received; green as L-band horizontally transmitted and vertically received; and blue as the ratio of the two channels. The summit area appears orange and the recent deposits fill the valleys along the south and southwest slopes. Observations from space are helping scientists understand the behavior of dangerous volcanoes and will be used to mitigate the effects of future eruptions on surrounding populations. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: the L-band (24 cm), the C-band (6 cm) and the X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V.(DLR), the major partner in science, operations and data processing of X-SAR.

  3. Multispectral Image Compression Based on DSC Combined with CCSDS-IDC

    PubMed Central

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually runs on a satellite, where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (such as the 3D DWT or 3D DCT) are too complex to implement in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which offers low complexity, high robustness, and high performance. First, each band is sparsely represented by a DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is deeply coupled with a Slepian-Wolf (SW) DSC strategy based on QC-LDPC codes to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS) algorithm achieves better compression performance than traditional compression approaches. PMID:25110741
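
    As a rough illustration of the per-band front end described above (a DWT followed by bit-plane extraction), the following Python sketch uses the PyWavelets package; the wavelet choice, the crude magnitude quantizer, and the omission of the Slepian-Wolf/QC-LDPC stage are simplifications for illustration, not the authors' implementation.

      import numpy as np
      import pywt  # PyWavelets, assumed available

      def band_to_bitplanes(band, wavelet="bior4.4", level=3, nplanes=8):
          """Per-band front end sketch: 2-D DWT, then bit planes of the
          (crudely quantized) coefficient magnitudes."""
          coeffs = pywt.wavedec2(band.astype(np.float64), wavelet, level=level)
          flat, _ = pywt.coeffs_to_array(coeffs)   # all coefficients in one array
          q = np.abs(flat).astype(np.int64)        # crude magnitude quantizer
          planes = [((q >> p) & 1).astype(np.uint8) for p in range(nplanes)]
          return planes                            # planes[0] is the least significant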

  4. Multispectral image compression based on DSC combined with CCSDS-IDC.

    PubMed

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually runs on a satellite, where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (such as the 3D DWT or 3D DCT) are too complex to implement in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which offers low complexity, high robustness, and high performance. First, each band is sparsely represented by a DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is deeply coupled with a Slepian-Wolf (SW) DSC strategy based on QC-LDPC codes to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS) algorithm achieves better compression performance than traditional compression approaches.

  5. Space Radar Image of Manaus region of Brazil

    NASA Technical Reports Server (NTRS)

    1994-01-01

    These L-band images of the Manaus region of Brazil were acquired by the Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the space shuttle Endeavour. The left image was acquired on April 12, 1994, and the middle image was acquired on October 3, 1994. The area shown is approximately 8 kilometers by 40 kilometers (5 miles by 25 miles). The two large rivers in this image, the Rio Negro (top) and the Rio Solimoes (bottom), combine at Manaus (west of the image) to form the Amazon River. The image is centered at about 3 degrees south latitude and 61 degrees west longitude. North is toward the top left of the images. The differences in brightness between the images reflect changes in the scattering of the radar channel. In this case, the changes are indicative of flooding. A flooded forest has a higher backscatter at L-band (horizontally transmitted and received) than an unflooded river. The extent of the flooding is much greater in the April image than in the October image, and corresponds to the annual, 10-meter (33-foot) rise and fall of the Amazon River. A third image at right shows the change in the April and October images and was created by determining which areas had significant decreases in the intensity of radar returns. These areas, which appear blue on the third image at right, show the dramatic decrease in the extent of flooded forest, as the level of the Amazon River falls. The flooded forest is a vital habitat for fish and floating meadows are an important source of atmospheric methane. This demonstrates the capability of SIR-C/X-SAR to study important environmental changes that are impossible to see with optical sensors over regions such as the Amazon, where frequent cloud cover and dense forest canopies obscure monitoring of floods. Field studies by boat, on foot and in low-flying aircraft by the University of California at Santa Barbara, in collaboration with Brazil's Instituto Nacional de Pesguisas Estaciais, during the first and second flights of the SIR-C/X-SAR system have validated the interpretation of the radar images. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V.(DLR), the major partner in science, operations and data processing of X-SAR.

  6. Hyperspectral Image Analysis for Skin Tumor Detection

    NASA Astrophysics Data System (ADS)

    Kong, Seong G.; Park, Lae-Jeong

    This chapter presents hyperspectral imaging of fluorescence for noninvasive detection of tumorous tissue on mouse skin. Hyperspectral imaging sensors collect two-dimensional (2D) image data of an object in a number of narrow, adjacent spectral bands. This high-resolution measurement of spectral information reveals a continuous emission spectrum for each image pixel useful for skin tumor detection. The hyperspectral image data used in this study are fluorescence intensities of a mouse sample consisting of 21 spectral bands in the visible spectrum of wavelengths ranging from 440 to 640 nm. Fluorescence signals are measured using a laser excitation source with the center wavelength of 337 nm. An acousto-optic tunable filter is used to capture individual spectral band images at a 10-nm resolution. All spectral band images are spatially registered with the reference band image at 490 nm to obtain exact pixel correspondences by compensating the offsets caused during the image capture procedure. The support vector machines with polynomial kernel functions provide decision boundaries with a maximum separation margin to classify malignant tumor and normal tissue from the observed fluorescence spectral signatures for skin tumor detection.
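
    A minimal sketch of the classification step, assuming scikit-learn and using synthetic placeholder spectra in place of the measured 21-band fluorescence signatures, might look like the following; the kernel degree and regularization constant are illustrative choices.

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 21))          # placeholder per-pixel spectra (21 bands)
      y = rng.integers(0, 2, size=200)        # placeholder tumor/normal labels

      clf = SVC(kernel="poly", degree=3, C=1.0)   # polynomial kernel, as in the record
      clf.fit(X, y)
      predictions = clf.predict(X[:5])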

  7. Fuzzy Markov random fields versus chains for multispectral image segmentation.

    PubMed

    Salzenstein, Fabien; Collet, Christophe

    2006-11-01

    This paper deals with a comparison of recent statistical models based on fuzzy Markov random fields and chains for multispectral image segmentation. The fuzzy scheme takes into account discrete and continuous classes which model the imprecision of the hidden data. In this framework, we assume the dependence between bands and we express the general model for the covariance matrix. A fuzzy Markov chain model is developed in an unsupervised way. This method is compared with the fuzzy Markovian field model previously proposed by one of the authors. The segmentation task is processed with Bayesian tools, such as the well-known MPM (Mode of Posterior Marginals) criterion. Our goal is to compare the robustness and rapidity for both methods (fuzzy Markov fields versus fuzzy Markov chains). Indeed, such fuzzy-based procedures seem to be a good answer, e.g., for astronomical observations when the patterns present diffuse structures. Moreover, these approaches allow us to process missing data in one or several spectral bands which correspond to specific situations in astronomy. To validate both models, we perform and compare the segmentation on synthetic images and raw multispectral astronomical data.

  8. Investigation of LANDSAT D Thematic Mapper geometric performance: Line to line and band to band registration. [Toulouse, France and Mississippi, U.S.A.

    NASA Technical Reports Server (NTRS)

    Begni, G.; BOISSIN; Desachy, M. J.; PERBOS

    1984-01-01

    The geometric accuracy of LANDSAT TM raw data of Toulouse (France), raw data of Mississippi, and preprocessed data of Mississippi was examined using a CDC computer. Analog images were restituted on the VIZIR SEP device. The methods used for line to line and band to band registration are based on automatic correlation techniques and are widely used in automated image to image registration at CNES. Causes of intraband and interband misregistration are identified and statistics are given for both line to line and band to band misregistration.
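
    A common way to estimate line-to-line or band-to-band offsets by automatic correlation is to locate the peak of a cross-correlation surface computed through the FFT; the NumPy sketch below is a generic illustration of that idea, not the CNES implementation.

      import numpy as np

      def estimate_shift(band_a, band_b):
          """Integer-pixel offset between two co-located band images, found at
          the peak of their FFT-based cross-correlation."""
          a = band_a - band_a.mean()
          b = band_b - band_b.mean()
          corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          # wrap offsets larger than half the image size to negative shifts
          return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))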

  9. Apollo 9 multiband photography experiment S065

    NASA Technical Reports Server (NTRS)

    Schowengerdt, R. A.; Slater, P. N.

    1972-01-01

    Fourier analysis was applied to microdensitometer scans of a selected region of one SO65 frame in each of the three black-and-white bands. The approach was unique because a somewhat arbitrary section of the image was used and not limited to available targets as in edge analysis. Comparison of duplicates and calculation of absolute SO65 MTF were done by applying linear systems theory to the spatial spectra of the image scans. It was found that the duplication process was nonlinear and resulted in general amplification of spatial frequency modulation. However, the increase in modulation was offset by a corresponding increase in the granularity of the copies. The amount of increase seemed to be related to the initial granularity, but a direct relationship was not verified. Band-to-band comparison of image quality was achieved in the form of signal-to-noise ratio curves as a function of spatial frequency for each band. From this standpoint the DD band was an order of magnitude better than the other two. There were several factors that restricted the analysis of the Apollo 9 imagery. Among these were the lack of precise sensitometric and optical system data on the high altitude photography. In addition it was determined that there was only one simultaneous pair of high altitude and SO65 frames. Finally, the original SO65 photography was not available for scanning (for obvious reasons), thus eliminating a reference base for granularity and SO65 MTF determination.

  10. An adaptive band selection method for dimension reduction of hyper-spectral remote sensing image

    NASA Astrophysics Data System (ADS)

    Yu, Zhijie; Yu, Hui; Wang, Chen-sheng

    2014-11-01

    Hyper-spectral remote sensing data are acquired by imaging the same area at many wavelengths and normally consist of hundreds of band images. Hyper-spectral images provide not only spatial information but also high-resolution spectral information, and they have been widely used in environmental monitoring, mineral investigation, and military reconnaissance. However, because of the correspondingly large data volume, hyper-spectral images are difficult to transmit and store, and dimension reduction techniques are needed to address this problem. Because the hyper-spectral bands are highly correlated and highly redundant, dimension reduction methods can be applied to compress the data volume. This paper proposes a novel band selection-based dimension reduction method that adaptively selects the bands containing the most information and detail. The proposed method applies principal component analysis (PCA) and then computes an index for every band. The indexes are ranked in descending order of magnitude, and, based on a threshold, the system adaptively selects a reasonable subset of bands. The proposed method avoids the shortcomings of transform-based dimension reduction methods and prevents the original spectral information from being lost. Its performance has been validated in several experiments, whose results show that the proposed algorithm can reduce the dimensionality of a hyper-spectral image with little information loss by adaptively selecting band images.
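
    A hedged sketch of this kind of PCA-driven band selection is given below (NumPy and scikit-learn assumed). The per-band index used here, a variance-weighted loading energy, is an illustrative stand-in for the index defined by the authors, which the record does not specify.

      import numpy as np
      from sklearn.decomposition import PCA

      def select_bands(cube, threshold=0.8):
          """Rank bands by a PCA-derived index and keep the top-ranked bands
          whose cumulative index reaches the given threshold."""
          rows, cols, bands = cube.shape
          X = cube.reshape(-1, bands).astype(np.float64)
          pca = PCA().fit(X)
          index = np.abs(pca.components_).T @ pca.explained_variance_ratio_
          order = np.argsort(index)[::-1]                  # rank large to small
          cum = np.cumsum(index[order]) / index.sum()
          keep = order[: int(np.searchsorted(cum, threshold)) + 1]
          return np.sort(keep)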

  11. Space Radar Image of Safsaf, North Africa

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a false-color image of the uninhabited Safsaf Oasis in southern Egypt near the Egypt/Sudan border. It was produced from data obtained from the L-band and C-band radars that are part of the Spaceborne Imaging Radar C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) onboard space shuttle Endeavour on April 9, 1994. The image is centered at 22 degrees north latitude, 29 degrees east longitude. It shows detailed structures of bedrock; the dark blue sinuous lines are braided channels that occupy part of an old broad river valley. On the ground and in optical photographs, this big valley and the channels in it are invisible because they are entirely covered by windblown sand. Some of these same channels were observed in SIR-A images in 1981. It is hypothesized that the large valley was carved by one of several ancient predecessor rivers that crossed this part of North Africa, flowing westward, tens of millions of years before the Nile River existed. The Nile flows north about 300 kilometers (200 miles) to the east. The small channels are younger, and probably formed during relatively wet climatic periods within the past few hundred thousand years. This image shows that the channels are in a river valley located in an area where U.S. Geological Survey geologists and archeologists discovered an unusual concentration of hand axes (stone tools) used by Early Man (Homo erectus) hundreds of thousands of years ago. The image clearly shows that in wetter times, the valley would have supported game animals and vegetation. Today, as a result of climate change, the area is uninhabited and lacks water except for a few scattered oases. This color composite image was produced from C-band and L-band horizontal polarization images. The C-band image was assigned red, the L-band (HH) polarization image is shown in green, and the ratio of these two images (LHH/CHH) appears in blue. The primary and composite colors on the image indicate the degree to which the C-band, L-band, their ratio -- or some combination of all three -- respond to the roughness of the radar backscattering surface. Using this coloring scheme, areas that appear bright at both L-band and C-band are colored yellow, while areas that appear brighter at L-band than C-band appear more blue. Detailed analysis of this scene indicates that the separate C-band and L-band images used to produce this color composite have a very similar overall appearance. This suggests that the C-band and the L-band signals are both easily penetrating the thin 1- to 12-centimeter (0.5- to 5-inch) 'average' surface cover of loose windblown sand, and are commonly 'seeing' similar interfaces just below that cover. This radar interface may be at the scattered rocky outcrops on the ground surface, but is more likely to be the shallow underlying surfaces of river gravel or bedrock, which are generally covered by only a few inches of windblown sand. Virtually nothing visible on this radar composite image can be seen, either when standing on the ground or when viewing photographs or satellite images such as the United States' Landsat or the French SPOT satellite. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). 
The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI).

  12. Space Radar Image of Kilauea Volcano, Hawaii

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This three-dimensional image of the volcano Kilauea was generated based on interferometric fringes derived from two X-band Synthetic Aperture Radar data takes on April 13, 1994 and October 4, 1994. The altitude lines are based on quantitative interpolation of the topographic fringes. The level difference between neighboring altitude lines is 20 meters (66 feet). The ground area covers 12 kilometers by 4 kilometers (7.5 miles by 2.5 miles). The altitude difference in the image is about 500 meters (1,640 feet). The volcano is located around 19.58 degrees north latitude and 155.55 degrees west longitude. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V.(DLR), the major partner in science, operations and data processing of X-SAR. The Instituto Ricerca Elettromagnetismo Componenti Elettronici (IRECE) at the University of Naples was a partner in the interferometry analysis.

  13. Detector-level spectral characterization of the Suomi National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite long-wave infrared bands M15 and M16.

    PubMed

    Padula, Francis; Cao, Changyong

    2015-06-01

    The Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) sensor data record (SDR) product achieved validated maturity status in March 2014 after roughly two years of on-orbit characterization (S-NPP spacecraft launched on 28 October 2011). During post-launch analysis the VIIRS Sea Surface Temperature (SST) Environmental Data Record (EDR) team observed an anomalous striping pattern in the daytime SST data. Daytime SST retrievals use the two VIIRS long-wave infrared bands: M15 (10.7 μm) and M16 (11.8 μm). To assess possible root causes due to detector-level spectral response function (SRF) effects, a study was conducted to compare the radiometric response of the detector-level and operational-band averaged SRFs of VIIRS bands M15 and M16. The study used simulated hyperspectral blackbody radiance data and clear-sky ocean hyperspectral radiances under different atmospheric conditions. It was concluded that the SST product is likely impacted by small differences in detector-level SRFs and that if users require optimal radiometric performance, detector-level processing is recommended for both SDR and EDR products. Future work should investigate potential SDR product improvements through detector-level processing in support of the generation of Suomi NPP VIIRS climate quality SDRs.
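
    The comparison described above rests on band-averaging a hyperspectral radiance spectrum with a spectral response function, L_band = integral(SRF * L) / integral(SRF). A minimal NumPy sketch of that integral, assuming the SRF and radiance are sampled on a common wavelength grid, is shown below; applying it with detector-level versus band-averaged SRFs quantifies the striping contribution.

      import numpy as np

      def band_averaged_radiance(wavelengths, radiance, srf):
          """Band-averaged radiance: integral(SRF * L) / integral(SRF),
          with all three arrays sampled on the same wavelength grid."""
          return np.trapz(srf * radiance, wavelengths) / np.trapz(srf, wavelengths)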

  14. Hyperspectral imaging for food processing automation

    NASA Astrophysics Data System (ADS)

    Park, Bosoon; Lawrence, Kurt C.; Windham, William R.; Smith, Doug P.; Feldner, Peggy W.

    2002-11-01

    This paper presents research results demonstrating that hyperspectral imaging can be used effectively for detecting feces (from the duodenum, ceca, and colon) and ingesta on the surface of poultry carcasses, and its potential application to real-time, on-line processing of poultry for automated safety inspection. The hyperspectral imaging system included a line scan camera with a prism-grating-prism spectrograph, fiber optic line lighting, motorized lens control, and hyperspectral image processing software. Hyperspectral image processing algorithms, specifically a band ratio of dual-wavelength (565/517) images followed by thresholding, were effective for identifying fecal and ingesta contamination of poultry carcasses. A multispectral imaging system including a common aperture camera with three optical trim filters (515.4 nm with 8.6-nm FWHM, 566.4 nm with 8.8-nm FWHM, and 631 nm with 10.2-nm FWHM), which were selected and validated with the hyperspectral imaging system, was developed for a real-time, on-line application. The total image processing time required for the multispectral images captured by the common aperture camera was approximately 251 msec, or 3.99 frames/sec. A preliminary test shows that the accuracy of the real-time multispectral imaging system in detecting feces and ingesta on corn/soybean-fed poultry carcasses was 96%. However, many false positive spots that cause system errors were also detected.
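
    The core of the detection algorithm, a dual-wavelength band ratio followed by thresholding, can be sketched in a few lines of Python; the ratio threshold below is illustrative, not the value used in the study.

      import numpy as np

      def contamination_mask(band_565, band_517, ratio_threshold=1.05):
          """Band ratio (565 nm / 517 nm) followed by thresholding."""
          num = band_565.astype(np.float64)
          den = np.maximum(band_517.astype(np.float64), 1e-6)  # avoid divide-by-zero
          return (num / den) > ratio_threshold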

  15. Electronic pictures from charged-coupled devices

    NASA Technical Reports Server (NTRS)

    Mccann, D. H.; Turly, A. P.; White, M.

    1979-01-01

    Imaging system uses charge-coupled devices (CCD's) to generate TV-like pictures with high resolution, sensitivity, and signal-to-noise ratio. It combines detectors for five spectral bands as well as processing and control circuitry all on single silicon chip.

  16. Improving Spectral Image Classification through Band-Ratio Optimization and Pixel Clustering

    NASA Astrophysics Data System (ADS)

    O'Neill, M.; Burt, C.; McKenna, I.; Kimblin, C.

    2017-12-01

    The Underground Nuclear Explosion Signatures Experiment (UNESE) seeks to characterize non-prompt observables from underground nuclear explosions (UNE). As part of this effort, we evaluated the ability of DigitalGlobe's WorldView-3 (WV3) to detect and map UNE signatures. WV3 is the current state-of-the-art, commercial, multispectral imaging satellite; however, it has relatively limited spectral and spatial resolutions. These limitations impede image classifiers from detecting targets that are spatially small and lack distinct spectral features. In order to improve classification results, we developed custom algorithms to reduce false positive rates while increasing true positive rates via a band-ratio optimization and pixel clustering front-end. The clusters resulting from these algorithms were processed with standard spectral image classifiers such as Mixture-Tuned Matched Filter (MTMF) and Adaptive Coherence Estimator (ACE). WV3 and AVIRIS data of Cuprite, Nevada, were used as a validation data set. These data were processed with a standard classification approach using MTMF and ACE algorithms. They were also processed using the custom front-end prior to the standard approach. A comparison of the results shows that the custom front-end significantly increases the true positive rate and decreases the false positive rate. This work was done by National Security Technologies, LLC, under Contract No. DE-AC52-06NA25946 with the U.S. Department of Energy. DOE/NV/25946-3283.
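
    A minimal sketch of a band-ratio plus pixel-clustering front end, assuming NumPy and scikit-learn, is shown below; the band pairs, cluster count, and use of k-means are illustrative assumptions, not the custom algorithms developed for UNESE.

      import numpy as np
      from sklearn.cluster import KMeans

      def cluster_band_ratios(cube, band_pairs, n_clusters=8):
          """Build band-ratio features from selected band pairs, then group
          pixels with k-means so a downstream classifier (e.g. MTMF or ACE)
          can score clusters instead of raw pixels."""
          rows, cols, _ = cube.shape
          ratios = [cube[:, :, i] / np.maximum(cube[:, :, j], 1e-6)
                    for i, j in band_pairs]
          feats = np.stack(ratios, axis=-1).reshape(-1, len(band_pairs))
          labels = KMeans(n_clusters=n_clusters, n_init=10,
                          random_state=0).fit_predict(feats)
          return labels.reshape(rows, cols)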

  17. Semi-Automatic Normalization of Multitemporal Remote Images Based on Vegetative Pseudo-Invariant Features

    PubMed Central

    Garcia-Torres, Luis; Caballero-Novella, Juan J.; Gómez-Candón, David; De-Castro, Ana Isabel

    2014-01-01

    A procedure to achieve the semi-automatic relative image normalization of multitemporal remote images of an agricultural scene called ARIN was developed using the following procedures: 1) defining the same parcel of selected vegetative pseudo-invariant features (VPIFs) in each multitemporal image; 2) extracting data concerning the VPIF spectral bands from each image; 3) calculating the correction factors (CFs) for each image band to fit each image band to the average value of the image series; and 4) obtaining the normalized images by linear transformation of each original image band through the corresponding CF. ARIN software was developed to semi-automatically perform the ARIN procedure. We have validated ARIN using seven GeoEye-1 satellite images taken over the same location in Southern Spain from early April to October 2010 at an interval of approximately 3 to 4 weeks. The following three VPIFs were chosen: citrus orchards (CIT), olive orchards (OLI) and poplar groves (POP). In the ARIN-normalized images, the range, standard deviation (s. d.) and root mean square error (RMSE) of the spectral bands and vegetation indices were considerably reduced compared to the original images, regardless of the VPIF or the combination of VPIFs selected for normalization, which demonstrates the method’s efficacy. The correlation coefficients between the CFs among VPIFs for any spectral band (and all bands overall) were calculated to be at least 0.85 and were significant at P = 0.95, indicating that the normalization procedure was comparably performed regardless of the VPIF chosen. ARIN method was designed only for agricultural and forestry landscapes where VPIFs can be identified. PMID:24604031
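
    The arithmetic behind steps 2 to 4 reduces to computing, for each image and band, a correction factor CF = (series-average VPIF value) / (image VPIF value) and applying it as a multiplicative (linear) transform. The Python sketch below (NumPy assumed) illustrates that idea under the simplifying assumption of a single shared VPIF mask; it is not the ARIN software itself.

      import numpy as np

      def arin_normalize(series, vpif_mask):
          """series: list of (rows, cols, bands) arrays; vpif_mask: boolean mask
          of the VPIF pixels, shared by all images in the series."""
          means = np.array([[img[:, :, b][vpif_mask].mean()
                             for b in range(img.shape[2])] for img in series])
          target = means.mean(axis=0)            # per-band average over the series
          cfs = target / means                   # one correction factor per image/band
          normalized = [img * cf for img, cf in zip(series, cfs)]
          return normalized, cfs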

  18. The Cellular Origins of the Outer Retinal Bands in Optical Coherence Tomography Images

    PubMed Central

    Jonnal, Ravi S.; Kocaoglu, Omer P.; Zawadzki, Robert J.; Lee, Sang-Hyuck; Werner, John S.; Miller, Donald T.

    2014-01-01

    Purpose. To test the recently proposed hypothesis that the second outer retinal band, observed in clinical OCT images, originates from the inner segment ellipsoid, by measuring: (1) the thickness of this band within single cone photoreceptors, and (2) its respective distance from the putative external limiting membrane (band 1) and cone outer segment tips (band 3). Methods. Adaptive optics-optical coherence tomography images were acquired from four subjects without known retinal disease. Images were obtained at foveal (2°) and perifoveal (5°) locations. Cone photoreceptors (n = 9593) were identified and segmented in three dimensions using custom software. Features corresponding to bands 1, 2, and 3 were automatically identified. The thickness of band 2 was assessed in each cell by fitting the longitudinal reflectance profile of the band with a Gaussian function. Distances between bands 1 and 2, and between 2 and 3, respectively, were also measured in each cell. Two independent calibration techniques were employed to determine the depth scale (physical length per pixel) of the imaging system. Results. When resolved within single cells, the thickness of band 2 is a factor of three to four times narrower than in corresponding clinical OCT images. The distribution of band 2 thickness across subjects and eccentricities had a modal value of 4.7 μm, with 48% of the cones falling between 4.1 and 5.2 μm. No significant differences were found between cells in the fovea and perifovea. The distance separating bands 1 and 2 was found to be larger than the distance between bands 2 and 3, across subjects and eccentricities, with a significantly larger difference at 5° than 2°. Conclusions. On the basis of these findings, we suggest that ascription of the outer retinal band 2 to the inner segment ellipsoid is unjustified, because the ellipsoid is both too thick and proximally located to produce the band. PMID:25324288
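
    The per-cell thickness measurement amounts to fitting a Gaussian to the longitudinal reflectance profile of band 2 and converting its width to a full width at half maximum (FWHM = 2*sqrt(2*ln 2)*sigma). A minimal sketch using SciPy's curve_fit, with hypothetical depth and reflectance arrays, is shown below.

      import numpy as np
      from scipy.optimize import curve_fit

      def gaussian(z, amp, mu, sigma, offset):
          return amp * np.exp(-0.5 * ((z - mu) / sigma) ** 2) + offset

      def band2_thickness(depth_um, reflectance):
          """FWHM (in the units of depth_um) of a Gaussian fitted to the
          longitudinal reflectance profile of band 2."""
          p0 = (reflectance.max() - reflectance.min(),
                depth_um[np.argmax(reflectance)], 1.0, reflectance.min())
          popt, _ = curve_fit(gaussian, depth_um, reflectance, p0=p0)
          return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])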

  19. VIIRS day-night band gain and offset determination and performance

    NASA Astrophysics Data System (ADS)

    Geis, J.; Florio, C.; Moyer, D.; Rausch, K.; De Luccia, F. J.

    2012-09-01

    On October 28th, 2011, the Visible-Infrared Imaging Radiometer Suite (VIIRS) was launched on-board the Suomi National Polar-orbiting Partnership (NPP) spacecraft. The instrument has 22 spectral bands: 14 reflective solar bands (RSB), 7 thermal emissive bands (TEB), and a Day Night Band (DNB). The DNB is a panchromatic, solar reflective band that provides visible through near infrared (IR) imagery of earth scenes with radiances spanning 7 orders of magnitude. In order to function over this large dynamic range, the DNB employs a focal plane array (FPA) consisting of three gain stages: the low gain stage (LGS), the medium gain stage (MGS), and the high gain stage (HGS). The final product generated from a DNB raw data record (RDR) is a radiance sensor data record (SDR). Generation of the SDR requires accurate knowledge of the dark offsets and gain coefficients for each DNB stage. These are measured on-orbit and stored in lookup tables (LUT) that are used during ground processing. This paper will discuss the details of the offset and gain measurement, data analysis methodologies, the operational LUT update process, and results to date including a first look at trending of these parameters over the early life of the instrument.
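
    A minimal sketch of the count-to-radiance step, assuming a simple linear model L = (DN - offset) / gain with the stage-appropriate dark offset and gain drawn from the lookup tables, is shown below; this linear form is an illustrative assumption, not the operational SDR equation.

      import numpy as np

      def dnb_radiance(dn, dark_offset, gain):
          """Radiance from a DNB raw count using the selected stage's dark
          offset and gain (illustrative linear model)."""
          return (np.asarray(dn, dtype=np.float64) - dark_offset) / gain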

  20. Three frequency false-color image of Oberpfaffenhofen supersite in Germany

    NASA Image and Video Library

    1994-04-18

    STS059-S-080 (18 April 1994) --- This is a false-color three frequency image of the Oberpfaffenhofen supersite, an area just south-west of Munich in southern Germany. The colors show the different conditions that the three radars (X-Band, C-Band and L-Band) can see on the ground. The image covers a 27 by 36 kilometer area. The center of the site is 48.09 degrees north and 11.29 degrees east. The image was acquired by the Spaceborne Imaging Radar-C and X-Band Synthetic Aperture Radar (SIR-C/X-SAR) onboard the Space Shuttle Endeavour on April 11, 1994. The dark area on the left is Lake Ammersee. The two smaller lakes are the Woerthsee and the Pilsensee. On the bottom is the tip of the Starnbergersee. The city of Munich is located just beyond the right of the image. The Oberpfaffenhofen supersite is the major test site for SIR-C/X-SAR calibration and scientific investigations concerning agriculture, forestry, hydrology and geology. This color composite image is a three frequency overlay. L-Band total power was assigned red, the C-Band total power is shown in green and the X-Band VV polarization appears blue. The colors on the image stress the differences between the L-Band, C-Band, X-Band images. If the three radar antennas were getting an equal response from objects on the ground, this image would appear in black and white. However, in this image, the blue areas correspond to areas for which the X-Band backscatter is relatively higher than the backscatter at L and C-Bands. This behavior is characteristic of grasslands, clear cuts and shorter vegetation. Similarly, the forested areas have a reddish tint (L-Band). The green areas seen near both the Ammersee and the Pilsensee lakes indicate marshy areas. The agricultural fields in the upper right hand corner appear mostly in blue and green (X-Band and C-Band). The white areas are mostly urban areas, while the smooth surfaces of the lakes appear very dark. SIR-C/X-SAR is part of NASA's Mission to Planet Earth (MTPE). SIR-C/X-SAR radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-Band (24 cm), C-Band (6 cm), and X-Band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory (JPL). X-SAR was developed by the Dornier and Alenia Spazio companies for the German Space Agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian Space Agency, Agenzia Spaziale Italiana (ASI). JPL Photo ID: P-43930

  1. Multi-frequency fine resolution imaging radar instrumentation and data acquisition. [side-looking radar for airborne imagery

    NASA Technical Reports Server (NTRS)

    Rendleman, R. A.; Champagne, E. B.; Ferris, J. E.; Liskow, C. L.; Marks, J. M.; Salmer, R. J.

    1974-01-01

    Development of a dual polarized L-band radar imaging system to be used in conjunction with the present dual polarized X-band radar is described. The technique used called for heterodyning the transmitted frequency from X-band to L-band and again heterodyning the received L-band signals back to X-band for amplification, detection, and recording.

  2. Development of an Automatic Echo-counting Program for HROFFT Spectrograms

    NASA Astrophysics Data System (ADS)

    Noguchi, Kazuya; Yamamoto, Masa-Yuki

    2008-06-01

    Radio meteor observations using Ham-band beacons or FM radio broadcasts with the automatic operating software “Ham-band Radio meteor Observation Fast Fourier Transform” (HROFFT) have recently become widespread. Previously, counting of meteor echoes on the spectrograms of radio meteor observations was performed manually by observers. In the present paper, we introduce an automatic meteor echo counting software application. Although output images of HROFFT contain both the features of meteor echoes and various types of noise, a newly developed image processing technique has been applied, resulting in a useful auto-counting tool. Slight errors remain in processing the spectrograms when the observation site is affected by many sources of interference. Nevertheless, comparison between software and manual counting revealed an agreement of almost 90%. Therefore, we can easily obtain a dataset of detection time, duration, signal strength, and Doppler shift for each meteor echo from the HROFFT spectrograms. Using this software, statistical analyses of meteor activity can be based on the results obtained at many Ham-band Radio meteor Observation (HRO) sites throughout the world, providing a very useful “standard” for monitoring meteor stream activity in real time.
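
    A toy version of the counting step, thresholding a spectrogram and counting connected bright regions with SciPy, is sketched below; the published software additionally rejects several classes of noise, which is not attempted here.

      import numpy as np
      from scipy import ndimage

      def count_echoes(spectrogram, threshold):
          """Count connected regions of the spectrogram that exceed threshold."""
          mask = spectrogram > threshold
          labels, n_regions = ndimage.label(mask)
          return n_regions, labels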

  3. 47 CFR 97.305 - Authorized emission types.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Standards see § 97.307(f), paragraph: MF: 160 m Entire band RTTY, data (3). 160 m Entire band Phone, image (1), (2). HF: 80 m Entire band RTTY, data (3), (9). 75 m Entire band Phone, image (1), (2). 40 m 7.000-7.100 MHz RTTY, data (3), (9) 40 m 7.075-7.100 MHz Phone, image (1), (2), (9), (11) 40 m 7.100-7...

  4. 47 CFR 97.305 - Authorized emission types.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Standards see § 97.307(f), paragraph: MF: 160 m Entire band RTTY, data (3). 160 m Entire band Phone, image (1), (2). HF: 80 m Entire band RTTY, data (3), (9). 75 m Entire band Phone, image (1), (2). 60 m 5...), (9) 40 m 7.075-7.100 MHz Phone, image (1), (2), (9), (11) 40 m 7.100-7.125 MHz RTTY, data (3), (9) 40...

  5. 47 CFR 97.305 - Authorized emission types.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Standards see § 97.307(f), paragraph: MF: 160 m Entire band RTTY, data (3). 160 m Entire band Phone, image (1), (2). HF: 80 m Entire band RTTY, data (3), (9). 75 m Entire band Phone, image (1), (2). 40 m 7.000-7.100 MHz RTTY, data (3), (9) 40 m 7.075-7.100 MHz Phone, image (1), (2), (9), (11) 40 m 7.100-7...

  6. An Optical/Near-infrared Investigation of HD 100546 b with the Gemini Planet Imager and MagAO

    NASA Astrophysics Data System (ADS)

    Rameau, Julien; Follette, Katherine B.; Pueyo, Laurent; Marois, Christian; Macintosh, Bruce; Millar-Blanchaer, Maxwell; Wang, Jason J.; Vega, David; Doyon, René; Lafrenière, David; Nielsen, Eric L.; Bailey, Vanessa; Chilcote, Jeffrey K.; Close, Laird M.; Esposito, Thomas M.; Males, Jared R.; Metchev, Stanimir; Morzinski, Katie M.; Ruffio, Jean-Baptiste; Wolff, Schuyler G.; Ammons, S. M.; Barman, Travis S.; Bulger, Joanna; Cotten, Tara; De Rosa, Robert J.; Duchene, Gaspard; Fitzgerald, Michael P.; Goodsell, Stephen; Graham, James R.; Greenbaum, Alexandra Z.; Hibon, Pascale; Hung, Li-Wei; Ingraham, Patrick; Kalas, Paul; Konopacky, Quinn; Larkin, James E.; Maire, Jérôme; Marchis, Franck; Oppenheimer, Rebecca; Palmer, David; Patience, Jennifer; Perrin, Marshall D.; Poyneer, Lisa; Rajan, Abhijith; Rantakyrö, Fredrik T.; Marley, Mark S.; Savransky, Dmitry; Schneider, Adam C.; Sivaramakrishnan, Anand; Song, Inseok; Soummer, Remi; Thomas, Sandrine; Wallace, J. Kent; Ward-Duong, Kimberly; Wiktorowicz, Sloane

    2017-06-01

    We present H band spectroscopic and Hα photometric observations of HD 100546 obtained with the Gemini Planet Imager and the Magellan Visible AO camera. We detect H band emission at the location of the protoplanet HD 100546 b, but show that the choice of data processing parameters strongly affects the morphology of this source. It appears point-like in some aggressive reductions, but rejoins an extended disk structure in the majority of the others. Furthermore, we demonstrate that this emission appears stationary on a timescale of 4.6 years, inconsistent at the 2σ level with a Keplerian clockwise orbit at 59 au in the disk plane. The H band spectrum of the emission is inconsistent with any type of low effective temperature object or accreting protoplanetary disk. It strongly suggests a scattered-light origin, as this is consistent with the spectrum of the star and the spectra extracted at other locations in the disk. A non-detection at the 5σ level of HD 100546 b in differential Hα imaging places an upper limit, assuming the protoplanet lies in a gap free of extinction, on the accretion luminosity of 1.7 × 10^-4 L_⊙ and on MṀ < 6.3 × 10^-7 M_Jup^2 yr^-1 for 1 R_Jup. These limits are comparable to the accretion luminosity and accretion rate of T-Tauri stars or LkCa 15 b. Taken together, these lines of evidence suggest that the H band source at the location of HD 100546 b is not emitted by a planetary photosphere or an accreting circumplanetary disk but is a disk feature enhanced by the point-spread function subtraction process. This non-detection is consistent with the non-detection in the K band reported in an earlier study but does not exclude the possibility that HD 100546 b is deeply embedded.

  7. Advanced processing for high-bandwidth sensor systems

    NASA Astrophysics Data System (ADS)

    Szymanski, John J.; Blain, Phil C.; Bloch, Jeffrey J.; Brislawn, Christopher M.; Brumby, Steven P.; Cafferty, Maureen M.; Dunham, Mark E.; Frigo, Janette R.; Gokhale, Maya; Harvey, Neal R.; Kenyon, Garrett; Kim, Won-Ha; Layne, J.; Lavenier, Dominique D.; McCabe, Kevin P.; Mitchell, Melanie; Moore, Kurt R.; Perkins, Simon J.; Porter, Reid B.; Robinson, S.; Salazar, Alfonso; Theiler, James P.; Young, Aaron C.

    2000-11-01

    Compute performance and algorithm design are key problems of image processing and scientific computing in general. For example, imaging spectrometers are capable of producing data in hundreds of spectral bands with millions of pixels. These data sets show great promise for remote sensing applications, but require new and computationally intensive processing. The goal of the Deployable Adaptive Processing Systems (DAPS) project at Los Alamos National Laboratory is to develop advanced processing hardware and algorithms for high-bandwidth sensor applications. The project has produced electronics for processing multi- and hyper-spectral sensor data, as well as LIDAR data, while employing processing elements using a variety of technologies. The project team is currently working on reconfigurable computing technology and advanced feature extraction techniques, with an emphasis on their application to image and RF signal processing. This paper presents reconfigurable computing technology and advanced feature extraction algorithm work and their application to multi- and hyperspectral image processing. Related projects on genetic algorithms as applied to image processing will be introduced, as will the collaboration between the DAPS project and the DARPA Adaptive Computing Systems program. Further details are presented in other talks during this conference and in other conferences taking place during this symposium.

  8. Atmospheric correction using near-infrared bands for satellite ocean color data processing in the turbid western Pacific region.

    PubMed

    Wang, Menghua; Shi, Wei; Jiang, Lide

    2012-01-16

    A regional near-infrared (NIR) ocean normalized water-leaving radiance (nL(w)(λ)) model is proposed for atmospheric correction for ocean color data processing in the western Pacific region, including the Bohai Sea, Yellow Sea, and East China Sea. Our motivation for this work is to derive ocean color products in the highly turbid western Pacific region using the Geostationary Ocean Color Imager (GOCI) onboard South Korean Communication, Ocean, and Meteorological Satellite (COMS). GOCI has eight spectral bands from 412 to 865 nm but does not have shortwave infrared (SWIR) bands that are needed for satellite ocean color remote sensing in the turbid ocean region. Based on a regional empirical relationship between the NIR nL(w)(λ) and diffuse attenuation coefficient at 490 nm (K(d)(490)), which is derived from the long-term measurements with the Moderate-resolution Imaging Spectroradiometer (MODIS) on the Aqua satellite, an iterative scheme with the NIR-based atmospheric correction algorithm has been developed. Results from MODIS-Aqua measurements show that ocean color products in the region derived from the new proposed NIR-corrected atmospheric correction algorithm match well with those from the SWIR atmospheric correction algorithm. Thus, the proposed new atmospheric correction method provides an alternative for ocean color data processing for GOCI (and other ocean color satellite sensors without SWIR bands) in the turbid ocean regions of the Bohai Sea, Yellow Sea, and East China Sea, although the SWIR-based atmospheric correction approach is still much preferred. The proposed atmospheric correction methodology can also be applied to other turbid coastal regions.

  9. Visibility through the gaseous smoke in airborne remote sensing using a DSLR camera

    NASA Astrophysics Data System (ADS)

    Chabok, Mirahmad; Millington, Andrew; Hacker, Jorg M.; McGrath, Andrew J.

    2016-08-01

    Visibility and clarity of remotely sensed images acquired by consumer grade DSLR cameras, mounted on an unmanned aerial vehicle or a manned aircraft, are critical factors in obtaining accurate and detailed information from any area of interest. The presence of substantial haze, fog or gaseous smoke particles; caused, for example, by an active bushfire at the time of data capture, will dramatically reduce image visibility and quality. Although most modern hyperspectral imaging sensors are capable of capturing a large number of narrow range bands of the shortwave and thermal infrared spectral range, which have the potential to penetrate smoke and haze, the resulting images do not contain sufficient spatial detail to enable locating important objects or assist search and rescue or similar applications which require high resolution information. We introduce a new method for penetrating gaseous smoke without compromising spatial resolution using a single modified DSLR camera in conjunction with image processing techniques which effectively improves the visibility of objects in the captured images. This is achieved by modifying a DSLR camera and adding a custom optical filter to enable it to capture wavelengths from 480-1200nm (R, G and Near Infrared) instead of the standard RGB bands (400-700nm). With this modified camera mounted on an aircraft, images were acquired over an area polluted by gaseous smoke from an active bushfire. Processed data using our proposed method shows significant visibility improvements compared with other existing solutions.

  10. Techniques for using diazo materials in remote sensor data analysis

    NASA Technical Reports Server (NTRS)

    Whitebay, L. E.; Mount, S.

    1978-01-01

    The use of data derived from LANDSAT is facilitated when special products or computer-enhanced images can be analyzed. However, the facilities required to produce and analyze such products prevent many users from taking full advantage of the LANDSAT data. A simple, low-cost method is presented by which users can make their own specially enhanced composite images from the four-band black-and-white LANDSAT images by using the diazo process. The diazo process is described, and a detailed procedure for making various color composites, such as color infrared, false natural color, and false color, is provided. The advantages and limitations of the diazo process are discussed. A brief discussion of the interpretation of diazo composites for land use mapping, with some typical examples, is included.

  11. Phase Grating Design for a Dual-Band Snapshot Imaging Spectrometer

    NASA Astrophysics Data System (ADS)

    Scholl, James F.; Dereniak, Eustace L.; Descour, Michael R.; Tebow, Christopher P.; Volin, Curtis E.

    2003-01-01

    Infrared spectral features have proved useful in the identification of threat objects. Dual-band focal-plane arrays (FPAs) have been developed in which each pixel consists of superimposed midwave and long-wave photodetectors [Dyer and Tidrow, Conference on Infrared Detectors and Focal Plane Arrays (SPIE, Bellingham, Wash., 1999), pp. 434-440]. Combining dual-band FPAs with imaging spectrometers capable of interband hyperspectral resolution greatly improves spatial target discrimination. The computed-tomography imaging spectrometer (CTIS) [Descour and Dereniak, Appl. Opt. 34, 4817-4826 (1995)] has proved effective in producing hyperspectral images in a single spectral region. Coupling the CTIS with a dual-band detector can produce two hyperspectral data cubes simultaneously. We describe the design of two-dimensional, surface-relief, computer-generated hologram dispersers that permit image information in these two bands to be captured simultaneously.

  12. Narrow band imaging combined with water immersion technique in the diagnosis of celiac disease.

    PubMed

    Valitutti, Francesco; Oliva, Salvatore; Iorfida, Donatella; Aloi, Marina; Gatti, Silvia; Trovato, Chiara Maria; Montuori, Monica; Tiberti, Antonio; Cucchiara, Salvatore; Di Nardo, Giovanni

    2014-12-01

    The "multiple-biopsy" approach both in duodenum and bulb is the best strategy to confirm the diagnosis of celiac disease; however, this increases the invasiveness of the procedure itself and is time-consuming. To evaluate the diagnostic yield of a single biopsy guided by narrow-band imaging combined with water immersion technique in paediatric patients. Prospective assessment of the diagnostic accuracy of narrow-band imaging/water immersion technique-driven biopsy approach versus standard protocol in suspected celiac disease. The experimental approach correctly diagnosed 35/40 children with celiac disease, with an overall diagnostic sensitivity of 87.5% (95% CI: 77.3-97.7). An altered pattern of narrow-band imaging/water immersion technique endoscopic visualization was significantly associated with villous atrophy at guided biopsy (Spearman Rho 0.637, p<0.001). Concordance of narrow-band imaging/water immersion technique endoscopic assessments was high between two operators (K: 0.884). The experimental protocol was highly timesaving compared to the standard protocol. An altered narrow-band imaging/water immersion technique pattern coupled with high anti-transglutaminase antibodies could allow a single guided biopsy to diagnose celiac disease. When no altered mucosal pattern is visible even by narrow-band imaging/water immersion technique, multiple bulbar and duodenal biopsies should be obtained. Copyright © 2014. Published by Elsevier Ltd.

  13. Two- and three-dimensional ultrasound imaging to facilitate detection and targeting of taut bands in myofascial pain syndrome.

    PubMed

    Shankar, Hariharan; Reddy, Sapna

    2012-07-01

    Ultrasound imaging has gained acceptance in pain management interventions. Features of myofascial pain syndrome have been explored using ultrasound imaging and elastography. There is a paucity of reports showing the benefit clinically. This report provides three-dimensional features of taut bands and highlights the advantages of using two-dimensional ultrasound imaging to improve targeting of taut bands in deeper locations. Fifty-eight-year-old man with pain and decreased range of motion of the right shoulder was referred for further management of pain above the scapula after having failed conservative management for myofascial pain syndrome. Three-dimensional ultrasound images provided evidence of aberrancy in the architecture of the muscle fascicles around the taut bands compared to the adjacent normal muscle tissue during serial sectioning of the accrued image. On two-dimensional ultrasound imaging over the palpated taut band, areas of hyperechogenicity were visualized in the trapezius and supraspinatus muscles. Subsequently, the patient received ultrasound-guided real-time lidocaine injections to the trigger points with successful resolution of symptoms. This is a successful demonstration of utility of ultrasound imaging of taut bands in the management of myofascial pain syndrome. Utility of this imaging modality in myofascial pain syndrome requires further clinical validation. Wiley Periodicals, Inc.

  14. Quantification of fibre polymerization through Fourier space image analysis

    PubMed Central

    Nekouzadeh, Ali; Genin, Guy M.

    2011-01-01

    Quantification of changes in the total length of randomly oriented and possibly curved lines appearing in an image is a necessity in a wide variety of biological applications. Here, we present an automated approach based upon Fourier space analysis. Scaled, band-pass filtered power spectral densities of greyscale images are integrated to provide a quantitative measurement of the total length of lines of a particular range of thicknesses appearing in an image. A procedure is presented to correct for changes in image intensity. The method is most accurate for two-dimensional processes with fibres that do not occlude one another. PMID:24959096
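
    A minimal NumPy sketch of the central computation, integrating the band-pass-filtered power spectral density of a greyscale image between two spatial-frequency bounds, is given below; the intensity-correction procedure mentioned in the record is omitted.

      import numpy as np

      def fibre_length_metric(image, f_low, f_high):
          """Sum of the power spectral density within an annulus of spatial
          frequencies (cycles/pixel) bracketing the fibre thickness of interest."""
          img = image.astype(np.float64)
          img -= img.mean()
          psd = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
          fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))
          fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))
          radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
          band = (radius >= f_low) & (radius <= f_high)
          return psd[band].sum()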

  15. Application of asymmetric mapping and selective filtering (AM and SF) method to Cosmo/SkyMed images by implementation of a selective blocks approach for ship detection optimization in SEASAFE framework

    NASA Astrophysics Data System (ADS)

    Loreggia, D.; Tataranni, F.; Trivero, P.; Biamino, W.; Di Matteo, L.

    2017-10-01

    We present the implementation of a procedure that adapts an Asymmetric Wiener Filtering (AWF) methodology, aimed at detecting and discarding ghost signals due to azimuth ambiguities in SAR images, to the case of X-band Cosmo Sky Med (CSK) images within the framework of the SEASAFE (Slick Emissions And Ship Automatic Features Extraction) project, developed at the Department of Science and Technology Innovation of the University of Piemonte Orientale, Alessandria, Italy. SAR is a useful tool for daily and nightly monitoring of the sea surface in all weather conditions. The SEASAFE project is a software platform developed in the IDL language that is able to process C- and X-band SAR images with enhanced algorithm modules for land masking, sea pollution (oil spills) and ship detection; wind and wave evaluation are also available. In this context, the need to identify and discard false alarms is a critical requirement. Azimuth ambiguity is one of the main causes of false alarms in the ship detection procedure. Many methods for dealing with this problem have been proposed in the recent literature. After a review of different approaches to this problem, we describe the procedure to adapt the AWF approach presented in [1,2] to the case of X-band CSK images by implementing a selective blocks approach.

  16. Low SWaP multispectral sensors using dichroic filter arrays

    NASA Astrophysics Data System (ADS)

    Dougherty, John; Varghese, Ron

    2015-06-01

    The benefits of multispectral imaging are well established in a variety of applications including remote sensing, authentication, satellite and aerial surveillance, machine vision, biomedical, and other scientific and industrial uses. However, many of the potential solutions require more compact, robust, and cost-effective cameras to realize these benefits. The next generation of multispectral sensors and cameras needs to deliver improvements in size, weight, power, portability, and spectral band customization to support widespread deployment for a variety of purpose-built aerial, unmanned, and scientific applications. A novel implementation uses micro-patterning of dichroic filters1 into Bayer and custom mosaics, enabling true real-time multispectral imaging with simultaneous multi-band image acquisition. Consistent with color image processing, individual spectral channels are de-mosaiced with each channel providing an image of the field of view. This approach can be implemented across a variety of wavelength ranges and on a variety of detector types including linear, area, silicon, and InGaAs. This dichroic filter array approach can also reduce payloads and increase range for unmanned systems, with the capability to support both handheld and autonomous systems. Recent examples and results of 4 band RGB + NIR dichroic filter arrays in multispectral cameras are discussed. Benefits and tradeoffs of multispectral sensors using dichroic filter arrays are compared with alternative approaches - including their passivity, spectral range, customization options, and scalable production.
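
    Per-channel extraction from a filter mosaic follows the same logic as Bayer demosaicing. The sketch below splits an assumed 2x2 mosaic (for example R, G, B, NIR) into four quarter-resolution channel images; the actual mosaic layouts used with these dichroic filter arrays may differ.

      import numpy as np

      def demosaic_2x2(raw):
          """Split a 2x2 filter-mosaic frame into its four channel images
          (assumed layout; returns quarter-resolution channels)."""
          return {
              "ch00": raw[0::2, 0::2],
              "ch01": raw[0::2, 1::2],
              "ch10": raw[1::2, 0::2],
              "ch11": raw[1::2, 1::2],
          }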

  17. Proceedings of the Eleventh International Symposium on Remote Sensing of Environment, volume 2. [application and processing of remotely sensed data

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Application and processing of remotely sensed data are discussed. Areas of application include: pollution monitoring, water quality, land use, marine resources, ocean surface properties, and agriculture. Image processing and scene analysis are described along with automated photointerpretation and classification techniques. Data from infrared and multispectral band scanners onboard LANDSAT satellites are emphasized.

  18. Time course of gamma-band oscillation associated with face processing in the inferior occipital gyrus and fusiform gyrus: A combined fMRI and MEG study.

    PubMed

    Uono, Shota; Sato, Wataru; Kochiyama, Takanori; Kubota, Yasutaka; Sawada, Reiko; Yoshimura, Sayaka; Toichi, Motomi

    2017-04-01

    Debate continues over whether the inferior occipital gyrus (IOG) or the fusiform gyrus (FG) represents the first stage of face processing and what role these brain regions play. We investigated this issue by combining functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) in normal adults. Participants passively observed upright and inverted faces and houses. First, we identified the IOG and FG as face-specific regions using fMRI. We applied beamforming source reconstruction and time-frequency analysis to MEG source signals to reveal the time course of gamma-band activations in these regions. The results revealed that the right IOG showed higher gamma-band activation in response to upright faces than to upright houses at 100 ms from the stimulus onset. Subsequently, the right FG showed greater gamma-band response to upright faces versus upright houses at around 170 ms. The gamma-band activation in the right IOG and right FG was larger in response to inverted faces than to upright faces at the later time window. These results suggest that (1) gamma-band activity occurs rapidly, first in the IOG and then in the FG, and (2) the gamma-band activity in the right IOG at later time stages is involved in configural processing of faces. Hum Brain Mapp 38:2067-2079, 2017. © 2017 Wiley Periodicals, Inc.

  19. High Throughput Multispectral Image Processing with Applications in Food Science.

    PubMed

    Tsakanikas, Panagiotis; Pavlidis, Dimitris; Nychas, George-John

    2015-01-01

    Machine vision has recently been gaining attention in food science and in the food industry for food quality assessment and monitoring. Within the framework of implementing Process Analytical Technology (PAT) in the food industry, image processing can be used not only for estimating and even predicting food quality but also for detecting adulteration. Toward these applications in food science, we present here a novel methodology for automated image analysis of several kinds of food products, e.g. meat, vanilla crème and table olives, so as to increase objectivity, data reproducibility and low-cost information extraction and to speed up quality assessment, without human intervention. The outcome of image processing is propagated to the downstream analysis. The developed multispectral image processing method is based on an unsupervised machine learning approach (Gaussian Mixture Models) and a novel unsupervised scheme of spectral band selection for optimizing the segmentation process. In our evaluation we demonstrate its efficiency and robustness compared with currently available semi-manual software, showing that the developed method is a high-throughput approach appropriate for massive data extraction from food samples.
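
    A minimal sketch of a Gaussian-mixture segmentation of a multispectral cube is shown below, assuming scikit-learn is available; the class count and optional band subset are placeholders, and the authors' band-selection scheme itself is not reproduced here.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def gmm_segment(cube, n_classes=3, bands=None):
          # cube: rows x cols x bands array; `bands` optionally restricts
          # the fit to a selected subset of spectral bands.
          if bands is not None:
              cube = cube[..., bands]
          h, w, _ = cube.shape
          X = cube.reshape(-1, cube.shape[-1]).astype(float)
          gmm = GaussianMixture(n_components=n_classes,
                                covariance_type="full",
                                random_state=0).fit(X)
          return gmm.predict(X).reshape(h, w)     # per-pixel class labels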

  20. Sentinel-2 ArcGIS Tool for Environmental Monitoring

    NASA Astrophysics Data System (ADS)

    Plesoianu, Alin; Cosmin Sandric, Ionut; Anca, Paula; Vasile, Alexandru; Calugaru, Andreea; Vasile, Cristian; Zavate, Lucian

    2017-04-01

    This paper addresses one of the biggest challenges regarding Sentinel-2 data: the need for an efficient tool to access and process the large collection of available images. Consequently, developing a tool for the automation of Sentinel-2 data analysis is the most immediate need. We developed a series of tools for automating Sentinel-2 data download and processing for vegetation health monitoring. The tools automatically perform the following operations: downloading image tiles from ESA's Scientific Hub or other vendors (e.g. Amazon), pre-processing the images to extract the 10-m bands, creating image composites, applying a series of vegetation indices (NDVI, OSAVI, etc.) and performing change detection analyses on different temporal data sets. All of these tools run dynamically in the ArcGIS Platform, without the need to create intermediate datasets (rasters, layers), as the images are processed on-the-fly to avoid data duplication. Finally, they allow complete integration with the ArcGIS environment and workflows.
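
    For illustration, an NDVI computed from the Sentinel-2 10-m bands (B04 red, B08 near-infrared) can be sketched as below with NumPy; this is a generic band-math example, not the ArcGIS tool itself.

      import numpy as np

      def ndvi(red_b04, nir_b08):
          # Inputs are reflectance arrays from the 10-m bands B04 and B08.
          red = red_b04.astype(float)
          nir = nir_b08.astype(float)
          return (nir - red) / np.maximum(nir + red, 1e-6)  # avoid divide-by-zero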

  1. Waves in Airglow

    NASA Image and Video Library

    2017-12-08

    In April 2012, waves in Earth’s “airglow” spread across the nighttime skies of northern Texas like ripples in a pond. In this case, the waves were provoked by a massive thunderstorm. Airglow is a layer of nighttime light emissions caused by chemical reactions high in Earth’s atmosphere. A variety of reactions involving oxygen, sodium, ozone and nitrogen result in the production of a very faint amount of light. In fact, it’s approximately one billion times fainter than sunlight (~10⁻¹¹ to 10⁻⁹ W·cm⁻²·sr⁻¹). This chemiluminescence is similar to the chemical reactions that light up a glow stick or glow-in-the-dark silly putty. The “day-night band” of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi NPP satellite captured these glowing ripples in the night sky on April 15, 2012 (top image). The day-night band detects lights over a range of wavelengths from green to near-infrared and uses highly sensitive electronics to observe low light signals. (The absolute minimum signals detectable are at the levels of nightglow emission.) The lower image shows the thunderstorm as observed by a thermal infrared band on VIIRS. This thermal band, which is sensitive only to heat emissions (cold clouds appear white), is not sensitive to the subtle visible-light wave structures seen by the day-night band. Technically speaking, airglow occurs at all times. During the day it is called “dayglow,” at twilight “twilightglow,” and at night “nightglow.” There are slightly different processes taking place in each case, but in the image above the source of light is nightglow. The strongest nightglow emissions are mostly constrained to a relatively thin layer of atmosphere between 85 and 95 kilometers (53 and 60 miles) above the Earth’s surface. Little emission occurs below this layer since there’s a higher concentration of molecules, allowing for dissipation of chemical energy via collisions rather than light production. Likewise, little emission occurs above that layer because the atmospheric density is so tenuous that there are too few light-emitting reactions to yield an appreciable amount of light. Suomi NPP is in orbit around Earth at 834 kilometers (about 518 miles), well above the nightglow layer. The day-night band imagery therefore contains signals from the direct upward emission of the nightglow layer and from the reflection of the downward nightglow emissions by clouds and the Earth’s surface. The presence of these nightglow waves is a graphic visualization of the usually unseen energy transfer processes that occur continuously between the lower and upper atmosphere. While nightglow is a well-known phenomenon, it’s not typically considered by Earth-viewing meteorological sensors. In fact, scientists were surprised at Suomi NPP’s ability to detect it. During the satellite’s check-out procedure, this unanticipated source of visible light was thought to indicate a problem with the sensor until scientists realized that what they were seeing was the faintest of light in the darkness of night. NASA Earth Observatory image by Jesse Allen and Robert Simmon, using VIIRS Day-Night Band data from the Suomi National Polar-orbiting Partnership. Suomi NPP is the result of a partnership between NASA, the National Oceanic and Atmospheric Administration, and the Department of Defense. Caption by Aries Keck and Steve Miller. Instrument: Suomi NPP - VIIRS Credit: NASA Earth Observatory

  2. Tri-band optical coherence tomography for lipid and vessel spectroscopic imaging

    NASA Astrophysics Data System (ADS)

    Yu, Luoqin; Kang, Jiqiang; Wang, Xie; Wei, Xiaoming; Chan, Kin-Tak; Lee, Nikki P.; Wong, Kenneth K. Y.

    2016-03-01

    Optical coherence tomography (OCT) has been utilized for various functional imaging applications. One of its highlights is spectroscopic imaging, which can simultaneously obtain both morphologic and spectroscopic information. Assisting diagnosis and therapeutic intervention of coronary artery disease is one of the major directions in spectroscopic OCT applications. Previously, Tanaka et al. developed a spectral domain OCT (SDOCT) system to image lipid distribution within blood vessels [1]. In the meantime, Fleming et al. demonstrated optical frequency domain imaging (OFDI) with a 1.3-μm swept source and a quadratic discriminant analysis model [2]. However, these systems suffered from burdensome computation, as the variation in optical properties was calculated from single-band illumination that provided limited contrast. On the other hand, multi-band OCT facilitates contrast enhancement with separated wavelength bands, which offers an easier way to distinguish different materials. Federici and Dubois [3] and Tsai and Chan [4] have demonstrated tri-band OCT systems to further enhance the image contrast. However, these previous works left the functional properties under-explored. Our group has reported a dual-band OCT system based on a parametrically amplified Fourier domain mode-locked (FDML) laser with a time-multiplexing scheme [5] and a dual-band FDML laser OCT system with wavelength-division multiplexing [6]. A fiber optical parametric amplifier (OPA) can be ideally incorporated in a multi-band spectroscopic OCT system as it has a broad amplification window and offers an additional output range at the idler band, which is phase matched with the signal band. The sweeping ranges can thus extend beyond the traditional wavelength bands limited by intra-cavity amplifiers in FDML lasers. Here, we combine the dual-band FDML laser with fiber OPA, which consequently renders a simultaneous tri-band output at 1.3, 1.5, and 1.6 μm for intravascular applications. Lipid and blood vessel distribution can subsequently be visualized with the tri-band OCT system in ex vivo experiments using a porcine artery model with artificial lipid plaques.

  3. First Science Verification of the VLA Sky Survey Pilot

    NASA Astrophysics Data System (ADS)

    Cavanaugh, Amy

    2017-01-01

    My research involved analyzing test images by Steve Myers for the upcoming VLA Sky Survey. This survey will cover the entire sky visible from the VLA site in S band (2-4 GHz). The VLA will be in B configuration for the survey, as it was when the test images were produced, meaning a resolution of approximately 2.5 arcseconds. Conducted using On-the-Fly mode, the survey will have a speed of approximately 20 deg² hr⁻¹ (including overhead). New Python imaging scripts are being developed and improved to process the VLASS images. My research consisted of comparing a continuum test image over S band (from the new imaging scripts) to two previous images of the same region of the sky (from the CNSS and FIRST surveys), as well as comparing the continuum image to single spectral windows (from the new imaging scripts and of the same sky region). By comparing our continuum test image to images from CNSS and FIRST, we tested On-the-Fly mode and the imaging script used to produce our images. Another goal was to test whether individual spectral windows could be used in combination to calculate spectral indices close to those produced over S band (based only on our continuum image). Our continuum image contained 64 sources as opposed to the 99 sources found in the CNSS image. The CNSS image also had a lower noise level (0.095 mJy/beam compared to 0.119 mJy/beam). Additionally, when our continuum image was compared to the CNSS image, the separation showed no dependence on total flux density (in our continuum image). At lower flux densities, sources in our image were brighter than the same ones in the CNSS image. When our continuum image was compared to the FIRST catalog, the spectral index difference showed no dependence on total flux (in our continuum image). In conclusion, the quality of our images did not completely match the quality of the CNSS and FIRST images. More work is needed in developing the new imaging scripts.
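
    The spectral-index comparison mentioned above follows from the usual power-law convention S ∝ ν^α, so two flux-density measurements at different frequencies give α directly; the sketch below is a generic helper with made-up example numbers, not the VLASS pipeline.

      import math

      def spectral_index(s1, s2, nu1, nu2):
          # alpha such that S is proportional to nu**alpha; s1, s2 are flux
          # densities measured at frequencies nu1, nu2 (consistent units).
          return math.log(s1 / s2) / math.log(nu1 / nu2)

      # Example (illustrative values only): fluxes of 1.2 and 1.0 mJy measured
      # in spectral windows centred at 2.5 and 3.5 GHz.
      alpha = spectral_index(1.2, 1.0, 2.5e9, 3.5e9)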

  4. Investigation into Cloud Computing for More Robust Automated Bulk Image Geoprocessing

    NASA Technical Reports Server (NTRS)

    Brown, Richard B.; Smoot, James C.; Underwood, Lauren; Armstrong, C. Duane

    2012-01-01

    Geospatial resource assessments frequently require timely geospatial data processing that involves large multivariate remote sensing data sets. In particular, for disasters, response requires rapid access to large data volumes, substantial storage space and high performance processing capability. The processing and distribution of this data into usable information products requires a processing pipeline that can efficiently manage the required storage, computing utilities, and data handling requirements. In recent years, with the availability of cloud computing technology, cloud processing platforms have made available a powerful new computing infrastructure resource that can meet this need. To assess the utility of this resource, this project investigates cloud computing platforms for bulk, automated geoprocessing capabilities with respect to data handling and application development requirements. This presentation describes work being conducted by the Applied Sciences Program Office at NASA Stennis Space Center. A prototypical set of image manipulation and transformation processes that incorporate sample Unmanned Airborne System data were developed to create value-added products and tested for implementation on the "cloud". This project outlines the steps involved in creating and testing open source process code developed on a local prototype platform, and then transitioning this code with associated environment requirements into an analogous, but memory- and processor-enhanced, cloud platform. A data processing cloud was used to store both standard digital camera panchromatic and multi-band image data, which were subsequently subjected to standard image processing functions such as NDVI (Normalized Difference Vegetation Index), NDMI (Normalized Difference Moisture Index), band stacking, reprojection, and other similar data processes. Cloud infrastructure service providers were evaluated by taking these locally tested processing functions and applying them to a given cloud-enabled infrastructure to assess and compare environment setup options and enabled technologies. This project reviews findings that were observed when cloud platforms were evaluated for bulk geoprocessing capabilities based on data handling and application development requirements.

  5. Investigation of Parallax Issues for Multi-Lens Multispectral Camera Band Co-Registration

    NASA Astrophysics Data System (ADS)

    Jhan, J. P.; Rau, J. Y.; Haala, N.; Cramer, M.

    2017-08-01

    Multi-lens multispectral cameras (MSCs), such as the Micasense RedEdge and Parrot Sequoia, record multispectral information through separate lenses. Their light weight and small size make them well suited for mounting on an Unmanned Aerial System (UAS) to collect high-spatial-resolution images for vegetation investigation. However, because the multi-sensor geometry of the multi-lens structure induces significant band misregistration in the original images, band co-registration is necessary to obtain accurate spectral information. A robust and adaptive band-to-band image transform (RABBIT) is proposed for band co-registration of multi-lens MSCs. The first step is to obtain the camera rig information from camera system calibration and use the calibrated results for image transformation and lens distortion correction. Since calibration uncertainty leads to different amounts of systematic error, the last step optimizes the results in order to achieve better co-registration accuracy. Because parallax can cause significant band misregistration when images are acquired closer to the targets, four datasets acquired from the RedEdge and Sequoia, including aerial and close-range imagery, were used to evaluate the performance of RABBIT. The results for aerial images show that RABBIT can achieve sub-pixel accuracy, suitable for band co-registration of any multi-lens MSC. The close-range results show the same performance when band co-registration is focused on a specific target for 3D modelling, or when the target is equidistant from the camera.
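
    For context, a generic feature-based band co-registration step can be sketched as below with OpenCV; this is not the RABBIT method itself (which also exploits the calibrated camera-rig geometry and lens-distortion model), and it assumes 8-bit single-band images with enough common texture for ORB matching.

      import cv2
      import numpy as np

      def coregister_band(src_band, ref_band):
          # Estimate a RANSAC homography from ORB feature matches and warp
          # the source band onto the reference band's pixel grid.
          orb = cv2.ORB_create(2000)
          k1, d1 = orb.detectAndCompute(src_band, None)
          k2, d2 = orb.detectAndCompute(ref_band, None)
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:500]
          src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
          dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
          H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
          h, w = ref_band.shape[:2]
          return cv2.warpPerspective(src_band, H, (w, h))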

  6. Configuration and specifications of an Unmanned Aerial Vehicle (UAV) for early site specific weed management.

    PubMed

    Torres-Sánchez, Jorge; López-Granados, Francisca; De Castro, Ana Isabel; Peña-Barragán, José Manuel

    2013-01-01

    A new aerial platform for image acquisition has recently emerged: the Unmanned Aerial Vehicle (UAV). This article describes the technical specifications and configuration of a UAV used to capture remote images for early-season site-specific weed management (ESSWM). Image spatial and spectral properties required for weed seedling discrimination were also evaluated. Two different sensors, a still visible camera and a six-band multispectral camera, and three flight altitudes (30, 60 and 100 m) were tested over a naturally infested sunflower field. The main phases of the UAV workflow were the following: 1) mission planning, 2) UAV flight and image acquisition, and 3) image pre-processing. Three different aspects were needed to plan the route: flight area, camera specifications and UAV tasks. The pre-processing phase included the correct alignment of the six bands of the multispectral imagery and the orthorectification and mosaicking of the individual images captured in each flight. The image pixel size, area covered by each image and flight timing were very sensitive to flight altitude. At a lower altitude, the UAV captured images of finer spatial resolution, although the number of images needed to cover the whole field may be a limiting factor due to the energy required for a greater flight length and the computational requirements of the subsequent mosaicking process. Spectral differences between weeds, crop and bare soil were significant in the vegetation indices studied (Excess Green Index, Normalised Green-Red Difference Index and Normalised Difference Vegetation Index), mainly at a 30 m altitude. However, greater spectral separability was obtained between vegetation and bare soil with the NDVI index. These results suggest that a balance between spectral and spatial resolution is needed to optimise the flight mission according to each agronomic objective, as affected by the size of the smallest object to be discriminated (weed plants or weed patches).
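
    One of the indices named above, the Excess Green Index, reduces to simple band arithmetic on normalised chromatic coordinates; the NumPy sketch below is a generic illustration and assumes an H x W x 3 visible-band image.

      import numpy as np

      def excess_green(rgb):
          # ExG = 2g - r - b on chromatic coordinates with r + g + b = 1.
          rgb = rgb.astype(float)
          total = np.maximum(rgb.sum(axis=2), 1e-6)
          r, g, b = (rgb[..., i] / total for i in range(3))
          return 2 * g - r - b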

  7. Configuration and Specifications of an Unmanned Aerial Vehicle (UAV) for Early Site Specific Weed Management

    PubMed Central

    Torres-Sánchez, Jorge; López-Granados, Francisca; De Castro, Ana Isabel; Peña-Barragán, José Manuel

    2013-01-01

    A new aerial platform for image acquisition has recently emerged: the Unmanned Aerial Vehicle (UAV). This article describes the technical specifications and configuration of a UAV used to capture remote images for early-season site-specific weed management (ESSWM). Image spatial and spectral properties required for weed seedling discrimination were also evaluated. Two different sensors, a still visible camera and a six-band multispectral camera, and three flight altitudes (30, 60 and 100 m) were tested over a naturally infested sunflower field. The main phases of the UAV workflow were the following: 1) mission planning, 2) UAV flight and image acquisition, and 3) image pre-processing. Three different aspects were needed to plan the route: flight area, camera specifications and UAV tasks. The pre-processing phase included the correct alignment of the six bands of the multispectral imagery and the orthorectification and mosaicking of the individual images captured in each flight. The image pixel size, area covered by each image and flight timing were very sensitive to flight altitude. At a lower altitude, the UAV captured images of finer spatial resolution, although the number of images needed to cover the whole field may be a limiting factor due to the energy required for a greater flight length and the computational requirements of the subsequent mosaicking process. Spectral differences between weeds, crop and bare soil were significant in the vegetation indices studied (Excess Green Index, Normalised Green-Red Difference Index and Normalised Difference Vegetation Index), mainly at a 30 m altitude. However, greater spectral separability was obtained between vegetation and bare soil with the NDVI index. These results suggest that a balance between spectral and spatial resolution is needed to optimise the flight mission according to each agronomic objective, as affected by the size of the smallest object to be discriminated (weed plants or weed patches). PMID:23483997

  8. Raman Hyperspectral Imaging for Detection of Watermelon Seeds Infected with Acidovorax citrulli.

    PubMed

    Lee, Hoonsoo; Kim, Moon S; Qin, Jianwei; Park, Eunsoo; Song, Yu-Rim; Oh, Chang-Sik; Cho, Byoung-Kwan

    2017-09-23

    The bacterial infection of seeds is one of the most important quality factors affecting yield. Conventional detection methods for bacteria-infected seeds, such as biological, serological, and molecular tests, are not feasible since they require expensive equipment, and furthermore, the testing processes are also time-consuming. In this study, we use the Raman hyperspectral imaging technique to distinguish bacteria-infected seeds from healthy seeds as a rapid, accurate, and non-destructive detection tool. We utilize Raman hyperspectral imaging data in the spectral range of 400-1800 cm⁻¹ to determine the optimal band-ratio for the discrimination of watermelon seeds infected by the bacteria Acidovorax citrulli using ANOVA. Two bands at 1076.8 cm⁻¹ and 437 cm⁻¹ are selected as the optimal Raman peaks for the detection of bacteria-infected seeds. The results demonstrate that the Raman hyperspectral imaging technique has a good potential for the detection of bacteria-infected watermelon seeds and that it could form a suitable alternative to conventional methods.
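
    The band-ratio ANOVA described above can be sketched generically as below with SciPy; the spectra, band indices and class groupings are placeholders, and the preprocessing applied in the study is omitted.

      import numpy as np
      from scipy.stats import f_oneway

      def band_ratio_anova(healthy_spectra, infected_spectra, i_num, i_den):
          # Each *_spectra array is (n_samples x n_wavenumbers); i_num and
          # i_den index the two Raman peaks forming the band ratio
          # (e.g. the channels nearest 1076.8 and 437 cm^-1).
          ratio_h = healthy_spectra[:, i_num] / healthy_spectra[:, i_den]
          ratio_i = infected_spectra[:, i_num] / infected_spectra[:, i_den]
          return f_oneway(ratio_h, ratio_i)   # F statistic and p-value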

  9. Raman Hyperspectral Imaging for Detection of Watermelon Seeds Infected with Acidovorax citrulli

    PubMed Central

    Lee, Hoonsoo; Kim, Moon S.; Qin, Jianwei; Park, Eunsoo; Song, Yu-Rim; Oh, Chang-Sik

    2017-01-01

    The bacterial infection of seeds is one of the most important quality factors affecting yield. Conventional detection methods for bacteria-infected seeds, such as biological, serological, and molecular tests, are not feasible since they require expensive equipment, and furthermore, the testing processes are also time-consuming. In this study, we use the Raman hyperspectral imaging technique to distinguish bacteria-infected seeds from healthy seeds as a rapid, accurate, and non-destructive detection tool. We utilize Raman hyperspectral imaging data in the spectral range of 400–1800 cm−1 to determine the optimal band-ratio for the discrimination of watermelon seeds infected by the bacteria Acidovorax citrulli using ANOVA. Two bands at 1076.8 cm−1 and 437 cm−1 are selected as the optimal Raman peaks for the detection of bacteria-infected seeds. The results demonstrate that the Raman hyperspectral imaging technique has a good potential for the detection of bacteria-infected watermelon seeds and that it could form a suitable alternative to conventional methods. PMID:28946608

  10. An Evaluation of ALOS Data in Disaster Applications

    NASA Astrophysics Data System (ADS)

    Igarashi, Tamotsu; Furuta, Ryoich; Ono, Makoto

    ALOS is the Advanced Land Observing Satellite, providing image data from three onboard sensors: PRISM, AVNIR-2 and PALSAR. PRISM is a high-resolution panchromatic three-line stereo scanner for characterizing the Earth's surface. Its positional accuracy and the height accuracy of the derived Digital Surface Model (DSM) are high, so geographic information extraction for disaster applications is improved by providing images of the disaster area. In particular, pan-sharpened 3D images composed from PRISM and data from AVNIR-2, the four-band visible/near-infrared radiometer, are expected to provide information for understanding geographic and topographic features. PALSAR is an advanced multi-functional synthetic aperture radar (SAR) operating in L-band, appropriate for land-surface characterization. PALSAR offers many improvements over JERS-1/SAR, such as higher sensitivity, higher resolution, and polarimetric and ScanSAR observation modes, and it is also suitable for SAR interferometry processing. This paper describes the evaluation of ALOS data characteristics from the viewpoint of disaster applications, through several application exercises.

  11. A Baseline-Free Defect Imaging Technique in Plates Using Time Reversal of Lamb Waves

    NASA Astrophysics Data System (ADS)

    Hyunjo, Jeong; Sungjong, Cho; Wei, Wei

    2011-06-01

    We present an analytical investigation of baseline-free imaging of a defect in plate-like structures using the time reversal of Lamb waves. We first consider the flexural wave (A0 mode) propagation in a plate containing a defect, and the reception and time-reversal processing of the output signal at the receiver. The received output signal is composed of two parts: a directly propagated wave and a wave scattered from the defect. The time reversal of these waves recovers the original input signal and produces two additional sidebands that contain the time-of-flight information on the defect location. One of the sideband signals is then extracted as a pure defect signal. A defect localization image is then constructed using a beamforming technique based on time-frequency analysis of the sideband signal for each transducer pair in a network of sensors. The simulation results show that the proposed scheme enables accurate, baseline-free imaging of a defect.
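
    The beamforming step can be illustrated with a generic delay-and-sum sketch: for every image pixel, the time of flight from transmitter to pixel to receiver is computed and the extracted defect-signal envelope is sampled at that delay and summed over transducer pairs. This is a simplified stand-in for the authors' time-frequency implementation; the sensor layout, wave speed and sampling rate are assumed inputs.

      import numpy as np

      def delay_and_sum_image(signals, pairs, sensors, grid_x, grid_y, c, fs):
          # signals[k]: envelope of the extracted sideband (defect) signal for
          # pair k = (tx, rx); sensors maps a sensor id to its (x, y) position;
          # c is the A0-mode group velocity, fs the sampling rate.
          img = np.zeros((len(grid_y), len(grid_x)))
          for (tx, rx), sig in zip(pairs, signals):
              txp, rxp = np.asarray(sensors[tx]), np.asarray(sensors[rx])
              for iy, y in enumerate(grid_y):
                  for ix, x in enumerate(grid_x):
                      p = np.array([x, y])
                      tof = (np.linalg.norm(p - txp) + np.linalg.norm(p - rxp)) / c
                      idx = int(round(tof * fs))
                      if idx < len(sig):
                          img[iy, ix] += sig[idx]
          return img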

  12. Monitoring of mountain glaciers of some regions of gissar-alai mountain system using aster space images

    NASA Astrophysics Data System (ADS)

    Batirov, R.; Yakovlev, A.

    In 1999 the TERRA orbital platform was launched. It is intended for space-based monitoring of various natural objects on the Earth's surface, and in particular of glaciers. The Japanese ASTER sensor was installed onboard the platform, and its characteristics provide a unique possibility for monitoring glaciers from space. In the present work, glaciers of several river basins of the Alai, Turkestan and Zeravshan ranges of the Gissar-Alai mountain system, which in turn is part of the Pamir-Alai mountain system, were catalogued. In particular, the glaciers of the Shahimardan, Sokh and Isfara river basins, and of the basin of the Zeravshan glacier, were catalogued. Thematic processing was carried out for images acquired in the second half of August of 2001 and 2002. The images were granted in the framework of the ASTER Research Opportunity Scheme (ARO) of the Japanese space agency ERSDAC ("Monitoring of mountain glaciers and glacial lakes using ASTER space images", contract AP-0290). Previous data on the glaciation of this region were obtained for 1957 and 1980 using aerial photography (1957) and analogue space images (1980). The ASTER sensor surveys the Earth's surface in 14 bands of the electromagnetic spectrum, from the visible to the thermal infrared. The following three bands are optimal for extracting glaciological information: Band 1 (visible green), 0.52-0.60 microns; Band 2 (visible red), 0.63-0.69 microns; Band 3N (near infrared), 0.78-0.86 microns. The spatial resolution of these bands is 15 m, and the radiometric resolution is 8 bits. Such geometric and radiometric resolution provides acceptable accuracy in delineating glaciers. When composing the pseudo-color image, red was assigned to Band 1, green to Band 2 and blue to Band 3N; this selection of bands gives the best combination of colors for recognizing glaciers. According to the 2001 data, the aggregate area of the glaciers of the Gissar-Alai study region amounted to 482.5 km². In 1980 and 1957 the aggregate area of the glaciers of these basins was 511.4 and 572.0 km², respectively. Although global climate warming has continued from the middle of the 20th century to the present, the mean annual rate of glacier degradation for 1980-2001 is roughly half that for 1957-1980: 0.27% per year versus 0.46% per year. The climatic situation prevailing in the second half of the 20th century has been extremely unfavorable for the glaciation of the Gissar-Alai, and of the Pamir-Alai as a whole. Over the last 45 years, the glaciers of the studied river basins have lost about 16% of their initial area.
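
    The pseudo-color band assignment described above (red from Band 1, green from Band 2, blue from Band 3N) amounts to a simple stretched band stack; the NumPy sketch below uses a naive min-max stretch for illustration rather than the exact contrast stretch used in the study.

      import numpy as np

      def aster_pseudocolor(band1, band2, band3n):
          # Independently stretch each band to 0-1 and stack as an RGB image
          # (red <- band 1, green <- band 2, blue <- band 3N).
          def stretch(b):
              b = b.astype(float)
              return (b - b.min()) / max(b.max() - b.min(), 1e-6)
          return np.dstack([stretch(band1), stretch(band2), stretch(band3n)])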

  13. Quality Characterization of Silicon Bricks using Photoluminescence Imaging and Photoconductive Decay: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, S.; Yan, F.; Zaunbrecher, K.

    2012-06-01

    Imaging techniques can be applied to multicrystalline silicon solar cells throughout the production process, starting as early as when the bricks are cut from the cast ingot. Photoluminescence (PL) imaging of the band-to-band radiative recombination is used to characterize silicon quality and defect regions within the brick. PL images of the brick surfaces are compared to minority-carrier lifetimes measured by resonant-coupled photoconductive decay (RCPCD). Photoluminescence images on silicon bricks can be correlated to lifetime measured by photoconductive decay and could be used for high-resolution characterization of material before wafers are cut. The RCPCD technique has shown the longest lifetimes of any of the lifetime measurement techniques we have applied to the bricks. RCPCD benefits from the low-frequency and long-excitation wavelengths used. In addition, RCPCD is a transient technique that directly monitors the decay rate of photoconductivity and does not rely on models or calculations for lifetime. The measured lifetimes over brick surfaces have shown strong correlations to the PL image intensities; therefore, this correlation could then be used to transform the PL image into a high-resolution lifetime map.

  14. Using spectral imaging for the analysis of abnormalities for colorectal cancer: When is it helpful?

    PubMed

    Awan, Ruqayya; Al-Maadeed, Somaya; Al-Saady, Rafif

    2018-01-01

    The spectral imaging technique has been shown to provide more discriminative information than RGB images and has been proposed for a range of problems. There are many studies demonstrating its potential for the analysis of histopathology images for abnormality detection, but there have been discrepancies among previous studies as well. Many multispectral based methods have been proposed for histopathology images, but the significance of using the whole multispectral cube versus a subset of bands or a single band is still arguable. We performed a comprehensive analysis using individual bands and different subsets of bands to determine the effectiveness of spectral information for determining anomalies in colorectal images. Our multispectral colorectal dataset consists of four classes, each represented by infra-red spectrum bands in addition to the visual spectrum bands. We performed our analysis of spectral imaging by stratifying the abnormalities using both spatial and spectral information. For our experiments, we used a combination of texture descriptors with an ensemble classification approach that performed best on our dataset. We applied our method to another dataset and obtained results comparable to those of the state-of-the-art method and a convolutional neural network based method. Moreover, we explored the relationship between the number of bands and problem complexity and found that a higher number of bands is required for a complex task to achieve improved performance. Our results demonstrate a synergy between the infra-red and visual spectrum by improving the classification accuracy (by 6%) on incorporating the infra-red representation. We also highlight the importance of how the dataset should be divided into training and testing sets for evaluating histopathology image-based approaches, which has not been considered in previous studies on multispectral histopathology images.

  15. Using spectral imaging for the analysis of abnormalities for colorectal cancer: When is it helpful?

    PubMed Central

    Al-Maadeed, Somaya; Al-Saady, Rafif

    2018-01-01

    The spectral imaging technique has been shown to provide more discriminative information than RGB images and has been proposed for a range of problems. There are many studies demonstrating its potential for the analysis of histopathology images for abnormality detection, but there have been discrepancies among previous studies as well. Many multispectral based methods have been proposed for histopathology images, but the significance of using the whole multispectral cube versus a subset of bands or a single band is still arguable. We performed a comprehensive analysis using individual bands and different subsets of bands to determine the effectiveness of spectral information for determining anomalies in colorectal images. Our multispectral colorectal dataset consists of four classes, each represented by infra-red spectrum bands in addition to the visual spectrum bands. We performed our analysis of spectral imaging by stratifying the abnormalities using both spatial and spectral information. For our experiments, we used a combination of texture descriptors with an ensemble classification approach that performed best on our dataset. We applied our method to another dataset and obtained results comparable to those of the state-of-the-art method and a convolutional neural network based method. Moreover, we explored the relationship between the number of bands and problem complexity and found that a higher number of bands is required for a complex task to achieve improved performance. Our results demonstrate a synergy between the infra-red and visual spectrum by improving the classification accuracy (by 6%) on incorporating the infra-red representation. We also highlight the importance of how the dataset should be divided into training and testing sets for evaluating histopathology image-based approaches, which has not been considered in previous studies on multispectral histopathology images. PMID:29874262

  16. Space Radar Image of Long Valley, California -Interferometry/Topography

    NASA Image and Video Library

    1999-05-01

    These four images of the Long Valley region of east-central California illustrate the steps required to produce three-dimensional data and topographic maps from radar interferometry. All data displayed in these images were acquired by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the space shuttle Endeavour during its two flights in April and October, 1994. The image in the upper left shows L-band (horizontally transmitted and received) SIR-C radar image data for an area 34 by 59 kilometers (21 by 37 miles). North is toward the upper right; the radar illumination is from the top of the image. The bright areas are hilly regions that contain exposed bedrock and pine forest. The darker gray areas are the relatively smooth, sparsely vegetated valley floors. The dark irregular patch near the lower left is Lake Crowley. The curving ridge that runs across the center of the image from top to bottom is the northeast rim of the Long Valley Caldera, a remnant crater from a massive volcanic eruption that occurred about 750,000 years ago. The image in the upper right is an interferogram of the same area, made by combining SIR-C L-band data from the April and October flights. The colors in this image represent the difference in the phase of the radar echoes obtained on the two flights. Variations in the phase difference are caused by elevation differences. Formation of continuous bands of phase differences, known as interferometric "fringes," is only possible if the two observations were acquired from nearly the same position in space. For these April and October data takes, the shuttle tracks were less than 100 meters (328 feet) apart. The image in the lower left shows a topographic map derived from the interferometric data. The colors represent increments of elevation, as do the thin black contour lines, which are spaced at 50-meter (164-foot) elevation intervals. Heavy contour lines show 250-meter (820-foot) intervals. Total relief in this area is about 1,320 meters (4,330 feet). Brightness variations come from the radar image, which has been geometrically corrected to remove radar distortions and rotated to have north toward the top. The image in the lower right is a three-dimensional perspective view of the northeast rim of the Long Valley caldera, looking toward the northwest. SIR-C C-band radar image data are draped over topographic data derived from the interferometry processing. No vertical exaggeration has been applied. Combining topographic and radar image data allows scientists to examine relationships between geologic structures and landforms, and other properties of the land cover, such as soil type, vegetation distribution and hydrologic characteristics. http://photojournal.jpl.nasa.gov/catalog/PIA01770

  17. Subcutaneous Fascial Bands—A Qualitative and Morphometric Analysis

    PubMed Central

    Li, Weihui; Ahn, Andrew C.

    2011-01-01

    Background: Although fascial bands within the subcutaneous (SQ) layer are commonly seen in ultrasound images, little is known about their functional role, much less their structural characteristics. This study's objective is to describe the morphological features of SQ fascial bands and to systematically evaluate the bands using image analysis tools and morphometric measures. Methods: In 28 healthy volunteers, ultrasound images were obtained at three body locations: the lateral aspect of the upper arm, the medial aspect of the thigh and the posterior aspect of the lower leg. Using image analytical techniques, the total SQ band area, fascial band number, fascial band thickness, and SQ zone (layer) thickness were determined. In addition, the SQ spatial coherence was calculated based on the eigenvalues associated with the largest and smallest eigenvectors of the images. Results: Fascial bands at these sites were contiguous with the dermis and the epimysium, forming an interconnected network within the subcutaneous tissue. Subcutaneous blood vessels were also frequently encased by these fascial bands. The total SQ fascial band area was greater at the thigh and calf compared to the arm and was unrelated to SQ layer (zone) thickness. The thigh was associated with the highest average number of fascial bands, while the calf was associated with the greatest average fascial band thickness. Across body regions, greater SQ zone thickness was associated with thinner fascial bands. SQ coherence was significantly associated with SQ zone thickness and body location (calf with statistically greater coherence compared to arm). Conclusion: Fascial bands are structural bridges that mechanically link the skin, subcutaneous layer, and deeper muscle layers. This cohesive network also encases subcutaneous vessels and may indirectly mediate blood flow. The quantity and morphological characteristics of the SQ fascial band may reflect the composite mechanical forces experienced by the body part. PMID:21931632

  18. Fast algorithm for spectral mixture analysis of imaging spectrometer data

    NASA Astrophysics Data System (ADS)

    Schouten, Theo E.; Klein Gebbinck, Maurice S.; Liu, Z. K.; Chen, Shaowei

    1996-12-01

    Imaging spectrometers acquire images in many narrow spectral bands but have limited spatial resolution. Spectral mixture analysis (SMA) is used to determine the fractions of the ground cover categories (the endmembers) present in each pixel. In this paper a new iterative SMA method is presented and tested using a 30-band MAIS image. The time needed for each iteration is independent of the number of bands, so the method can be used for spectrometers with a large number of bands. Furthermore, a new method based on K-means clustering for obtaining endmembers from image data is described and compared with existing methods. Using the developed methods, the available MAIS image was analyzed using 2 to 6 endmembers.
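
    For reference, the per-pixel unmixing problem that SMA solves can be posed as a constrained least-squares fit; the sketch below uses SciPy's non-negative least squares with a heavily weighted sum-to-one row as an approximation, and is a generic baseline rather than the fast iterative method proposed in the paper.

      import numpy as np
      from scipy.optimize import nnls

      def unmix_pixel(endmembers, pixel, weight=100.0):
          # endmembers: (n_bands x n_endmembers) spectra; pixel: length-n_bands
          # spectrum.  The appended weighted row of ones softly enforces the
          # fractions summing to one while keeping them non-negative.
          A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
          b = np.append(pixel, weight)
          fractions, _ = nnls(A, b)
          return fractions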

  19. Digital radiography: optimization of image quality and dose using multi-frequency software.

    PubMed

    Precht, H; Gerke, O; Rosendahl, K; Tingberg, A; Waaler, D

    2012-09-01

    New developments in the processing of digital radiographs (DR), including multi-frequency processing (MFP), allow optimization of image quality and radiation dose. This is particularly promising in children as they are believed to be more sensitive to ionizing radiation than adults. To examine whether the use of MFP software reduces the radiation dose without compromising quality at DR of the femur in 5-year-old-equivalent anthropomorphic and technical phantoms. A total of 110 images of an anthropomorphic phantom were acquired on a DR system (Canon DR with CXDI-50 C detector and MLT[S] software) and analyzed by three pediatric radiologists using Visual Grading Analysis. In addition, 3,500 images taken of a technical contrast-detail phantom (CDRAD 2.0) provided an objective image-quality assessment. Optimal image quality was maintained at a dose reduction of 61% with MLT(S)-optimized images. Even for images of diagnostic quality, MLT(S) provided a dose reduction of 88% as compared to the reference image. The impact of the software on image quality was found to be significant for dose (mAs), dynamic range dark region and frequency band. By optimizing image processing parameters, a significant dose reduction is possible without significant loss of image quality.

  20. Ultra-wide-band 3D microwave imaging scanner for the detection of concealed weapons

    NASA Astrophysics Data System (ADS)

    Rezgui, Nacer-Ddine; Andrews, David A.; Bowring, Nicholas J.

    2015-10-01

    The threat of concealed weapons, explosives and contraband in footwear, bags and suitcases has led to the development of new devices that can be deployed for security screening. To address known deficiencies of metal detectors and x-rays, a UWB 3D microwave imaging scanner using stepped-frequency FMCW in the K and Q bands, with a planar scanning geometry based on an x-y stage, has been developed to screen suspicious luggage and footwear. To obtain microwave images of the concealed weapons, the targets are placed above the platform and the single transceiver horn antenna attached to the x-y stage is moved mechanically in a raster scan to create a 2D synthetic aperture array. The S11 reflection signal of the transmitted frequency sweep from the target is acquired by a VNA in synchronism with each position step. To enhance the raw data, filter out clutter and noise, and obtain the 2D and 3D microwave images of the concealed weapons or explosives, data processing techniques are applied to the acquired signals. These techniques include background subtraction, Inverse Fast Fourier Transform (IFFT), thresholding, filtering by gating and windowing, and deconvolution with the transfer function of the system using a reference target. To focus the 3D reconstructed microwave image of the target in range and across the x-y aperture without using focusing elements, 3D Synthetic Aperture Radar (SAR) techniques are applied to the post-processed data. The K and Q bands, between 15 and 40 GHz, show good transmission through clothing and the dielectric materials found in luggage and footwear. A description of the system and algorithms, some results with replica guns, and a comparison of microwave images obtained by IFFT, 2D and 3D SAR techniques are presented.
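
    The first stages of the processing chain described above (background subtraction, windowing, inverse FFT to a range profile) can be sketched as below for one stepped-frequency sweep; the window choice and the absence of gating and SAR focusing make this an illustrative fragment, not the full pipeline.

      import numpy as np

      def range_profile(sweep, background, window=None):
          # sweep, background: complex S11 samples over the stepped
          # frequencies (target scene and empty scene respectively).
          data = sweep - background                   # background subtraction
          if window is None:
              window = np.hanning(len(data))          # sidelobe control
          return np.abs(np.fft.ifft(data * window))   # time/range domain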

  1. A Multi-Frequency Wide-Swath Spaceborne Cloud and Precipitation Imaging Radar

    NASA Technical Reports Server (NTRS)

    Li, Lihua; Racette, Paul; Heymsfield, Gary; McLinden, Matthew; Venkatesh, Vijay; Coon, Michael; Perrine, Martin; Park, Richard; Cooley, Michael; Stenger, Pete

    2016-01-01

    Microwave and millimeter-wave radars have proven their effectiveness in cloud and precipitation observations. The NASA Earth Science Decadal Survey (DS) Aerosol, Cloud and Ecosystems (ACE) mission calls for a dual-frequency cloud radar (W-band, 94 GHz, and Ka-band, 35 GHz) for global measurements of cloud microphysical properties. Recently, there have been discussions of utilizing a tri-frequency (Ku/Ka/W-band) radar for a combined ACE and Global Precipitation Measurement (GPM) follow-on mission that has evolved into the Cloud and Precipitation Process Mission (CaPPM) concept. In this presentation we will give an overview of the technology development efforts at the NASA Goddard Space Flight Center (GSFC) and at Northrop Grumman Electronic Systems (NGES) through projects funded by the NASA Earth Science Technology Office (ESTO) Instrument Incubator Program (IIP). The primary objective of this research is to advance the key enabling technologies for a tri-frequency (Ku/Ka/W-band) shared-aperture spaceborne imaging radar to provide unprecedented, simultaneous multi-frequency measurements that will enhance understanding of the effects of clouds and precipitation, and their interaction, on Earth's changing climate. Research effort has been focused on concept design and trade studies of the tri-frequency radar; investigating architectures that provide tri-band shared-aperture capability; advancing the development of the Ka-band active electronically scanned array (AESA) transmit/receive (T/R) module; and developing the advanced radar back-end electronics.

  2. Hyperspectral imaging for differentiation of foreign materials from pinto beans

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Zemlan, Michael; Henry, Sam

    2015-09-01

    Food safety and quality in packaged products are paramount in the food processing industry. To ensure that packaged products are free of foreign materials, such as debris and pests, unwanted materials mixed with the targeted products must be detected before packaging. A portable hyperspectral imaging system in the visible-to-NIR range has been used to acquire hyperspectral data cubes from pinto beans that have been mixed with foreign matter. Bands and band ratios have been identified as effective features to develop a classification scheme for detection of foreign materials in pinto beans. A support vector machine has been implemented with a quadratic kernel to separate pinto beans and background (Class 1) from all other materials (Class 2) in each scene. After creating a binary classification map for the scene, further analysis of these binary images allows separation of false positives from true positives for proper removal action during packaging.
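
    A quadratic-kernel SVM of the kind mentioned above can be set up with scikit-learn as sketched below; the feature matrix of selected bands and band ratios, and the two class labels, are assumed inputs, and the hyperparameters are illustrative.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      def train_bean_classifier(features, labels):
          # features: (n_pixels x n_features) band and band-ratio values;
          # labels: 1 for pinto beans / background, 2 for all other materials.
          clf = make_pipeline(StandardScaler(),
                              SVC(kernel="poly", degree=2, C=1.0))
          return clf.fit(features, labels)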

  3. MARs Color Imager (MARCI) Daily Global Ozone Column Mapping from the Mars Reconnaissance Orbiter (MRO): A Survey of 2006-2010 Results

    NASA Astrophysics Data System (ADS)

    Clancy, R. T.; Wolff, M. J.; Malin, M. C.; Cantor, B. A.

    2010-12-01

    MARCI UV band imaging photometry within (260 nm) and outside (320 nm) the Hartley ozone band absorption supports daily global mapping of Mars ozone column abundances. Key retrieval issues include accurate UV radiometric calibrations, detailed specifications of surface and atmospheric background reflectance (surface albedo, atmospheric Rayleigh and dust scattering/absorption), and simultaneous cloud retrievals. The implementation of accurate radiative transfer (RT) treatments of these processes has been accomplished (Wolff et al., 2010) such that daily global mapping retrievals for Mars ozone columns have been completed for the 2006-2010 period of MARCI global imaging. Ozone retrievals are most accurate for high column abundances associated with mid-to-high latitude regions during fall, winter, and spring seasons. We present a survey of these MARCI ozone column retrievals versus season, latitude, longitude, and year.

  4. A Full Snow Season in Yellowstone: A Database of Restored Aqua Band 6

    NASA Technical Reports Server (NTRS)

    Gladkova, Irina; Grossberg, Michael; Bonev, George; Romanov, Peter; Riggs, George; Hall, Dorothy

    2013-01-01

    The algorithms for estimating snow extent for the Moderate Resolution Imaging Spectroradiometer (MODIS) optimally use the 1.6-μm channel, which is unavailable for MODIS on Aqua due to detector damage. As a test bed to demonstrate that Aqua band 6 can be restored, we chose the area surrounding Yellowstone and Grand Teton national parks. In such rugged and difficult-to-access terrain, satellite images are particularly important for providing an estimation of snow-cover extent. For the full 2010-2011 snow season covering the Yellowstone region, we have used quantitative image restoration to create a database of restored Aqua band 6. The database includes restored radiances, normalized vegetation index, normalized snow index, thermal data, and band-6-based snow-map products. The restored Aqua-band-6 data have also been regridded and combined with Terra data to produce a snow-cover map that utilizes both Terra and Aqua snow maps. Using this database, we show that the restored Aqua-band-6-based snow-cover extent has performance comparable, with respect to ground stations, to that based on Terra. The result of a restored band 6 from Aqua is that we have an additional band-6 image of the Yellowstone region each day. This image can be used to mitigate cloud occlusion, using the same algorithms used for band 6 on Terra. We show an application of this database of restored band-6 images to illustrate the value of cloud-gap filling using the National Aeronautics and Space Administration's operational cloud masks and data from both Aqua and Terra.

  5. Space Radar Image of Rabaul Volcano, New Guinea

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a radar image of the Rabaul volcano on the island of New Britain, Papua, New Guinea taken almost a month after its September 19, 1994, eruption that killed five people and covered the town of Rabaul and nearby villages with up to 75 centimeters (30 inches) of ash. More than 53,000 people have been displaced by the eruption. The image was acquired by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the space shuttle Endeavour on its 173rd orbit on October 11, 1994. This image is centered at 4.2 degrees south latitude and 152.2 degrees east longitude in the southwest Pacific Ocean. The area shown is approximately 21 kilometers by 25 kilometers (13 miles by 15.5 miles). North is toward the upper right. The colors in this image were obtained using the following radar channels: red represents the L-band (horizontally transmitted and received); green represents the L-band (horizontally transmitted and vertically received); blue represents the C-band (horizontally transmitted and vertically received). Most of the Rabaul volcano is underwater and the caldera (crater) creates Blanche Bay, the semi-circular body of water that occupies most of the center of the image. Volcanic vents within the caldera are visible in the image and include Vulcan, on a peninsula on the west side of the bay, and Rabalanakaia and Tavurvur (the circular purple feature near the mouth of the bay) on the east side. Both Vulcan and Tavurvur were active during the 1994 eruption. Ash deposits appear red-orange on the image, and are most prominent on the south flanks of Vulcan and north and northwest of Tavurvur. A faint blue patch in the water in the center of the image is a large raft of floating pumice fragments that were ejected from Vulcan during the eruption and clog the inner bay. Visible on the east side of the bay are the grid-like patterns of the streets of Rabaul and an airstrip, which appears as a dark northwest-trending band at the right-center of the image. Ashfall and subsequent rains caused the collapse of most buildings in the town of Rabaul. Mudflows and flooding continue to pose serious threats to the town and surrounding villages. Volcanologists and local authorities expect to use data such as this radar image to assist them in identifying the mechanisms of the eruption and future hazardous conditions that may be associated with the vigorously active volcano. Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.v.(DLR), the major partner in science, operations and data processing of X-SAR.

  6. The fabrication of a multi-spectral lens array and its application in assisting color blindness

    NASA Astrophysics Data System (ADS)

    Di, Si; Jin, Jian; Tang, Guanrong; Chen, Xianshuai; Du, Ruxu

    2016-01-01

    This article presents a compact multi-spectral lens array and describes its application in assisting people with color blindness. The lens array consists of nine microlenses, each coated with a different color filter. Thus, it can capture different light bands, including red, orange, yellow, green, cyan, blue, violet, near-infrared, and the entire visible band. First, the fabrication process is described in detail. Second, an imaging system is set up and a color blindness testing card is selected as the sample. With this system, the views corresponding to normal vision and to color-blind vision can be captured simultaneously. Based on the imaging results, the device has the potential to help people with color blindness recover normal color perception.

  7. City of Flagstaff Project: Ground Water Resource Evaluation, Remote Sensing Component

    USGS Publications Warehouse

    Chavez, Pat S.; Velasco, Miguel G.; Bowell, Jo-Ann; Sides, Stuart C.; Gonzalez, Rosendo R.; Soltesz, Deborah L.

    1996-01-01

    Many regions, cities, and towns in the Western United States need new or expanded water resources because of both population growth and increased development. Any tools or data that can help in the evaluation of an area's potential water resources must be considered for this increasingly critical need. Remotely sensed satellite images and subsequent digital image processing have been under-utilized in ground water resource evaluation and exploration. Satellite images can be helpful in detecting and mapping an area's regional structural patterns, including major fracture and fault systems, two important geologic settings for an area's surface-to-ground-water relations. Within the United States Geological Survey's (USGS) Flagstaff Field Center, expertise and capabilities in remote sensing and digital image processing have been developed over the past 25 years through various programs. For the City of Flagstaff project, this expertise and these capabilities were combined with traditional geologic field mapping to help evaluate ground water resources in the Flagstaff area. Various enhancement and manipulation procedures were applied to the digital satellite images; the results, in both digital and hardcopy format, were used for field mapping and analyzing the regional structure. Relative to surface sampling, remotely sensed satellite and airborne images have improved spatial coverage that can help study, map, and monitor the earth's surface at local and/or regional scales. Advantages offered by remotely sensed satellite image data include: 1. a synoptic/regional view compared to both aerial photographs and ground sampling, 2. cost effectiveness, 3. high spatial resolution and coverage compared to ground sampling, and 4. relatively high temporal coverage on a long-term basis. Remotely sensed images contain both spectral and spatial information. The spectral information provides various properties and characteristics about the surface cover at a given location or pixel (that is, vegetation and/or soil type). The spatial information gives the distribution, variation, and topographic relief of the cover types from pixel to pixel. Therefore, the main characteristics that determine a pixel's brightness/reflectance and, consequently, the digital number (DN) assigned to the pixel, are the physical properties of the surface and near surface, the cover type, and the topographic slope. In this application, the ability to detect and map lineaments, especially those related to fractures and faults, is critical. Therefore, the extraction of spatial information from the digital images was of prime interest in this project. The spatial information varies among the different spectral bands available; in particular, a near-infrared spectral band is better than a visible band when extracting spatial information in highly vegetated areas. In this study, both visible and near-infrared bands were analyzed and used to extract the desired spatial information from the images. The wide swath coverage of remotely sensed satellite digital images makes them ideal for regional analysis and mapping. Since locating and mapping highly fractured and faulted areas is a major requirement for ground water resource evaluation and exploration, this aspect of satellite images was considered critical; it allowed us to stand back (actually up about 440 miles), look at, and map the regional structural setting of the area.
The main focus of the remote sensing and digital image processing component of this project was to use both remotely sensed digital satellite images and a Digital Elevation Model (DEM) to extract spatial information related to the structural and topographic patterns in the area. The data types used were digital satellite images collected by the United States' Landsat Thematic Mapper (TM) and French Systeme Probatoire d'Observation de la Terre (SPOT) imaging systems, along with a DEM of the Flagstaff region. The USGS Mini Image Processing Sy

  8. Automated oil spill detection with multispectral imagery

    NASA Astrophysics Data System (ADS)

    Bradford, Brian N.; Sanchez-Reyes, Pedro J.

    2011-06-01

    In this publication we present an automated detection method for ocean surface oil, like that which existed in the Gulf of Mexico as a result of the April 20, 2010 Deepwater Horizon drilling rig explosion. Regions of surface oil in airborne imagery are isolated using red, green, and blue bands from multispectral data sets. The oil shape isolation procedure involves a series of image processing functions to draw out the visual phenomenological features of the surface oil. These functions include selective color band combinations, contrast enhancement and histogram warping. An image segmentation process then separates out contiguous regions of oil to provide a raster mask to an analyst. We automate the detection algorithm to allow large volumes of data to be processed in a short time period, which can provide timely oil coverage statistics to response crews. Geo-referenced and mosaicked data sets enable the largest identified oil regions to be mapped to exact geographic coordinates. In our simulation, multispectral imagery came from multiple sources including first-hand data collected from the Gulf. Results of the simulation show the oil spill coverage area as a raster mask, along with histogram statistics of the oil pixels. A rough square footage estimate of the coverage is reported if the image ground sample distance is available.
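
    The processing chain sketched in this abstract (selective band combination, contrast enhancement as a stand-in for histogram warping, segmentation into a raster mask, and a square-footage estimate from the ground sample distance) can be roughly illustrated as below. The band weights, threshold, and GSD are hypothetical placeholders, not the authors' parameters, and SciPy is assumed available for the connected-component labeling.

```python
import numpy as np
from scipy import ndimage  # assumed available for connected-component labeling

def detect_oil(rgb, gsd_m=0.5, thresh=0.6):
    """Toy surface-oil mask extraction from an RGB float array scaled to [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Hypothetical band combination meant to emphasize the sheen's visual signature.
    feature = 0.5 * r + 0.3 * g - 0.2 * b
    # Simple linear contrast stretch (stand-in for histogram warping).
    feature = (feature - feature.min()) / (feature.max() - feature.min() + 1e-9)
    mask = feature > thresh
    # Separate contiguous oil regions, as in the segmentation step.
    labels, n_regions = ndimage.label(mask)
    region_px = ndimage.sum(mask, labels, index=np.arange(1, n_regions + 1))
    area_sqft = mask.sum() * (gsd_m * 3.28084) ** 2   # rough coverage estimate
    return mask, region_px, area_sqft

mask, region_px, sqft = detect_oil(np.random.rand(256, 256, 3))
print("%d regions, ~%.0f sq ft flagged" % (len(region_px), sqft))
```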

  9. Fast and robust wavelet-based dynamic range compression and contrast enhancement model with color restoration

    NASA Astrophysics Data System (ADS)

    Unaldi, Numan; Asari, Vijayan K.; Rahman, Zia-ur

    2009-05-01

    Recently we proposed a wavelet-based dynamic range compression algorithm to improve the visual quality of digital images captured from high dynamic range scenes with non-uniform lighting conditions. The fast image enhancement algorithm that provides dynamic range compression, while preserving the local contrast and tonal rendition, is also a good candidate for real time video processing applications. Although the colors of the enhanced images produced by the proposed algorithm are consistent with the colors of the original image, the proposed algorithm fails to produce color constant results for some "pathological" scenes that have very strong spectral characteristics in a single band. The linear color restoration process is the main reason for this drawback. Hence, a different approach is required for the final color restoration process. In this paper the latest version of the proposed algorithm, which deals with this issue is presented. The results obtained by applying the algorithm to numerous natural images show strong robustness and high image quality.
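
    For context on the color-restoration step discussed above, the following is a minimal sketch of a generic linear color restoration of the kind the abstract critiques: an enhanced luminance image is redistributed to the RGB channels in proportion to the original channel ratios. The luminance proxy and gamma-style enhancement are illustrative assumptions, not the authors' wavelet-based algorithm.

```python
import numpy as np

def linear_color_restore(rgb, enhance=lambda y: y ** 0.5):
    """Apply a luminance enhancement, then restore color by original channel ratios."""
    rgb = rgb.astype(np.float64)
    lum = rgb.mean(axis=-1, keepdims=True)        # crude luminance proxy
    lum_enh = enhance(np.clip(lum, 1e-6, 1.0))    # dynamic range compression stand-in
    ratios = rgb / (lum + 1e-6)                   # per-pixel channel ratios
    # For scenes dominated by a single strong band the ratios become extreme and the
    # output drifts away from color constancy -- the failure mode noted in the abstract.
    return np.clip(lum_enh * ratios, 0.0, 1.0)

restored = linear_color_restore(np.random.rand(64, 64, 3))
```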

  10. Automated coregistration of MTI spectral bands

    NASA Astrophysics Data System (ADS)

    Theiler, James P.; Galbraith, Amy E.; Pope, Paul A.; Ramsey, Keri A.; Szymanski, John J.

    2002-08-01

    In the focal plane of a pushbroom imager, a linear array of pixels is scanned across the scene, building up the image one row at a time. For the Multispectral Thermal Imager (MTI), each of fifteen different spectral bands has its own linear array. These arrays are pushed across the scene together, but since each band's array is at a different position on the focal plane, a separate image is produced for each band. The standard MTI data products (LEVEL1B_R_COREG and LEVEL1B_R_GEO) resample these separate images to a common grid and produce coregistered multispectral image cubes. The coregistration software employs a direct "dead reckoning" approach. Every pixel in the calibrated image is mapped to an absolute position on the surface of the earth, and these are resampled to produce an undistorted coregistered image of the scene. To do this requires extensive information regarding the satellite position and pointing as a function of time, the precise configuration of the focal plane, and the distortion due to the optics. These must be combined with knowledge about the position and altitude of the target on the rotating ellipsoidal earth. We will discuss the direct approach to MTI coregistration, as well as more recent attempts to tweak the precision of the band-to-band registration using correlations in the imagery itself.
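
    The band-to-band refinement "using correlations in the imagery itself" mentioned at the end can be sketched, under the simplifying assumption of a purely translational offset, with phase correlation between two band images. This is an illustrative sketch, not the MTI production code.

```python
import numpy as np

def phase_correlation_shift(ref, moving):
    """Estimate the integer (row, col) offset d such that moving ~ np.roll(ref, d)."""
    F_ref = np.fft.fft2(ref)
    F_mov = np.fft.fft2(moving)
    cross_power = F_mov * np.conj(F_ref)
    cross_power /= np.abs(cross_power) + 1e-12     # keep phase only
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative offsets.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

a = np.random.rand(128, 128)
b = np.roll(a, (3, -5), axis=(0, 1))
print(phase_correlation_shift(a, b))   # -> (3, -5)
```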

  11. Detection of microbial biofilms on food processing surfaces: hyperspectral fluorescence imaging study

    NASA Astrophysics Data System (ADS)

    Jun, Won; Kim, Moon S.; Chao, Kaunglin; Lefcourt, Alan M.; Roberts, Michael S.; McNaughton, James L.

    2009-05-01

    We used a portable hyperspectral fluorescence imaging system to evaluate biofilm formations on four types of food processing surface materials including stainless steel, polypropylene used for cutting boards, and household counter top materials such as formica and granite. The objective of this investigation was to determine a minimal number of spectral bands suitable to differentiate microbial biofilm formation from the four background materials typically used during food processing. Ultimately, the resultant spectral information will be used in development of handheld portable imaging devices that can be used as visual aid tools for sanitation and safety inspection (microbial contamination) of the food processing surfaces. Pathogenic E. coli O157:H7 and Salmonella cells were grown in low strength M9 minimal medium on various surfaces at 22 +/- 2 °C for 2 days for biofilm formation. Biofilm autofluorescence under UV excitation (320 to 400 nm) obtained by hyperspectral fluorescence imaging system showed broad emissions in the blue-green regions of the spectrum with emission maxima at approximately 480 nm for both E. coli O157:H7 and Salmonella biofilms. Fluorescence images at 480 nm revealed that for background materials with near-uniform fluorescence responses such as stainless steel and formica cutting board, regardless of the background intensity, biofilm formation can be distinguished. This suggested that a broad spectral band in the blue-green regions can be used for handheld imaging devices for sanitation inspection of stainless, cutting board, and formica surfaces. The non-uniform fluorescence responses of granite make distinctions between biofilm and background difficult. To further investigate potential detection of the biofilm formations on granite surfaces with multispectral approaches, principal component analysis (PCA) was performed using the hyperspectral fluorescence image data. The resultant PCA score images revealed distinct contrast between biofilms and granite surfaces. This investigation demonstrated that biofilm formations on food processing surfaces, even for background materials with heterogeneous fluorescence responses, can be detected. Furthermore, a multispectral approach in developing handheld inspection devices may be needed to inspect surface materials that exhibit non-uniform fluorescence.
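
    The PCA step used above to separate biofilms from the heterogeneous granite background can be sketched with plain NumPy as follows; the cube dimensions and the number of retained components are placeholders.

```python
import numpy as np

def pca_score_images(cube, n_components=3):
    """PCA on a hyperspectral cube of shape (rows, cols, bands); returns score images."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    X -= X.mean(axis=0)                       # mean-center each band
    # SVD of the centered data; rows of Vt are the principal directions.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:n_components].T          # project pixels onto leading components
    return scores.reshape(rows, cols, n_components)

scores = pca_score_images(np.random.rand(60, 80, 40))
print(scores.shape)   # (60, 80, 3)
```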

  12. New Multispectral Cloud Retrievals from MODIS

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; Tsay, Si-Chee; Ackerman, Steven A.; Gray, Mark A.; Moody, Eric G.; Li, Jason Y.; Arnold, G. T.; King, Michael D. (Technical Monitor)

    2000-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) was developed by NASA and launched onboard the Terra spacecraft on December 18, 1999. It achieved its final orbit and began Earth observations on February 24, 2000. MODIS scans a swath width sufficient to provide nearly complete global coverage every two days from a polar-orbiting, sun-synchronous, platform at an altitude of 705 km, and provides images in 36 spectral bands between 0.415 and 14.235 micrometers with spatial resolutions of 250 m (2 bands), 500 m (5 bands) and 1000 m (29 bands). These bands have been carefully selected to enable advanced studies of land, ocean, and atmospheric processes. In this paper I will describe the various methods being used for the remote sensing of cloud properties using MODIS data, focusing primarily on the MODIS cloud mask used to distinguish clouds, clear sky, heavy aerosol, and shadows on the ground, and on the remote sensing of cloud optical properties, especially cloud optical thickness and effective radius of cloud drops and ice crystals. Results will be presented of MODIS cloud properties both over the land and over the ocean, showing the consistency in cloud retrievals over various ecosystems used in the retrievals. The implications of this new observing system on global analysis of the Earth's environment will be discussed.

  13. A Multi-Sensor Aerogeophysical Study of Afghanistan

    DTIC Science & Technology

    2007-01-01

    magnetometer coupled with an Applied Physics 539 3-axis fluxgate magnetometer for compensation of the aircraft field; • an Applanix DSS 301 digital...survey. DATA COLLECTION AND PROCESSING Photogrammetry: More than 65,000 high-resolution photogrammetric images were collected using an Applanix Digital...HSI, L-Band Polarimetric Imaging Radar, KGPS, Dual Gravity Meters, Common Sensor Bomb-bay Pallet, Applanix DSS Camera Sensor Suite • Magnetometer • Gravity

  14. Space Radar Image of Central African Gorilla Habitat

    NASA Image and Video Library

    1999-01-27

    This is a false-color radar image of Central Africa, showing the Virunga Volcano chain along the borders of Rwanda, Zaire and Uganda. This area is home to the endangered mountain gorillas. This C-band and L-band image was acquired on April 12, 1994, on orbit 58 of space shuttle Endeavour by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR). The area is centered at about 1.75 degrees south latitude and 29.5 degrees east longitude. The image covers an area 58 kilometers by 178 kilometers (approximately 36 miles by 111 miles). The false-color composite is created by displaying the L-band HH return in red, the L-band HV return in green and the C-band HH return in blue. The dark area in the bottom of the image is Lake Kivu, which forms the border between Zaire (to the left) and Rwanda (to the right). The airport at Goma, Zaire is shown as a dark line just above the lake in the bottom left corner of the image. Volcanic flows from the 1977 eruption of Mt. Nyiragongo are shown just north of the airport. Mt. Nyiragongo is not visible in this image because it is located just to the left of the image swath. Very fluid lava flows from the 1977 eruption killed 70 people. http://photojournal.jpl.nasa.gov/catalog/PIA01724

  15. Image processing pipeline for segmentation and material classification based on multispectral high dynamic range polarimetric images.

    PubMed

    Martínez-Domingo, Miguel Ángel; Valero, Eva M; Hernández-Andrés, Javier; Tominaga, Shoji; Horiuchi, Takahiko; Hirai, Keita

    2017-11-27

    We propose a method for the capture of high dynamic range (HDR), multispectral (MS), polarimetric (Pol) images of indoor scenes using a liquid crystal tunable filter (LCTF). We have included the adaptive exposure estimation (AEE) method to fully automate the capturing process. We also propose a pre-processing method which can be applied for the registration of HDR images after they are already built as the result of combining different low dynamic range (LDR) images. This method is applied to ensure a correct alignment of the different polarization HDR images for each spectral band. We have focused our efforts on two main applications: object segmentation and classification into metal and dielectric classes. We have simplified the segmentation using mean shift combined with cluster averaging and region merging techniques. We compare the performance of our segmentation with that of Ncut and Watershed methods. For the classification task, we propose to use information not only in the highlight regions but also in their surrounding area, extracted from the degree of linear polarization (DoLP) maps. We present experimental results which prove that the proposed image processing pipeline outperforms previous techniques developed specifically for MSHDRPol image cubes.
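
    The degree of linear polarization (DoLP) maps mentioned above are commonly computed from intensity images acquired behind four analyzer orientations (0°, 45°, 90°, 135°). A minimal sketch under that standard assumption:

```python
import numpy as np

def degree_of_linear_polarization(i0, i45, i90, i135):
    """Per-pixel DoLP from four analyzer-angle intensity images."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity (Stokes S0)
    s1 = i0 - i90                        # Stokes S1
    s2 = i45 - i135                      # Stokes S2
    return np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + 1e-9)

imgs = [np.random.rand(100, 100) for _ in range(4)]
dolp = degree_of_linear_polarization(*imgs)
```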

  16. Visual enhancement of unmixed multispectral imagery using adaptive smoothing

    USGS Publications Warehouse

    Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.

    2004-01-01

    Adaptive smoothing (AS) has been previously proposed as a method to smooth uniform regions of an image, retain contrast edges, and enhance edge boundaries. The method is an implementation of the anisotropic diffusion process which results in a gray scale image. This paper discusses modifications to the AS method for application to multi-band data which results in a color segmented image. The process was used to visually enhance the three most distinct abundance fraction images produced by the Lagrange constraint neural network learning-based unmixing of Landsat 7 Enhanced Thematic Mapper Plus multispectral sensor data. A mutual information-based method was applied to select the three most distinct fraction images for subsequent visualization as a red, green, and blue composite. A reported image restoration technique (partial restoration) was applied to the multispectral data to reduce unmixing error, although evaluation of the performance of this technique was beyond the scope of this paper. The modified smoothing process resulted in a color segmented image with homogeneous regions separated by sharpened, coregistered multiband edges. There was improved class separation with the segmented image, which has importance to subsequent operations involving data classification.
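
    Adaptive smoothing is described above as an implementation of anisotropic diffusion. A minimal Perona-Malik style iteration for a single band is sketched below; the conductance constant and step size are illustrative, and the paper's modified multi-band coupling is not shown.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, step=0.2):
    """Edge-preserving smoothing of a single-band float image."""
    out = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite differences to the four neighbors (wrap-around boundaries for brevity).
        dn = np.roll(out, -1, axis=0) - out
        ds = np.roll(out, 1, axis=0) - out
        de = np.roll(out, -1, axis=1) - out
        dw = np.roll(out, 1, axis=1) - out
        # Conductance: small across strong edges, large in uniform regions.
        c = lambda d: np.exp(-(d / kappa) ** 2)
        out += step * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return out

smoothed = anisotropic_diffusion(np.random.rand(64, 64))
```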

  17. Three frequency false-color image of Prince Albert, Canada

    NASA Image and Video Library

    1994-04-18

    STS059-S-079 (18 April 1994) --- This is a false-color, three frequency image of Prince Albert, Canada, centered at 53.91 north latitude and 104.69 west longitude. It was produced using data from the X-Band, C-Band and L-Band radars that comprise the Spaceborne Imaging Radar-C and X-Band Synthetic Aperture Radar (SIR-C/X-SAR). SIR-C/X-SAR acquired this image on the 20th orbit of the Space Shuttle Endeavour. The area is located 40 kilometers north and 30 kilometers east of the town of Prince Albert in the Saskatchewan province of Canada. The image covers the area east of the Candle Lake, between gravel surface Highways 120 and 106 and west of 106. The area in the middle of the image covers the entire Nipawin (Narrow Hills) provincial park. The look angle of the radar is 30 degrees and the size of the image is approximately 20 by 50 kilometers. The red, green, and blue colors represent L-Band total power, C-Band total power, and XVV respectively. The changes in the intensity of each color are related to various surface conditions such as frozen or thawed forest, fire, deforestation and areas of regrowth. Most of the dark blue areas in the image are the ice covered lakes. The dark area on the top right corner of the image is the White Gull Lake north of the intersection of Highway 120 and 913. The right middle part of the image shows Lake Ispuchaw and Lower Fishing Lake. The deforested areas are shown by light blue in the image. Since most of the logging practice at the Prince Albert area is around the major highways, the deforested areas can be easily detected as small geometrically shaped dark regions along the roads. At the time these data were taken, a major part of the forest was either frozen or undergoing the spring thaw. In such conditions, due to low volume of water in the vegetation, a deeper layer of the canopy is imaged by the radar, revealing valuable information about the type of trees, the amount of vegetation biomass and the condition of the surface. As the frequency increases, the penetration depth in the canopy decreases. Over forest canopies, the X-Band radar contains information about the top of the canopy. Whereas, C-Band and L-Band radar returns show contributions from the crown and trunk areas respectively. The bright areas in the image are dense mixed aspen and old jackpine forests where the return from all three bands is high. The reddish area corresponds to more sparse old jack pine (12 to 17 meters in height and 60 to 75 years old) where the L-Band signal penetrates deeper in the canopy and dominates C-Band and X-Band returns. Comparison of the image with the forest cover map of the area indicates that the three band radar can be used to classify various stands. SIR-C/X-SAR is part of NASA's Mission to Planet Earth (MTPE). SIR-C/X-SAR radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-Band (24 cm), C-Band (6 cm), and X-Band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory (JPL). 
X-SAR was developed by the Dornier and Alenia Spazio Companies for the German Space Agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian Space Agency, Agenzia Spaziale Italiana (ASI). JPL Photo ID: P-43929

  18. Image segmentation of pyramid style identifier based on Support Vector Machine for colorectal endoscopic images.

    PubMed

    Okamoto, Takumi; Koide, Tetsushi; Sugi, Koki; Shimizu, Tatsuya; Anh-Tuan Hoang; Tamaki, Toru; Raytchev, Bisser; Kaneda, Kazufumi; Kominami, Yoko; Yoshida, Shigeto; Mieno, Hiroshi; Tanaka, Shinji

    2015-08-01

    With the increase in colorectal cancer patients in recent years, the need for quantitative evaluation of colorectal cancer has grown, and a computer-aided diagnosis (CAD) system that supports the doctor's diagnosis is essential. In this paper, a hardware design of the type identification module in a CAD system for colorectal endoscopic images with narrow band imaging (NBI) magnification is proposed for real-time processing of full high-definition images (1920 × 1080 pixels). A pyramid-style image segmentation with SVMs for multi-size scan windows, which can be implemented on an FPGA with a small circuit area and achieve high accuracy, is proposed for actual, complex colorectal endoscopic images.

  19. VizieR Online Data Catalog: Subarcsecond mid-infrared atlas of local AGN (Asmus+, 2014)

    NASA Astrophysics Data System (ADS)

    Asmus, D.; Hoenig, S. F.; Gandhi, P.; Smette, A.; Duschl, W. J.

    2014-03-01

    The Subarcsecond mid-infrared (MIR) atlas of local active galactic nuclei (AGN) is a collection of all available N- and Q-band images obtained at ground-based 8-meter class telescopes with public archives (Gemini/Michelle, Gemini/T-ReCS, Subaru/COMICS, and VLT/VISIR). It includes in total 895 images, of which 60% are previously unpublished. These correspond to 253 local AGN with a median redshift of 0.016. The atlas contains the uniformly processed and calibrated images and nuclear photometry obtained through Gauss and PSF fitting for all objects and filters. This also includes measurements of the nuclear extensions. In addition, the classifications of extended emission (if present) and derived nuclear monochromatic 12 and 18 micron continuum fluxes are available. Finally, flux ratios with the circumnuclear MIR emission (measured by Spitzer) and total MIR emission of the galaxy (measured by IRAS) are presented. The observations have been taken in the mid-infrared (N-band, 7-13 micron, and Q-band, 17-20 micron) between 2003-12-02 and 2011-06-15 and cover the whole sky. The objects have redshifts between -0.0001 and 0.3571. (2 data files).

  20. Speckle Noise Reduction in Optical Coherence Tomography Using Two-dimensional Curvelet-based Dictionary Learning.

    PubMed

    Esmaeili, Mahdad; Dehnavi, Alireza Mehri; Rabbani, Hossein; Hajizadeh, Fedra

    2017-01-01

    The process of interpretation of high-speed optical coherence tomography (OCT) images is restricted due to the large speckle noise. To address this problem, this paper proposes a new method using a two-dimensional (2D) curvelet-based K-SVD algorithm for speckle noise reduction and contrast enhancement of intra-retinal layers of 2D spectral-domain OCT images. For this purpose, we take the curvelet transform of the noisy image. In the next step, noisy sub-bands of different scales and rotations are separately thresholded with an adaptive data-driven thresholding method; then, each thresholded sub-band is denoised based on K-SVD dictionary learning with a variable-size initial dictionary dependent on the size of the curvelet coefficients' matrix in each sub-band. We also modify each coefficient matrix to enhance intra-retinal layers, with noise suppression at the same time. We demonstrate the ability of the proposed algorithm in speckle noise reduction of 100 publicly available OCT B-scans with and without non-neovascular age-related macular degeneration (AMD); improvements of the contrast-to-noise ratio from 1.27 to 5.12 and of the mean-to-standard-deviation ratio from 3.20 to 14.41 are obtained.
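
    The full 2D curvelet-based K-SVD scheme is involved; as a much simpler stand-in for the sub-band thresholding idea, the sketch below applies wavelet soft-thresholding to a log-transformed image using PyWavelets (assumed available). The wavelet, level, and threshold are placeholders rather than the paper's data-driven values.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def wavelet_despeckle(img, wavelet="db4", level=3, thresh=0.05):
    """Soft-threshold the wavelet detail sub-bands of a log-intensity image."""
    log_img = np.log1p(img.astype(np.float64))   # speckle is roughly multiplicative
    coeffs = pywt.wavedec2(log_img, wavelet, level=level)
    new_coeffs = [coeffs[0]]                     # keep the approximation band untouched
    for details in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(d, thresh, mode="soft") for d in details))
    return np.expm1(pywt.waverec2(new_coeffs, wavelet))

clean = wavelet_despeckle(np.random.rand(256, 256))
```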

  1. Tomographic Imaging of a Forested Area By Airborne Multi-Baseline P-Band SAR.

    PubMed

    Frey, Othmar; Morsdorf, Felix; Meier, Erich

    2008-09-24

    In recent years, various attempts have been undertaken to obtain information about the structure of forested areas from multi-baseline synthetic aperture radar data. Tomographic processing of such data has been demonstrated for airborne L-band data, but the quality of the focused tomographic images is limited by several factors. In particular, the common Fourier-based focusing methods are susceptible to irregular and sparse sampling, two problems that are unavoidable in the case of multi-pass, multi-baseline SAR data acquired by an airborne system. In this paper, a tomographic focusing method based on the time-domain back-projection algorithm is proposed, which maintains the geometric relationship between the original sensor positions and the imaged target and is therefore able to cope with irregular sampling without introducing any approximations with respect to the geometry. The tomographic focusing quality is assessed by analysing the impulse response of simulated point targets and an in-scene corner reflector. In particular, several tomographic slices of a volume representing a forested area are given. The respective P-band tomographic data set consisting of eleven flight tracks has been acquired by the airborne E-SAR sensor of the German Aerospace Center (DLR).

  2. Development of an ultra wide band microwave radar based footwear scanning system

    NASA Astrophysics Data System (ADS)

    Rezgui, Nacer Ddine; Bowring, Nicholas J.; Andrews, David A.; Harmer, Stuart W.; Southgate, Matthew J.; O'Reilly, Dean

    2013-10-01

    At airports, security screening can cause long delays. In order to speed up screening, a solution that avoids passengers having to remove their shoes for X-ray scanning is required. To detect threats or contraband items hidden within the shoe, a method of screening using frequency-swept signals between 15 and 40 GHz has been developed, where the scan is carried out whilst the shoes are being worn. Most footwear is transparent to microwaves to some extent in this band. The scans, data processing and interpretation of the 2D image of the cross section of the shoe are completed in a few seconds. Using safe low-power UWB radar, scattered signals from the shoe can be observed which are caused by changes in material properties such as cavities, dielectric or metal objects concealed within the shoe. By moving the transmission horn along the length of the shoe, a 2D image corresponding to a cross section through the footwear is built up, which can be interpreted by the user, or automatically, to reveal the presence of a concealed threat within the shoe. A prototype system with a resolution of 6 mm or less has been developed and results obtained for a wide range of commonly worn footwear, some modified by the inclusion of concealed material. Clear differences between the measured images of modified and unmodified shoes are seen. Procedures for enhancing the image through electronic image synthesis techniques and image processing methods are discussed and preliminary performance data presented.
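
    The essence of a swept-frequency measurement like the one described is that an inverse FFT of the complex frequency response yields a down-range profile, so reflections from soles, cavities, and inserts appear as peaks at their round-trip distances; the roughly 6 mm resolution follows from the 25 GHz bandwidth. A schematic example with hypothetical reflector depths and sampling:

```python
import numpy as np

c = 3e8                                    # speed of light, m/s
freqs = np.linspace(15e9, 40e9, 501)       # swept band, 15-40 GHz (assumed sampling)
targets_m = [0.03, 0.055]                  # hypothetical reflector depths inside a shoe

# Synthesize the complex frequency response of two point reflectors.
resp = sum(np.exp(-1j * 4 * np.pi * freqs * d / c) for d in targets_m)

# Inverse FFT of the windowed sweep gives the down-range profile.
profile = np.abs(np.fft.ifft(resp * np.hanning(freqs.size)))
bw = freqs[-1] - freqs[0]
range_axis = np.arange(freqs.size) * c / (2 * bw)       # approximate range-bin spacing
print("range resolution ~%.1f mm" % (1e3 * c / (2 * bw)))                 # about 6 mm
print("strongest return at %.3f m (expected near 0.03 m)" % range_axis[np.argmax(profile)])
```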

  3. The applicability of FORMOSAT-2 images to coastal waters/bodies classification

    NASA Astrophysics Data System (ADS)

    Teodoro, Ana; Duarte, Lia; Silva, Pedro

    2015-10-01

    FORMOSAT-2, launched in May 2004, is a Taiwanese satellite developed by the National Space Organization (NSPO) of Taiwan. The Remote Sensing Instrument (RSI) is a high spatial-resolution optical sensor onboard FORMOSAT-2 with a 2 m spatial resolution in the panchromatic (PAN) band and an 8 m spatial resolution in four multispectral (MS) bands from the visible to the near-infrared region. The RSI images acquired during the daytime can be used for land cover/use studies, natural and forestry resources, disaster prevention and rescue works. The main objectives of this work were to investigate the application of FORMOSAT-2 data in order to: (1) identify beach patterns; (2) correctly extract a sand spit boundary. Different pixel-based and object-based classification algorithms were applied to four FORMOSAT-2 scenes and the results were compared with the results already obtained in previous works. Analyzing the results obtained, it is possible to conclude that the FORMOSAT-2 data are adequate for the correct identification of beach patterns and for an accurate extraction of the sand spit boundary (Douro river estuary, Porto, Portugal). The results obtained were compared with the results already achieved with IKONOS-2 images. In conclusion, this research has demonstrated that the FORMOSAT-2 data and image processing techniques employed are an effective methodology to identify beach patterns and to correctly extract sand spit boundaries. In the future, more FORMOSAT-2 images will be processed, and the use of pan-sharpened images and data mining algorithms will be considered.

  4. Improved Band-to-Band Registration Characterization for VIIRS Reflective Solar Bands Based on Lunar Observations

    NASA Technical Reports Server (NTRS)

    Wang, Zhipeng; Xiong, Xiaoxiong; Li, Yonghong

    2015-01-01

    Spectral bands of the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite are spatially co-registered. The accuracy of the band-to-band registration (BBR) is one of the key spatial parameters that must be characterized. Unlike its predecessor, the Moderate Resolution Imaging Spectroradiometer (MODIS), VIIRS has no on-board calibrator specifically designed to perform on-orbit BBR characterization. To circumvent this problem, a BBR characterization method for VIIRS reflective solar bands (RSB) based on regularly-acquired lunar images has been developed. While its results can satisfactorily demonstrate that the long-term stability of the BBR is well within +/- 0.1 moderate resolution band pixels, undesired seasonal oscillations have been observed in the trending. The oscillations are most obvious between the visible/near-infrared bands and the short-/mid-wave infrared bands. This paper investigates the oscillations and identifies their cause as the band spectral dependence of the centroid position and the seasonal rotation of the lunar images over calibration events. Accordingly, an improved algorithm is proposed to quantify the rotation and compensate for its impact. After the correction, the seasonal oscillation in the resulting BBR is reduced from up to 0.05 moderate resolution band pixels to around 0.01 moderate resolution band pixels. After removing this spurious seasonal oscillation, the BBR, as well as its long-term drift, is well determined.

  5. Overall evaluation of ERTS-1 imagery for cartographic application

    NASA Technical Reports Server (NTRS)

    Colvocoresses, A. P. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. Significant scientific conclusions are: (1) Bulk RBV's have internal positional accuracy in the order of 70 meters at ground scale while MSS internal accuracy is in the order of 200 to 300 meters. Both have precision processed images with accuracy within 70 meters. (2) Image quality exhibited by detectability and acutance is better than expected and perhaps twice as good as would be achieved by photographic film of the same resolution. (3) Photometric anomalies (shading) have limited RBV multispectral application, but it is believed that these anomalies can be further reduced. (4) The MSS has exceptionally high photometric fidelity but the matching of scenes taken under different conditions of illumination has not been resolved. (5) MSS bands 6 and 7 have enormous potential for surface water mapping including the correlation of shorelines at various water stages. (6) MSS band 7 demonstrates an actual cloud penetration capability beyond what was expected. It also has delineated cultural features better than the other MSS bands under certain conditions.

  6. Remote sensing of cloud, aerosol and water vapor properties from the Moderate Resolution Imaging Spectrometer (MODIS)

    NASA Technical Reports Server (NTRS)

    King, M. D.

    1992-01-01

    The Moderate Resolution Imaging Spectrometer (MODIS) is an Earth-viewing sensor being developed as a facility instrument for the Earth Observing System (EOS) to be launched in the late 1990s. MODIS consists of two separate instruments that scan a swath width sufficient to provide nearly complete global coverage every two days from a polar-orbiting, Sun-synchronous, platform at an altitude of 705 km. Of primary interest for studies of atmospheric physics is the MODIS-N (nadir) instrument which will provide images in 36 spectral bands between 0.415 and 14.235 micrometers with spatial resolutions of 250 m (2 bands), 500 m (5 bands) and 1000 m (29 bands). These bands have been carefully selected to enable advanced studies of land, ocean and atmospheric processes. The intent of this lecture is to describe the current status of MODIS-N and its companion instrument MODIS-T (tilt), a tiltable cross-track scanning radiometer with 32 uniformly spaced channels between 0.410 and 0.875 micrometers, and to describe the physical principles behind the development of MODIS for the remote sensing of atmospheric properties. Primary emphasis will be placed on the main atmospheric applications of determining the optical, microphysical and physical properties of clouds and aerosol particles from spectral-reflection and thermal-emission measurements. In addition to cloud and aerosol properties, MODIS-N will be utilized for the determination of the total precipitable water vapor over land and atmospheric stability. The physical principles behind the determination of each of these atmospheric products will be described herein.

  7. Space Radar Image of Manaus, Brazil

    NASA Technical Reports Server (NTRS)

    1999-01-01

    These two images were created using data from the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR). On the left is a false-color image of Manaus, Brazil acquired April 12, 1994, onboard space shuttle Endeavour. In the center of this image is the Solimoes River just west of Manaus before it combines with the Rio Negro to form the Amazon River. The scene is around 8 by 8 kilometers (5 by 5 miles) with north toward the top. The radar image was produced in L-band where red areas correspond to high backscatter at HH polarization, while green areas exhibit high backscatter at HV polarization. Blue areas show low backscatter at VV polarization. The image on the right is a classification map showing the extent of flooding beneath the forest canopy. The classification map was developed by SIR-C/X-SAR science team members at the University of California, Santa Barbara. The map uses the L-HH, L-HV, and L-VV images to classify the radar image into six categories: red, flooded forest; green, unflooded tropical rain forest; blue, open water (Amazon River); yellow, unflooded fields, some floating grasses; gray, flooded shrubs; black, floating and flooded grasses. Data like these help scientists evaluate flood damage on a global scale. Floods are highly episodic and much of the area inundated is often tree-covered. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.v. (DLR), the major partner in science, operations and data processing of X-SAR.

  8. Improved classification accuracy of powdery mildew infection levels of wine grapes by spatial-spectral analysis of hyperspectral images.

    PubMed

    Knauer, Uwe; Matros, Andrea; Petrovic, Tijana; Zanker, Timothy; Scott, Eileen S; Seiffert, Udo

    2017-01-01

    Hyperspectral imaging is an emerging means of assessing plant vitality, stress parameters, nutrition status, and diseases. Extraction of target values from the high-dimensional datasets either relies on pixel-wise processing of the full spectral information, appropriate selection of individual bands, or calculation of spectral indices. Limitations of such approaches are reduced classification accuracy, reduced robustness due to spatial variation of the spectral information across the surface of the objects measured as well as a loss of information intrinsic to band selection and use of spectral indices. In this paper we present an improved spatial-spectral segmentation approach for the analysis of hyperspectral imaging data and its application for the prediction of powdery mildew infection levels (disease severity) of intact Chardonnay grape bunches shortly before veraison. Instead of calculating texture features (spatial features) for the huge number of spectral bands independently, dimensionality reduction by means of Linear Discriminant Analysis (LDA) was applied first to derive a few descriptive image bands. Subsequent classification was based on modified Random Forest classifiers and selective extraction of texture parameters from the integral image representation of the image bands generated. Dimensionality reduction, integral images, and the selective feature extraction led to improved classification accuracies of up to [Formula: see text] for detached berries used as a reference sample (training dataset). Our approach was validated by predicting infection levels for a sample of 30 intact bunches. Classification accuracy improved with the number of decision trees of the Random Forest classifier. These results corresponded with qPCR results. An accuracy of 0.87 was achieved in classification of healthy, infected, and severely diseased bunches. However, discrimination between visually healthy and infected bunches proved to be challenging for a few samples, perhaps due to colonized berries or sparse mycelia hidden within the bunch or airborne conidia on the berries that were detected by qPCR. An advanced approach to hyperspectral image classification based on combined spatial and spectral image features, potentially applicable to many available hyperspectral sensor technologies, has been developed and validated to improve the detection of powdery mildew infection levels of Chardonnay grape bunches. The spatial-spectral approach improved especially the detection of light infection levels compared with pixel-wise spectral data analysis. This approach is expected to improve the speed and accuracy of disease detection once the thresholds for fungal biomass detected by hyperspectral imaging are established; it can also facilitate monitoring in plant phenotyping of grapevine and additional crops.
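
    The classification chain described above (dimensionality reduction of the spectral bands, then a Random Forest on the reduced representation) can be sketched per pixel with scikit-learn (assumed available); the synthetic spectra, class labels, and hyperparameters below are placeholders, and the paper's texture features from integral images are omitted.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pixels, n_bands = 3000, 120
X = rng.normal(size=(n_pixels, n_bands))          # placeholder pixel spectra
y = rng.integers(0, 3, size=n_pixels)             # toy labels: healthy / infected / severe
X[y == 1] += 0.3
X[y == 2] += 0.6                                  # weak synthetic class separation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
lda = LinearDiscriminantAnalysis(n_components=2).fit(X_tr, y_tr)   # few descriptive bands
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(lda.transform(X_tr), y_tr)
print("accuracy: %.2f" % rf.score(lda.transform(X_te), y_te))
```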

  9. Hierarchical image coding with diamond-shaped sub-bands

    NASA Technical Reports Server (NTRS)

    Li, Xiaohui; Wang, Jie; Bauer, Peter; Sauer, Ken

    1992-01-01

    We present a sub-band image coding/decoding system using a diamond-shaped pyramid frequency decomposition to more closely match visual sensitivities than conventional rectangular bands. Filter banks are composed of simple, low order IIR components. The coder is especially designed to function in a multiple resolution reconstruction setting, in situations such as variable capacity channels or receivers, where images must be reconstructed without the entire pyramid of sub-bands. We use a nonlinear interpolation technique for lost subbands to compensate for loss of aliasing cancellation.

  10. Color composite C-band and L-band image of Kilauea volcano on Hawaii

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This color composite C-band and L-band image of the Kilauea volcano on the Big Island of Hawaii was acquired by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) flying on the Space Shuttle Endeavour. The city of Hilo can be seen at the top. The image shows the different types of lava flows around the crater Pu'u O'o. Ash deposits which erupted in 1790 from the summit of Kilauea volcano show up as dark in this image, and fine details associated with lava flows which erupted in 1919 and 1974 can be seen to the south of the summit in an area called the Ka'u Desert. Other historic lava flows can also be seen. Highway 11 is the linear feature running from Hilo to the Kilauea volcano. The Jet Propulsion Laboratory alternative photo number is P-43918.

  11. Restoration of color in a remote sensing image and its quality evaluation

    NASA Astrophysics Data System (ADS)

    Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Wang, Zhihe

    2003-09-01

    This paper is focused on the restoration of color remote sensing images (including airborne photos). A complete approach is recommended. It proposes that two main aspects should be considered in restoring a remote sensing image: restoration of spatial information and restoration of photometric information. In this proposal, the restoration of spatial information can be performed by using the modulation transfer function (MTF) as the degradation function, in which the MTF is obtained by measuring the edge curve of the original image. The restoration of photometric information can be performed by an improved local maximum entropy algorithm. Furthermore, a valid approach to processing color remote sensing images is recommended: the color image is split into three monochromatic images corresponding to the three visible-light bands, and the three images are synthesized after being processed separately with psychological color vision restriction. Finally, three novel evaluation variables are derived based on image restoration to evaluate the restoration quality in terms of both spatial and photometric restoration quality. An evaluation is provided at the end.

  12. Breadboard linear array scan imager using LSI solid-state technology

    NASA Technical Reports Server (NTRS)

    Tracy, R. A.; Brennan, J. A.; Frankel, D. G.; Noll, R. E.

    1976-01-01

    The performance of large scale integration photodiode arrays in a linear array scan (pushbroom) breadboard was evaluated for application to multispectral remote sensing of the earth's resources. The technical approach, implementation, and test results of the program are described. Several self scanned linear array visible photodetector focal plane arrays were fabricated and evaluated in an optical bench configuration. A 1728-detector array operating in four bands (0.5 - 1.1 micrometer) was evaluated for noise, spectral response, dynamic range, crosstalk, MTF, noise equivalent irradiance, linearity, and image quality. Other results include image artifact data, temporal characteristics, radiometric accuracy, calibration experience, chip alignment, and array fabrication experience. Special studies and experimentation were included in long array fabrication and real-time image processing for low-cost ground stations, including the use of computer image processing. High quality images were produced and all objectives of the program were attained.

  13. Exploiting Satellite Focal Plane Geometry for Automatic Extraction of Traffic Flow from Single Optical Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Krauß, T.

    2014-11-01

    The focal plane assembly of most pushbroom scanner satellites is built up in such a way that the different multispectral, or multispectral and panchromatic, bands are not all acquired at exactly the same time. This effect is due to offsets of a few millimeters between the CCD lines in the focal plane. Exploiting this special configuration allows the detection of objects that move during this small time span. In this paper we present a method for automatic detection and extraction of moving objects - mainly traffic - from single very high resolution optical satellite images of different sensors. The sensors investigated are WorldView-2, RapidEye, Pléiades and also the new SkyBox satellites. Different sensors require different approaches for detecting moving objects. Since the objects are mapped to different positions only in different spectral bands, the change of spectral properties also has to be taken into account. In the case where the main offset in the focal plane lies between the multispectral and the panchromatic CCD lines, as for Pléiades, an approach based on weighted integration to obtain largely identical images is investigated. Other approaches for RapidEye and WorldView-2 are also shown. From these intermediate bands, difference images are calculated, and a method for detecting the moving objects from these difference images is proposed. Based on the presented methods, images from different sensors are processed and the results are assessed for detection quality - how many moving objects are detected and how many are missed - and for accuracy - how accurately the speed and size of the objects are derived. Finally, the results are discussed and an outlook on possible improvements towards operational processing is presented.
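
    The basic idea above, that a small acquisition time lag between bands makes moving objects appear at offset positions, can be illustrated with a difference image and a simple speed estimate; the time lag and ground sample distance below are hypothetical values, not those of any particular sensor.

```python
import numpy as np

def moving_object_mask(band_t0, band_t1, thresh=0.2):
    """Difference of two co-registered, radiometrically matched band images."""
    diff = np.abs(band_t1.astype(float) - band_t0.astype(float))
    return diff > thresh

# Toy example: an object displaced by 4 pixels between the two band acquisitions,
# so it shows up twice in the difference mask (old and new positions).
dt_s, gsd_m = 0.2, 0.5                      # hypothetical band time lag and pixel size
scene = np.zeros((100, 100)); scene[50, 40:44] = 1.0
later = np.roll(scene, 4, axis=1)
mask = moving_object_mask(scene, later)
speed = 4 * gsd_m / dt_s                    # displacement (px) * GSD / time lag
print("estimated speed: %.1f m/s (%.0f km/h)" % (speed, speed * 3.6))
```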

  14. A digital-receiver for the MurchisonWidefield Array

    NASA Astrophysics Data System (ADS)

    Prabu, Thiagaraj; Srivani, K. S.; Roshi, D. Anish; Kamini, P. A.; Madhavi, S.; Emrich, David; Crosse, Brian; Williams, Andrew J.; Waterson, Mark; Deshpande, Avinash A.; Shankar, N. Udaya; Subrahmanyan, Ravi; Briggs, Frank H.; Goeke, Robert F.; Tingay, Steven J.; Johnston-Hollitt, Melanie; R, Gopalakrishna M.; Morgan, Edward H.; Pathikulangara, Joseph; Bunton, John D.; Hampson, Grant; Williams, Christopher; Ord, Stephen M.; Wayth, Randall B.; Kumar, Deepak; Morales, Miguel F.; deSouza, Ludi; Kratzenberg, Eric; Pallot, D.; McWhirter, Russell; Hazelton, Bryna J.; Arcus, Wayne; Barnes, David G.; Bernardi, Gianni; Booler, T.; Bowman, Judd D.; Cappallo, Roger J.; Corey, Brian E.; Greenhill, Lincoln J.; Herne, David; Hewitt, Jacqueline N.; Kaplan, David L.; Kasper, Justin C.; Kincaid, Barton B.; Koenig, Ronald; Lonsdale, Colin J.; Lynch, Mervyn J.; Mitchell, Daniel A.; Oberoi, Divya; Remillard, Ronald A.; Rogers, Alan E.; Salah, Joseph E.; Sault, Robert J.; Stevens, Jamie B.; Tremblay, S.; Webster, Rachel L.; Whitney, Alan R.; Wyithe, Stuart B.

    2015-03-01

    An FPGA-based digital-receiver has been developed for a low-frequency imaging radio interferometer, the Murchison Widefield Array (MWA). The MWA, located at the Murchison Radio-astronomy Observatory (MRO) in Western Australia, consists of 128 dual-polarized aperture-array elements (tiles) operating between 80 and 300 MHz, with a total processed bandwidth of 30.72 MHz for each polarization. Radio-frequency signals from the tiles are amplified and band limited using analog signal conditioning units; sampled and channelized by digital-receivers. The signals from eight tiles are processed by a single digital-receiver, thus requiring 16 digital-receivers for the MWA. The main function of the digital-receivers is to digitize the broad-band signals from each tile, channelize them to form the sky-band, and transport it through optical fibers to a centrally located correlator for further processing. The digital-receiver firmware also implements functions to measure the signal power, perform power equalization across the band, detect interference-like events, and invoke diagnostic modes. The digital-receiver is controlled by high-level programs running on a single-board-computer. This paper presents the digital-receiver design, implementation, current status, and plans for future enhancements.

  15. Computer image processing: Geologic applications

    NASA Technical Reports Server (NTRS)

    Abrams, M. J.

    1978-01-01

    Computer image processing of digital data was performed to support several geological studies. The specific goals were to: (1) relate the mineral content to the spectral reflectance of certain geologic materials, (2) determine the influence of environmental factors, such as atmosphere and vegetation, and (3) improve image processing techniques. For detection of spectral differences related to mineralogy, the technique of band ratioing was found to be the most useful. The influence of atmospheric scattering and methods to correct for the scattering were also studied. Two techniques were used to correct for atmospheric effects: (1) dark object subtraction, (2) normalization of use of ground spectral measurements. Of the two, the first technique proved to be the most successful for removing the effects of atmospheric scattering. A digital mosaic was produced from two side-lapping LANDSAT frames. The advantages were that the same enhancement algorithm can be applied to both frames, and there is no seam where the two images are joined.
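
    Both techniques highlighted above, dark object subtraction and band ratioing, are simple pixel-wise operations. A hedged sketch follows; taking a low percentile as the dark-object value is one common variant, not necessarily the exact procedure used in the study.

```python
import numpy as np

def dark_object_subtract(band, percentile=1.0):
    """Remove an additive haze/path-radiance estimate from one band."""
    dark = np.percentile(band, percentile)    # proxy for the darkest object in the scene
    return np.clip(band - dark, 0, None)

def band_ratio(band_a, band_b):
    """Ratio image that suppresses topographic shading and emphasizes spectral contrast."""
    return band_a / (band_b + 1e-6)

b4 = dark_object_subtract(np.random.rand(64, 64) + 0.1)
b7 = dark_object_subtract(np.random.rand(64, 64) + 0.1)
ratio_47 = band_ratio(b4, b7)                 # e.g., a Landsat-style band ratio
```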

  16. Lifting Scheme DWT Implementation in a Wireless Vision Sensor Network

    NASA Astrophysics Data System (ADS)

    Ong, Jia Jan; Ang, L.-M.; Seng, K. P.

    This paper presents the practical implementation of a Wireless Visual Sensor Network (WVSN) with DWT processing on the visual nodes. A conventional WVSN consists of visual nodes that capture video and transmit it to the base station without processing. Limited network bandwidth restrains the implementation of real-time video streaming from remote visual nodes over wireless communication. Three layers of DWT filters are implemented to process the captured image from the camera. Once all the wavelet coefficients have been produced, it is possible to transmit only the low-frequency band coefficients and obtain an approximate image at the base station. This reduces the amount of power required for transmission. When necessary, transmitting all the wavelet coefficients produces the full-detail image, which is similar to the image captured at the visual nodes, as shown in the sketch below. The visual node combines a CMOS camera, a Xilinx Spartan-3L FPGA and a wireless ZigBee® network that uses the Ember EM250 chip.
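
    The node-side idea above, computing a multi-level DWT and by default transmitting only the low-frequency (approximation) band, is sketched below with PyWavelets as a software stand-in for the FPGA lifting-scheme filters; the wavelet choice and level count are illustrative.

```python
import numpy as np
import pywt  # assumed available; stands in for the FPGA lifting-scheme DWT

def node_side_transform(frame, wavelet="haar", levels=3):
    """Return all coefficients plus the small approximation (LL) band sent by default."""
    coeffs = pywt.wavedec2(frame.astype(np.float64), wavelet, level=levels)
    return coeffs, coeffs[0]                       # coeffs[0] is the LL band

def base_station_preview(ll_band, wavelet="haar", levels=3):
    """Reconstruct an approximate image from the LL band alone (details zeroed)."""
    coeffs = [ll_band]
    shape = ll_band.shape
    for _ in range(levels):
        coeffs.append((np.zeros(shape),) * 3)      # zero the (cH, cV, cD) detail bands
        shape = tuple(2 * s for s in shape)
    return pywt.waverec2(coeffs, wavelet)

frame = np.random.rand(256, 256)
coeffs, ll = node_side_transform(frame)
preview = base_station_preview(ll)
print(ll.shape, preview.shape)                     # (32, 32) (256, 256)
```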

  17. A new approach for fast indexing of hyperspectral image data for knowledge retrieval and mining

    NASA Astrophysics Data System (ADS)

    Clowers, Robert; Dua, Sumeet

    2005-11-01

    Multispectral sensors produce images with a few relatively broad wavelength bands. Hyperspectral remote sensors, on the other hand, collect image data simultaneously in dozens or hundreds of narrow and adjacent spectral bands. These measurements make it possible to derive a continuous spectrum for each image cell, generating an image cube across multiple spectral components. Hyperspectral imaging has sound applications in a variety of areas such as mineral exploration, hazardous waste remediation, habitat mapping, invasive vegetation, ecosystem monitoring, hazardous gas detection, mineral detection, soil degradation, and climate change. This imaging modality has strong potential for transforming the imaging paradigms associated with several design and manufacturing processes. In this paper, we describe a novel approach for fast indexing of multi-dimensional hyperspectral image data, especially for data mining applications. The index exploits the spectral and spatial relationships embedded in these image sets. The index will be employed for knowledge retrieval applications that require fast information interpretation approaches. The index can also be deployed in real-time mission-critical domains, as it is shown to remain fast at the high dimensionality associated with the data. The strength of this index in terms of false dismissals and false alarms will also be demonstrated. The paper will highlight some common applications of this imaging computational paradigm and will conclude with directions for future improvement and investigation.

  18. Standoff concealed weapon detection using a 350-GHz radar imaging system

    NASA Astrophysics Data System (ADS)

    Sheen, David M.; Hall, Thomas E.; Severtsen, Ronald H.; McMakin, Douglas L.; Hatchell, Brian K.; Valdez, Patrick L. J.

    2010-04-01

    The sub-millimeter (sub-mm) wave frequency band from 300 - 1000 GHz is currently being developed for standoff concealed weapon detection imaging applications. This frequency band is of interest due to the unique combination of high resolution and clothing penetration. The Pacific Northwest National Laboratory (PNNL) is currently developing a 350 GHz, active, wideband, three-dimensional, radar imaging system to evaluate the feasibility of active sub-mm imaging for standoff detection. Standoff concealed weapon and explosive detection is a pressing national and international need for both civilian and military security, as it may allow screening at safer distances than portal screening techniques. PNNL has developed a prototype active wideband 350 GHz radar imaging system based on a wideband, heterodyne, frequency-multiplier-based transceiver system coupled to a quasi-optical focusing system and high-speed rotating conical scanner. This prototype system operates at ranges up to 10+ meters, and can acquire an image in 10 - 20 seconds, which is fast enough to scan cooperative personnel for concealed weapons. The wideband operation of this system provides accurate ranging information, and the images obtained are fully three-dimensional. During the past year, several improvements to the system have been designed and implemented, including increased imaging speed using improved balancing techniques, wider bandwidth, and improved image processing techniques. In this paper, the imaging system is described in detail and numerous imaging results are presented.

  19. BOREAS RSS-14 Level-1 GOES-8 Visible, IR and Water Vapor Images

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Faysash, David; Cooper, Harry J.; Smith, Eric A.; Newcomer, Jeffrey A.

    2000-01-01

    The BOREAS RSS-14 team collected and processed several GOES-7 and GOES-8 image data sets that covered the BOREAS study region. The level-1 BOREAS GOES-8 images are raw data values collected by RSS-14 personnel at FSU and delivered to BORIS. The data cover 14-Jul-1995 to 21-Sep-1995 and 01-Jan-1996 to 03-Oct-1996. The data start out containing three 8-bit spectral bands and end up containing five 10-bit spectral bands. No major problems with the data have been identified. The data are contained in binary image format files. Due to the large size of the images, the level-1 GOES-8 data are not contained on the BOREAS CD-ROM set. An inventory listing file is supplied on the CD-ROM to inform users of what data were collected. The level-1 GOES-8 image data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). See sections 15 and 16 for more information. The data files are available on a CD-ROM (see document number 20010000884).

  20. Preliminary study of the Suomi NPP VIIRS detector-level spectral response function effects for the long-wave infrared bands M15 and M16

    NASA Astrophysics Data System (ADS)

    Padula, Francis; Cao, Changyong

    2014-09-01

    The Suomi NPP Visible Infrared Imaging Radiometer Suite (VIIRS) Sea Surface Temperature (SST) Environmental Data Record (EDR) team observed an anomalous striping pattern in the SST data. To assess possible causes due to the detector-level Spectral Response Functions (SRFs), a study was conducted to compare the radiometric response of the detector-level and operation band averaged SRFs of VIIRS bands M15 & M16 using simulated blackbody radiance data and clear-sky ocean radiances under different atmospheric conditions. It was concluded that the SST product is likely impacted by small differences in detector-level SRFs, and that if users require optimal system performance detector-level processing is recommended. Future work will investigate potential SDR product improvements through detector-level processing in support of the generation of Suomi NPP VIIRS climate quality SDRs.

  1. Development of a fusion approach selection tool

    NASA Astrophysics Data System (ADS)

    Pohl, C.; Zeng, Y.

    2015-06-01

    During the last decades, the number and quality of available remote sensing satellite sensors for Earth observation have grown significantly. The amount of available multi-sensor imagery, along with its increased spatial and spectral resolution, provides new challenges to Earth scientists. With a Fusion Approach Selection Tool (FAST), the remote sensing community would obtain access to an optimized and improved image processing technology. Remote sensing image fusion is a means of producing images containing information that is not inherent in any single image alone. The user now has access to sophisticated commercial image fusion techniques, plus the option to tune the parameters of each individual technique to match the anticipated application. This leaves the operator with an uncountable number of options for combining remote sensing images, not to mention the selection of the appropriate images, resolutions and bands. Image fusion can be a machine- and time-consuming endeavour. In addition, it requires knowledge about remote sensing, image fusion, digital image processing and the application. FAST shall provide the user with a quick overview of processing flows to choose from to reach the target. FAST will ask for the available images, application parameters and desired information, and process this input to produce a workflow for quickly obtaining the best results. It will optimize data and image fusion techniques. It provides an overview of the possible results from which the user can choose the best. FAST will enable even inexperienced users to use advanced processing methods to maximize the benefit of multi-sensor image exploitation.

  2. Review of the Applications of Formosat-2 on Rapidly Responding to Global Disasters and Monitoring Earth Environment

    NASA Astrophysics Data System (ADS)

    Liu, C.

    2009-12-01

    Formosat-2 is the first satellite in the world with a high-spatial-resolution sensor deployed in a daily-revisit orbit. Together with its agility of pointing ±45 degrees both across and along track, this allows each accessible scene to be observed from the same angle under similar illumination conditions. These characteristics make Formosat-2 an ideal satellite for site surveillance. We developed a Formosat-2 automatic image processing system (F-2 AIPS) that can accurately and rapidly process a large number of Formosat-2 images to produce higher-level products, including rigorous band-to-band coregistration, automatic orthorectification, multi-temporal image coregistration and radiance normalization, and pan-sharpening. This system has been successfully employed to respond rapidly to many international disaster events in the past five years, including the flood caused by Typhoon Mindulle (2004), the landslide caused by Typhoon Aere (2004), the South Asia earthquake and tsunami (2004), Hurricane Katrina (2005), the California wildfire (2007), the Sichuan earthquake (2008), Typhoon Kalmaegi (2008), Typhoon Sinlaku (2008), the Mountain Ali wildfire (2009), the Victoria bushfire in Australia (2009), the Honduras earthquake (2009), and Typhoon Morakot (2009). This paper reviews the applications of Formosat-2 in rapidly responding to global disasters and monitoring the Earth's environment.

  3. Midwave infrared and visible sensor performance modeling: small craft identification discrimination criteria for maritime security

    NASA Astrophysics Data System (ADS)

    Krapels, Keith; Driggers, Ronald G.; Deaver, Dawne; Moker, Steven K.; Palmer, John

    2007-10-01

    The new emphasis on Anti-Terrorism and Force Protection (AT/FP), for both shore and sea platform protection, has resulted in a need for infrared imager design and evaluation tools that demonstrate field performance against U.S. Navy AT/FP requirements. In the design of infrared imaging systems for target acquisition, a discrimination criterion is required for successful sensor realization. It characterizes the difficulty of the task being performed by the observer and varies for different target sets. This criterion is used both in the assessment of existing infrared sensors and in the design of new conceptual sensors. We collected 12 small craft signatures (military and civilian) in the visible band during the day and in the long-wave and midwave infrared bands in both day and night environments. These signatures were processed to determine the targets' characteristic dimension and contrast. They were also processed to band-limit the signatures' spatial information content (simulating longer range), and a perception experiment was performed to determine the task difficulty (N50 and V50). The results are presented and can be used for Navy and Coast Guard imaging infrared sensor design and evaluation.

  4. Optical processing of MMW for agile beamsteering and beamforming

    NASA Astrophysics Data System (ADS)

    Sadovnik, Lev

    1994-02-01

    There is little doubt that an electronically steered, stationary scanning antenna offers significant advantages over any gimbaled installation, especially for tactical missile seekers. A scanning phased array antenna reduces the demands on space and power and makes better use of the space available. It is widely believed that the millimeter wave (MMW) band (W-band) is the region of the RF spectrum which provides the best angular resolution. In fact, MMW sensor and seeker technologies have made significant advances in recent years, demonstrating their suitability for autonomous adverse weather, battlefield, and smart munition applications. Moreover, only the use of the W-band can produce an active seeker with imaging capability for on-board target identification.

  5. Monitoring of coalbed water retention ponds in the Powder River Basin using Google Earth images and an Unmanned Aircraft System

    NASA Astrophysics Data System (ADS)

    Zhou, X.; Zhou, Z.; Apple, M. E.; Spangler, L.

    2016-12-01

    To extract methane from unminable coal seams in the Powder River Basin of Montana and Wyoming, coalbed methane (CBM) water has to be pumped and kept in retention ponds rather than discharged to the vadose zone to mix with the ground water. The areal water coverage of these ponds changes due to evaporation and repeated refilling. The water quality also changes due to the growth of microalgae (unicellular or filamentous, including green algae and diatoms), evaporation, and refilling. Estimating changes in water coverage and monitoring water quality are therefore important for managing the CBM water retention ponds and providing a timely management plan for newly pumped CBM water. Conventional methods, such as various water indices based on multi-spectral satellite data such as Landsat, do not work well here because of the small pond size (on the order of 100 m x 100 m) and the low spatial resolution (on the order of 30 m) of the satellite data. In this study we present new methods to estimate water coverage and water quality changes using Google Earth images and images collected from an unmanned aircraft system (UAS) (Phantom 2 plus). Because these images have only visible bands (red, green, and blue), the conventional water index methods that involve near-infrared bands do not work. We designed a new method based only on the visible bands to automatically extract water pixels, using the intensity of each water pixel as a proxy for water quality, after a series of image processing steps such as georeferencing, resampling, and filtering. Differential GPS positions along the water edges were collected on the same day as the UAS images. The water area was calculated from the GPS positions and used to validate the method. Because of the very high resolution (on the order of 10-30 cm), the water areal coverage and water quality distribution can be estimated accurately. Since the UAS can be flown at any time, water area and quality information can be collected in a timely manner.
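
    The abstract does not spell out the visible-band water detector, so the following is only a minimal sketch of one plausible formulation: a normalized green-red difference combined with a brightness cap, applied to an RGB orthophoto scaled to [0, 1]. The function names, thresholds, and the index itself are illustrative assumptions rather than the authors' method.

    ```python
    import numpy as np

    def water_mask_rgb(rgb, index_thresh=0.05, brightness_max=0.6):
        """Rough water mask from an (H, W, 3) RGB image scaled to [0, 1].

        Assumes water looks relatively dark and slightly green/blue, so a
        normalized green-red difference plus a brightness cap is used.
        """
        r, g = rgb[..., 0], rgb[..., 1]
        ngrdi = (g - r) / (g + r + 1e-6)      # normalized green-red difference
        brightness = rgb.mean(axis=-1)        # crude luminance proxy
        return (ngrdi > index_thresh) & (brightness < brightness_max)

    def pond_area_m2(mask, gsd_m=0.2):
        """Water area from the mask, given the ground sampling distance (m/pixel)."""
        return mask.sum() * gsd_m ** 2
    ```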

  6. Image fusion based on millimeter-wave for concealed weapon detection

    NASA Astrophysics Data System (ADS)

    Zhu, Weiwen; Zhao, Yuejin; Deng, Chao; Zhang, Cunlin; Zhang, Yalin; Zhang, Jingshui

    2010-11-01

    This paper describes a novel multi-sensor image fusion technique for concealed weapon detection (CWD). Because clothing is largely transparent in the millimeter wave band, a millimeter wave radiometer can be used to image and distinguish contraband concealed beneath clothes, for example guns, knives, and detonators. We therefore adopt passive millimeter wave (PMMW) imaging technology for airport security. However, owing to the wavelength of millimeter waves and the single-channel mechanical scanning, the millimeter wave image has low optical resolution, which cannot meet the needs of practical applications. Therefore, a visible image (VI), which has higher resolution, is fused with the millimeter wave image to enhance readability. Before the image fusion, a novel image pre-processing step specific to the fusion of millimeter wave and visible images is applied. In the fusion itself, multi-resolution analysis (MRA) based on the Wavelet Transform (WT) is adopted. The experimental results show that this method has advantages for concealed weapon detection and has practical significance.
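
    The abstract names wavelet-based multi-resolution analysis as the fusion engine but does not give the fusion rule, so the sketch below uses a common choice (average the approximation band, keep the larger-magnitude detail coefficients) with PyWavelets; the rule, wavelet, and decomposition level are assumptions. Both inputs must already be co-registered and equally sized.

    ```python
    import numpy as np
    import pywt

    def wavelet_fuse(img_a, img_b, wavelet="db2", level=3):
        """Fuse two co-registered grayscale images of equal size via wavelet MRA."""
        ca = pywt.wavedec2(img_a, wavelet, level=level)
        cb = pywt.wavedec2(img_b, wavelet, level=level)
        pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
        fused = [0.5 * (ca[0] + cb[0])]                   # approximation: average
        for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
            fused.append((pick(ha, hb), pick(va, vb), pick(da, db)))
        return pywt.waverec2(fused, wavelet)              # details: max magnitude
    ```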

  7. CHARACTERIZING THE ATMOSPHERES OF THE HR8799 PLANETS WITH HST/WFC3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajan, Abhijith; Patience, Jennifer; Barman, Travis

    We present results from a Hubble Space Telescope (HST) program characterizing the atmospheres of the outer two planets in the HR8799 system. The images were taken over 15 orbits in three near-infrared (near-IR) medium-band filters—F098M, F127M, and F139M—using the Wide Field Camera 3. One of the three filters is sensitive to a water absorption band inaccessible from ground-based observations, providing a unique probe of the thermal emission from the atmospheres of these young giant planets. The observations were taken at 30 different spacecraft rolls to enable angular differential imaging (ADI), and the full data set was analyzed with the Karhunen–Loève Image Projection routine, an advanced image processing algorithm adapted to work with HST data. To achieve the required high contrast at subarcsecond resolution, we utilized the pointing accuracy of HST in combination with an improved pipeline designed to combine the dithered ADI data with an algorithm designed to both improve the image resolution and accurately measure the photometry. The results include F127M (J) detections of the outer planets, HR8799b and c, and the first detection of HR8799b in the water-band (F139M) filter. The F127M photometry for HR8799c agrees well with fitted atmospheric models, resolving the longstanding difficulty in consistently modeling the near-IR flux of the planet.

  8. VizieR Online Data Catalog: Merging galaxies with tidal tails in COSMOS to z=1 (Wen+, 2016)

    NASA Astrophysics Data System (ADS)

    Wen, Z. Z.; Zheng, X. Z.

    2017-02-01

    Our study utilizes the public data and catalogs from multi-band deep surveys of the COSMOS field. The UltraVISTA survey (McCracken+ 2012, J/A+A/544/A156) provides ultra-deep near-IR imaging observations of this field in the Y, J, H, and Ks bands, as well as a narrow band (NB118). The HST/ACS I-band imaging data are publicly available, allowing us to measure morphologies in the rest-frame optical for galaxies at z<=1. The HST/ACS I-band images reach a 5σ depth of 27.2 magnitudes for point sources. (1 data file).

  9. Optimal Band Ratio Analysis of WORLDVIEW-3 Imagery for Bathymetry of Shallow Rivers (case Study: Sarca River, Italy)

    NASA Astrophysics Data System (ADS)

    Niroumand-Jadidi, M.; Vitti, A.

    2016-06-01

    The Optimal Band Ratio Analysis (OBRA) can be considered an efficient technique for bathymetry from optical imagery owing to its robustness to substrate variability. This point is particularly relevant for very shallow rivers, where different substrate types can contribute considerably to the total at-sensor radiance. OBRA examines all possible pairs of spectral bands in order to identify the optimal two-band ratio whose log transformation yields a strong linear relation with field-measured water depths. This paper investigates the effectiveness of the additional spectral bands of the newly launched WorldView-3 (WV-3) imagery in the visible and NIR spectrum, through OBRA, for retrieving water depths in shallow rivers. To this end, OBRA is performed on a WV-3 image as well as a GeoEye image of a small Alpine river in Italy. In-situ depths were gathered in two river reaches using a precise GPS device. In each testing scenario, 50% of the field data is used for calibration of the model and the remainder as independent check points for accuracy assessment. In general, the effect of changes in water depth is most pronounced at longer wavelengths (i.e. NIR), owing to the strong and rapid absorption of light in this spectral region, as long as the signal is not saturated. As the studied river is shallow, the NIR portion of the spectrum is not attenuated so strongly that it fails to reach the riverbed; using the observed radiance over this spectral range as the denominator shows a strong correlation through OBRA. More specifically, the tightly focused red-edge, NIR-1 and NIR-2 channels provide a wealth of choices for OBRA compared with the single NIR band of conventional 4-band images (e.g. GeoEye). This advantage of WV-3 images also applies to choosing the optimal numerator of the ratio model. The coastal-blue and yellow bands of WV-3 are identified as suitable numerators, while only the green band of the GeoEye image yields a reliable correlation between image-derived values and field-measured depths. According to the results, the additional and narrow spectral bands of the WV-3 image lead to an average determination coefficient of 67% in the two river segments, which is 10% higher than that obtained from the 4-band GeoEye image. In addition, the RMSEs of the depth estimates are 4 cm and 6 cm for the WV-3 and GeoEye images, respectively, considering the optimal band ratio.
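
    The core OBRA search described above reduces to a small loop: for every ordered band pair, regress the field depths against ln(b_i/b_j) and keep the pair with the highest coefficient of determination. The sketch below is a generic, hedged rendering of that idea in NumPy; the exact regression setup and any masking of saturated NIR pixels used by the authors are not reproduced.

    ```python
    import numpy as np
    from itertools import permutations

    def obra(band_values, depths):
        """band_values: (n_points, n_bands) radiance/reflectance at field sites;
        depths: (n_points,) measured depths. Returns the best (numerator,
        denominator) band indices, the fitted (slope, intercept), and R^2 for
        depth = slope * ln(b_i / b_j) + intercept."""
        best = None
        for i, j in permutations(range(band_values.shape[1]), 2):
            x = np.log(band_values[:, i] / band_values[:, j])
            slope, intercept = np.polyfit(x, depths, 1)
            r2 = np.corrcoef(x, depths)[0, 1] ** 2
            if best is None or r2 > best[2]:
                best = ((i, j), (slope, intercept), r2)
        return best
    ```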

  10. Ortho-Rectification of Narrow Band Multi-Spectral Imagery Assisted by Dslr RGB Imagery Acquired by a Fixed-Wing Uas

    NASA Astrophysics Data System (ADS)

    Rau, J.-Y.; Jhan, J.-P.; Huang, C.-Y.

    2015-08-01

    The Miniature Multiple Camera Array (MiniMCA-12) is a frame-based multilens/multispectral sensor composed of 12 lenses with narrow-band filters. Owing to its small size and light weight, it is suitable for mounting on an Unmanned Aerial System (UAS) to acquire imagery of high spectral, spatial and temporal resolution for various remote sensing applications. However, because each band's wavelength range is only 10 nm, the images have low resolution and signal-to-noise ratio, which makes them unsuitable for image matching and digital surface model (DSM) generation. At the same time, the spectral correlation among the 12 bands of MiniMCA images is low, so it is difficult to perform tie-point matching and aerial triangulation across all bands at once. In this study, we therefore propose the use of a DSLR camera to assist automatic aerial triangulation of MiniMCA-12 imagery and to produce a higher spatial resolution DSM for MiniMCA-12 ortho-image generation. Depending on the maximum payload weight of the UAS used, the two sensors can be flown at the same time or individually. In this study, we adopt a fixed-wing UAS carrying a Canon EOS 5D Mark II DSLR camera and a MiniMCA-12 multi-spectral camera. To perform automatic aerial triangulation between the DSLR camera and the MiniMCA-12, we choose one master band from the MiniMCA-12 whose spectral range overlaps with that of the DSLR camera. However, because the lenses of the MiniMCA-12 have different perspective centers and viewing angles, the original 12 channels exhibit a significant band misregistration effect. Thus, the first issue encountered is to reduce this band misregistration. Because all 12 MiniMCA lenses are frame-based, their spatial offsets are smaller than 15 cm and the images overlap by almost 98%; we therefore propose a modified projective transformation (MPT) method, together with two systematic error correction procedures, to register all 12 bands of imagery in the same image space. This means that the 12 band images acquired at the same exposure time will have the same interior orientation parameters (IOPs) and exterior orientation parameters (EOPs) after band-to-band registration (BBR). In the aerial triangulation stage, the master band of the MiniMCA-12 is treated as a reference channel to link with the DSLR RGB images, i.e. all reference images from the master band of the MiniMCA-12 and all RGB images are triangulated at the same time in the same coordinate system of ground control points (GCPs). Because the spatial resolution of the RGB images is higher than that of the MiniMCA-12, the GCPs can be marked on the RGB images only, even when they cannot be recognized in the MiniMCA images. Furthermore, a one-meter gridded digital surface model (DSM) is created from the RGB images and applied to the MiniMCA imagery for ortho-rectification. Quantitative error analyses show that the proposed BBR scheme can achieve an average misregistration residual length of 0.33 pixels, and that the co-registration errors among the 12 MiniMCA ortho-images, and between the MiniMCA and Canon RGB ortho-images, are all less than 0.6 pixels. The experimental results demonstrate that the proposed method is robust, reliable and accurate for future remote sensing applications.

  11. Operationalizing a Research Sensor: MODIS to VIIRS

    NASA Astrophysics Data System (ADS)

    Grant, K. D.; Miller, S. W.; Puschell, J.

    2012-12-01

    The National Oceanic and Atmospheric Administration (NOAA) and NASA are jointly acquiring the next-generation civilian environmental satellite system: the Joint Polar Satellite System (JPSS). JPSS will replace the afternoon orbit component and ground processing system of the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA. The JPSS satellite will carry a suite of sensors designed to collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The primary sensor for the JPSS mission is the Visible/Infrared Imager Radiometer Suite (VIIRS) developed by Raytheon Space and Airborne Systems (SAS). The ground processing system for the JPSS mission is known as the Common Ground System (JPSS CGS), and consists of a Command, Control, and Communications Segment (C3S) and the Interface Data Processing Segment (IDPS), both of which are developed by Raytheon Intelligence and Information Systems (IIS). The Moderate Resolution Imaging Spectroradiometer (MODIS) was developed by Raytheon SAS for the NASA Earth Observing System (EOS) as a research instrument to capture data in 36 spectral bands, ranging in wavelength from 0.4 μm to 14.4 μm and at varying spatial resolutions (2 bands at 250 m, 5 bands at 500 m and 29 bands at 1 km). MODIS data provides unprecedented insight into large-scale Earth system science questions related to cloud and aerosol characteristics, surface emissivity and processes occurring in the oceans, on land, and in the lower atmosphere. MODIS has flown on the EOS Terra satellite since 1999 and on the EOS Aqua satellite since 2002 and has provided excellent data for scientific research and operational use for more than a decade. The value of MODIS-derived products for operational environmental monitoring motivated the development of an operational counterpart to MODIS for the next-generation polar-orbiting environmental satellites, the Visible/Infrared Imager Radiometer Suite (VIIRS). VIIRS combines the demonstrated high-value spectral coverage and radiometric accuracy of MODIS with the legacy spectral bands and radiometric accuracy of the Advanced Very High Resolution Radiometer (AVHRR) and the high spatial resolution (0.75 km) of the Operational Linescan System (OLS). Except for MODIS bands designed for deriving vertical temperature and humidity structure in the atmosphere, VIIRS uses identical or very similar bands from MODIS that have the most interest and usefulness to operational customers in NOAA, the USAF and the USN. The development of VIIRS and JPSS reaps the benefit of investments in MODIS and the NASA EOS and the early development of operational algorithms by NOAA and DoD using MODIS data. This presentation will cover the different aspects of transitioning a research system into an operational system. These aspects include: (1) sensor (hardware & software) operationalization, (2) system performance operational factors, (3) science changes to algorithms reflecting the operational performance factors, and (4) the operationalization and incorporation of the science into a fully 24 x 7 production system, tasked with meeting stringent operational needs. Benefits of early operationalization are discussed along with suggested areas for improvement in this process that could benefit future work such as operationalizing Earth Science Decadal Survey missions.

  12. Full-frame, programmable hyperspectral imager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Love, Steven P.; Graff, David L.

    A programmable, many-band spectral imager based on addressable spatial light modulators (ASLMs), such as micro-mirror-, micro-shutter- or liquid-crystal arrays, is described. Capable of collecting at once, without scanning, a complete two-dimensional spatial image with ASLM spectral processing applied simultaneously to the entire image, the invention employs optical assemblies wherein light from all image points is forced to impinge at the same angle onto the dispersing element, eliminating interplay between spatial position and wavelength. This is achieved, as examples, using telecentric optics to image light at the required constant angle, or with micro-optical array structures, such as micro-lens or capillary arrays, that aim the light on a pixel-by-pixel basis. Light of a given wavelength then emerges from the disperser at the same angle for all image points, is collected at a unique location for simultaneous manipulation by the ASLM, then recombined with other wavelengths to form a final spectrally-processed image.

  13. LANDSAT 4 investigations of Thematic Mapper and multispectral scanner applications. [Death Valley, California; Silver Bell Copper Mine, Arizona, and Dulles Airport near Washington, D.C.

    NASA Technical Reports Server (NTRS)

    Lauer, D. T. (Principal Investigator)

    1984-01-01

    The optimum index factor package was used to choose TM bands for color compositing. Processing techniques were also used on TM data over several sites to: (1) reduce the amount of data that needs to be processed and analyzed by using statistical methods or by combining full-resolution products with spatially compressed products; (2) digitally process small subareas to improve the visual appearance of large-scale products or to merge different-resolution image data; and (3) evaluate and compare the information content of the different three-band combinations that can be made using the TM data. Results indicate that for some applications the added spectral information over MSS is even more important than the TM's increased spatial resolution.
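
    For reference, the optimum index factor (OIF) commonly used for this kind of band selection scores each three-band combination as the sum of the band standard deviations divided by the sum of the absolute pairwise correlation coefficients. The sketch below is a generic implementation of that standard formula, not the specific package referred to in this record.

    ```python
    import numpy as np
    from itertools import combinations

    def optimum_index_factor(bands):
        """Rank 3-band subsets of a (n_bands, rows, cols) stack by
        OIF = sum(band std devs) / sum(|pairwise correlations|)."""
        flat = bands.reshape(bands.shape[0], -1).astype(float)
        stds = flat.std(axis=1)
        corr = np.corrcoef(flat)
        scores = []
        for i, j, k in combinations(range(bands.shape[0]), 3):
            denom = abs(corr[i, j]) + abs(corr[i, k]) + abs(corr[j, k])
            scores.append(((i, j, k), (stds[i] + stds[j] + stds[k]) / denom))
        return sorted(scores, key=lambda s: s[1], reverse=True)
    ```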

  14. Real-time object tracking based on scale-invariant features employing bio-inspired hardware.

    PubMed

    Yasukawa, Shinsuke; Okuno, Hirotsugu; Ishii, Kazuo; Yagi, Tetsuya

    2016-09-01

    We developed a vision sensor system that performs a scale-invariant feature transform (SIFT) in real time. To apply the SIFT algorithm efficiently, we focus on a two-fold process performed by the visual system: whole-image parallel filtering and frequency-band parallel processing. The vision sensor system comprises an active pixel sensor, a metal-oxide semiconductor (MOS)-based resistive network, a field-programmable gate array (FPGA), and a digital computer. We employed the MOS-based resistive network for instantaneous spatial filtering and a configurable filter size. The FPGA is used to pipeline process the frequency-band signals. The proposed system was evaluated by tracking the feature points detected on an object in a video. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Co-registered Topographical, Band Excitation Nanomechanical, and Mass Spectral Imaging Using a Combined Atomic Force Microscopy/Mass Spectrometry Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ovchinnikova, Olga S.; Tai, Tamin; Bocharova, Vera

    The advancement of a hybrid atomic force microscopy/mass spectrometry imaging platform demonstrating for the first time co-registered topographical, band excitation nanomechanical, and mass spectral imaging of a surface using a single instrument is reported. The mass spectrometry-based chemical imaging component of the system utilized nanothermal analysis probes for pyrolytic surface sampling followed by atmospheric pressure chemical ionization of the gas phase species produced with subsequent mass analysis. We discuss the basic instrumental setup and operation and the multimodal imaging capability and utility are demonstrated using a phase separated polystyrene/poly(2-vinylpyridine) polymer blend thin film. The topography and band excitation images showed that the valley and plateau regions of the thin film surface were comprised primarily of one of the two polymers in the blend with the mass spectral chemical image used to definitively identify the polymers at the different locations. Data point pixel size for the topography (390 nm x 390 nm), band excitation (781 nm x 781 nm), mass spectrometry (690 nm x 500 nm) images was comparable and submicrometer in all three cases, but the data voxel size for each of the three images was dramatically different. The topography image was uniquely a surface measurement, whereas the band excitation image included information from an estimated 10 nm deep into the sample and the mass spectral image from 110-140 nm in depth. Moreover, because of this dramatic sampling depth variance, some differences in the band excitation and mass spectrometry chemical images were observed and were interpreted to indicate the presence of a buried interface in the sample. The spatial resolution of the mass spectral image was estimated to be between 1.5 μm and 2.6 μm, based on the ability to distinguish surface features in that image that were also observed in the other images.

  16. Co-registered Topographical, Band Excitation Nanomechanical, and Mass Spectral Imaging Using a Combined Atomic Force Microscopy/Mass Spectrometry Platform

    DOE PAGES

    Ovchinnikova, Olga S.; Tai, Tamin; Bocharova, Vera; ...

    2015-03-18

    The advancement of a hybrid atomic force microscopy/mass spectrometry imaging platform demonstrating for the first time co-registered topographical, band excitation nanomechanical, and mass spectral imaging of a surface using a single instrument is reported. The mass spectrometry-based chemical imaging component of the system utilized nanothermal analysis probes for pyrolytic surface sampling followed by atmospheric pressure chemical ionization of the gas phase species produced with subsequent mass analysis. We discuss the basic instrumental setup and operation and the multimodal imaging capability and utility are demonstrated using a phase separated polystyrene/poly(2-vinylpyridine) polymer blend thin film. The topography and band excitation images showed that the valley and plateau regions of the thin film surface were comprised primarily of one of the two polymers in the blend with the mass spectral chemical image used to definitively identify the polymers at the different locations. Data point pixel size for the topography (390 nm x 390 nm), band excitation (781 nm x 781 nm), mass spectrometry (690 nm x 500 nm) images was comparable and submicrometer in all three cases, but the data voxel size for each of the three images was dramatically different. The topography image was uniquely a surface measurement, whereas the band excitation image included information from an estimated 10 nm deep into the sample and the mass spectral image from 110-140 nm in depth. Moreover, because of this dramatic sampling depth variance, some differences in the band excitation and mass spectrometry chemical images were observed and were interpreted to indicate the presence of a buried interface in the sample. The spatial resolution of the mass spectral image was estimated to be between 1.5 μm and 2.6 μm, based on the ability to distinguish surface features in that image that were also observed in the other images.

  17. Near-UV Sources in the Hubble Ultra Deep Field: The Catalog

    NASA Technical Reports Server (NTRS)

    Gardner, Jonathan P.; Voyrer, Elysse; de Mello, Duilia F.; Siana, Brian; Quirk, Cori; Teplitz, Harry I.

    2009-01-01

    The catalog from the first high resolution U-band image of the Hubble Ultra Deep Field, taken with Hubble's Wide Field Planetary Camera 2 through the F300W filter, is presented. We detect 96 U-band objects and compare and combine this catalog with a Great Observatories Origins Deep Survey (GOODS) B-selected catalog that provides B, V, i, and z photometry, spectral types, and photometric redshifts. We have also obtained Far-Ultraviolet (FUV, 1614 Angstroms) data with Hubble's Advanced Camera for Surveys Solar Blind Channel (ACS/SBC) and with Galaxy Evolution Explorer (GALEX). We detected 31 sources with ACS/SBC, 28 with GALEX/FUV, and 45 with GALEX/NUV. The methods of observations, image processing, object identification, catalog preparation, and catalog matching are presented.

  18. Single sensor processing to obtain high resolution color component signals

    NASA Technical Reports Server (NTRS)

    Glenn, William E. (Inventor)

    2010-01-01

    A method for generating color video signals representative of color images of a scene includes the following steps: focusing light from the scene on an electronic image sensor via a filter having a tri-color filter pattern; producing, from outputs of the sensor, first and second relatively low resolution luminance signals; producing, from outputs of the sensor, a relatively high resolution luminance signal; producing, from a ratio of the relatively high resolution luminance signal to the first relatively low resolution luminance signal, a high band luminance component signal; producing, from outputs of the sensor, relatively low resolution color component signals; and combining each of the relatively low resolution color component signals with the high band luminance component signal to obtain relatively high resolution color component signals.
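
    One plausible reading of the combination step in this patent abstract is multiplicative: the high-band luminance component (the ratio of the high-resolution luminance to the upsampled low-resolution luminance) scales each upsampled low-resolution color component, much like ratio-based pan-sharpening. The sketch below illustrates that reading only; the array names and the multiplicative combination are assumptions, not the claimed implementation.

    ```python
    import numpy as np

    def sharpen_color_components(y_high, y_low_up, color_low_up, eps=1e-6):
        """y_high: (H, W) high-resolution luminance; y_low_up: (H, W)
        low-resolution luminance upsampled to full size; color_low_up:
        (C, H, W) low-resolution color components upsampled to full size.
        Returns (C, H, W) sharpened color components."""
        high_band = y_high / (y_low_up + eps)        # high-band luminance component
        return color_low_up * high_band[None, :, :]  # assumed multiplicative combine
    ```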

  19. Space Radar Image of Bahia

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a color composite image of southern Bahia, Brazil, centered at 15.22 degrees south latitude and 39.07 degrees west longitude. The image was acquired by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar aboard the space shuttle Endeavour on its 38th orbit of Earth on October 2, 1994. The image covers an area centered over the Una Biological Reserve, one of the largest protected areas in northeastern Brazil. The 7,000-hectare reserve is administered by the Brazilian Institute for the Environment and is part of the larger Atlantic coastal forest, a narrow band of rain forest extending along the eastern coast of Brazil. The Atlantic coastal forest of southern Bahia is one of the world's most threatened and diverse ecosystems. Due to widespread settlement, only 2 to 5 percent of the original forest cover remains. Yet the region still contains an astounding variety of plants and animals, including a large number of endemic species. More than half of the region's tree species and 80 percent of its animal species are indigenous and found nowhere else on Earth. The Una Reserve is also the only federally protected habitat for the golden-headed lion tamarin, the yellow-breasted capuchin monkey and many other endangered species. In the past few years, scientists from Brazilian and international conservation organizations have coordinated efforts to study the biological diversity of this region and to develop practical and economically viable options for preserving the remaining primary forests in southern Bahia. The shuttle imaging radar is used in this study to identify various land uses and vegetation types, including remaining patches of primary forest, cabruca forest (cacao planted in the understory of the native forest), secondary forest, pasture and coastal mangrove. Standard remote-sensing technology that relies on light reflected from the forest canopy cannot accurately distinguish between cabruca and undisturbed forest. Optical remote sensing is also limited by the nearly continuous cloud cover in the region and heavy rainfall, which occurs more than 150 days each year. The ability of the shuttle radars to 'see' through the forest canopy to the cultivated cacao below -- independent of weather or sunlight conditions -- will allow researchers to distinguish forest from cabruca in unprecedented detail. This SIR-C/X-SAR image was produced by assigning red to the L-band, green to the C-band and blue to the X-band. The Una Reserve is located in the middle of the image west of the coastline and slightly northwest of Comandatuba River. The reserve's primary forests are easily detected by the pink areas in the image. The intensity of red in these areas is due to the high density of forest vegetation (biomass) detected by the radar's L-band (horizontally transmitted and vertically received) channel. Secondary forest is visible along the reserve's eastern border. The Serra do Mar mountain range is located in the top left portion of the image. Cabruca forest to the west of Una Reserve has a different texture and a yellow color. The removal of understory in cabruca forest reduces its biomass relative to primary forest, which changes the L-band and C-band penetration depth and returns, and produces a different texture and color in the image. The region along the Atlantic is mainly mangrove swamp, agricultural fields and urban areas. The high intensity of blue in this region is a result of increasing X-band return in areas covered with swamp and low vegetation. 
The image clearly separates the mangrove region (east of coastal Highway 001, shown in blue) from the taller and dryer forest west of the highway. The high resolution capability of SIR-C/X-SAR imaging and the sensitivity of its frequency and polarization channels to various land covers will be used for monitoring and mapping areas of importance for conservation. Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar(SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI) with the Deutsche Forschungsanstalt fuer luft und Raumfahrt e.V.(DLR), the major partner in science, operations and data processing of X-SAR.

  20. Ability of Magnetic Resonance Elastography to Assess Taut Bands

    PubMed Central

    Chen, Qingshan; Basford, Jeffery; An, Kai-Nan

    2008-01-01

    Background Myofascial taut bands are central to diagnosis of myofascial pain. Despite their importance, we still lack either a laboratory test or imaging technique capable of objectively confirming either their nature or location. This study explores the ability of magnetic resonance elastography to localize and investigate the mechanical properties of myofascial taut bands on the basis of their effects on shear wave propagation. Methods This study was conducted in three phases. The first involved the imaging of taut bands in gel phantoms, the second a finite element modeling of the phantom experiment, and the third a preliminary evaluation involving eight human subjects, four of whom had, and four of whom did not have, myofascial pain. Experiments were performed with a 1.5 Tesla magnetic resonance imaging scanner. Shear wave propagation was imaged and shear stiffness was reconstructed using matched filtering stiffness inversion algorithms. Findings The gel phantom imaging and finite element calculation experiments supported our hypothesis that taut bands can be imaged on the basis of their elevated shear stiffness. The preliminary human study showed a statistically significant 50-100% (p=0.01) increase in shear stiffness in the taut band regions of the involved subjects relative to that of the controls or in nearby uninvolved muscle. Interpretation This study suggests that magnetic resonance elastography may have a potential for objectively characterizing myofascial taut bands that have been up to now detectable only by the clinician's fingers. PMID:18206282

  1. The use of Sentinel-2 imagery for seagrass mapping: Kalloni Gulf (Lesvos Island, Greece) case study

    NASA Astrophysics Data System (ADS)

    Topouzelis, Konstantinos; Charalampis Spondylidis, Spyridon; Papakonstantinou, Apostolos; Soulakellis, Nikolaos

    2016-08-01

    Seagrass meadows play a significant role in ecosystems by stabilizing sediment and improving water clarity, which in turn enhances seagrass growing conditions. Mapping and protecting them is high on the priority list of EU legislation. The traditional use of medium-spatial-resolution satellite imagery, e.g. Landsat-8 (30 m), is very useful for mapping seagrass meadows on a regional scale. However, the availability of Sentinel-2 data, ESA's recent satellite carrying the Multi-Spectral Instrument (MSI), is expected to improve the mapping accuracy. MSI is designed to improve coastal studies thanks to its enhanced spatial and spectral capabilities, e.g. optical bands with 10 m spatial resolution. The present work examines the quality of Sentinel-2 images for seagrass mapping, the ability of each band to detect and discriminate different habitats, and the achievable accuracy of seagrass mapping. After pre-processing steps, e.g. radiometric calibration and atmospheric correction, the image was classified into four classes. The classification classes describe sub-bottom composition, e.g. seagrass, soft bottom, and hard bottom. Vectors delineating the areas covered by seagrass were extracted from a high-resolution satellite image and used as in situ measurements. The developed methodology was applied in the Gulf of Kalloni (Lesvos Island, Greece). Results showed that Sentinel-2 images can be used robustly for seagrass mapping owing to their spatial resolution, band availability and radiometric accuracy.

  2. Active multispectral imaging system for photodiagnosis and personalized phototherapies

    NASA Astrophysics Data System (ADS)

    Ugarte, M. F.; Chávarri, L.; Briz, S.; Padrón, V. M.; García-Cuesta, E.

    2014-10-01

    The proposed system has been designed to identify dermatopathologies or to apply personalized phototherapy treatments. The system emits electromagnetic waves in different spectral bands in the range of visible and near infrared to irradiate the target (skin or any other object) to be spectrally characterized. Then, an imaging sensor measures the target response to the stimulus at each spectral band and, after processing, the system displays in real time two images. In one of them the value of each pixel corresponds to the more reflected wavenumber whereas in the other image the pixel value represents the energy absorbed at each band. The diagnosis capability of this system lies in its multispectral design, and the phototherapy treatments are adapted to the patient and his lesion by measuring his absorption capability. This "in situ" absorption measurement allows us to determine the more appropriate duration of the treatment according to the wavelength and recommended dose. The main advantages of this system are its low cost, it does not have moving parts or complex mechanisms, it works in real time, and it is easy to handle. For these reasons its widespread use in dermatologist consultation would facilitate the work of the dermatologist and would improve the efficiency of diagnosis and treatment. In fact the prototype has already been successfully applied to pathologies such as carcinomas, melanomas, keratosis, and nevi.

  3. Active multispectral imaging system for photodiagnosis and personalized phototherapies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ugarte, M. F., E-mail: marta.ugarte@uem.es, E-mail: sbriz@fis.uc3m.es; Chávarri, L.; Padrón, V. M.

    2014-10-15

    The proposed system has been designed to identify dermatopathologies or to apply personalized phototherapy treatments. The system emits electromagnetic waves in different spectral bands in the range of visible and near infrared to irradiate the target (skin or any other object) to be spectrally characterized. Then, an imaging sensor measures the target response to the stimulus at each spectral band and, after processing, the system displays in real time two images. In one of them the value of each pixel corresponds to the more reflected wavenumber whereas in the other image the pixel value represents the energy absorbed at each band. The diagnosis capability of this system lies in its multispectral design, and the phototherapy treatments are adapted to the patient and his lesion by measuring his absorption capability. This "in situ" absorption measurement allows us to determine the more appropriate duration of the treatment according to the wavelength and recommended dose. The main advantages of this system are its low cost, it does not have moving parts or complex mechanisms, it works in real time, and it is easy to handle. For these reasons its widespread use in dermatologist consultation would facilitate the work of the dermatologist and would improve the efficiency of diagnosis and treatment. In fact the prototype has already been successfully applied to pathologies such as carcinomas, melanomas, keratosis, and nevi.

  4. Band registration of tuneable frame format hyperspectral UAV imagers in complex scenes

    NASA Astrophysics Data System (ADS)

    Honkavaara, Eija; Rosnell, Tomi; Oliveira, Raquel; Tommaselli, Antonio

    2017-12-01

    A recent revolution in miniaturised sensor technology has provided markets with novel hyperspectral imagers operating on the frame format principle. In the case of unmanned aerial vehicle (UAV) based remote sensing, the frame format technology is highly attractive in comparison to the commonly utilised pushbroom scanning technology, because it offers better stability and the possibility to capture stereoscopic data sets, bringing an opportunity for 3D hyperspectral object reconstruction. Tuneable filters are one of the approaches for capturing multi- or hyperspectral frame images. The individual bands are not aligned when operating a sensor based on tuneable filters from a mobile platform, such as a UAV, because the full spectrum recording is carried out on a time-sequential basis. The objective of this investigation was to study the aspects of band registration of an imager based on tuneable filters and to develop a rigorous and efficient approach for band registration in complex 3D scenes, such as forests. The method first determines the orientations of selected reference bands and reconstructs the 3D scene using structure-from-motion and dense image matching technologies. The bands, without orientation, are then matched to the oriented bands, accounting for the 3D scene, to provide exterior orientations, and afterwards, hyperspectral orthomosaics, or hyperspectral point clouds, are calculated. The uncertainty aspects of the novel approach were studied. An empirical assessment was carried out in a forested environment using hyperspectral images captured with a hyperspectral 2D frame format camera, based on a tuneable Fabry-Pérot interferometer (FPI) on board a multicopter and supported by a high spatial resolution consumer colour camera. A theoretical assessment showed that the method was capable of providing band registration accuracy better than 0.5-pixel size. The empirical assessment proved the performance and showed that, with the novel method, most parts of the band misalignments were less than the pixel size. Furthermore, it was shown that the performance of the band alignment was dependent on the spatial distance from the reference band.

  5. Cross Correlation versus Normalized Mutual Information on Image Registration

    NASA Technical Reports Server (NTRS)

    Tan, Bin; Tilton, James C.; Lin, Guoqing

    2016-01-01

    This is the first study to quantitatively assess and compare cross correlation and normalized mutual information methods used to register images at the subpixel scale. The study shows that the normalized mutual information method is less sensitive to unaligned edges caused by spectral response differences than is cross correlation. This characteristic makes normalized mutual information a better candidate for band-to-band registration. Improved band-to-band registration in the data from satellite-borne instruments will result in improved retrievals of key science measurements such as cloud properties, vegetation, snow and fire.
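
    For readers who want to reproduce the comparison, the two similarity measures can be computed as below: a histogram-based normalized mutual information (Studholme's (H(a)+H(b))/H(a,b) form) and a simple normalized cross-correlation. The bin count and the exact NMI variant used in the study are assumptions.

    ```python
    import numpy as np

    def normalized_mutual_information(a, b, bins=64):
        """NMI(a, b) = (H(a) + H(b)) / H(a, b) from a joint histogram."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))   # Shannon entropy
        return (h(px) + h(py)) / h(pxy)

    def normalized_cross_correlation(a, b):
        """Zero-mean, unit-variance correlation of two same-sized images."""
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return float(np.mean(a * b))
    ```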

  6. Automatic cloud coverage assessment of Formosat-2 image

    NASA Astrophysics Data System (ADS)

    Hsu, Kuo-Hsien

    2011-11-01

    The Formosat-2 satellite is equipped with a high-spatial-resolution (2 m ground sampling distance) remote sensing instrument. It has been operated on a daily-revisit mission orbit by the National Space Organization (NSPO) of Taiwan since May 21, 2004. NSPO also serves as one of the ground receiving stations, processing the received Formosat-2 images daily. The current cloud coverage assessment of Formosat-2 images in the NSPO Image Processing System generally consists of two major steps. First, an unsupervised K-means method is used to automatically estimate the cloud statistics of a Formosat-2 image. Second, the cloud coverage is estimated by manual examination of the image. Clearly, a more accurate Automatic Cloud Coverage Assessment (ACCA) method would increase the efficiency of the second step by providing a good prediction of the cloud statistics. In this paper, building mainly on the research results of Chang et al., Irish, and Gotoh, we propose a modified Formosat-2 ACCA method that includes pre-processing and post-processing analysis. In the pre-processing analysis, the cloud statistics are determined using unsupervised K-means classification, Sobel's method, Otsu's method, re-examination of non-cloudy pixels, and a cross-band filter method. A box-counting fractal method is used as a post-processing tool to double-check the results of the pre-processing analysis and increase the efficiency of the manual examination.
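
    The pre-processing chain names standard building blocks (K-means, Sobel, Otsu). As a hedged illustration of just the Otsu-plus-Sobel portion, the sketch below flags bright, spatially smooth pixels as candidate cloud; the edge threshold, the single-band input, and the omission of the K-means, cross-band, and fractal steps are simplifying assumptions.

    ```python
    import numpy as np
    from skimage.filters import threshold_otsu, sobel

    def rough_cloud_mask(band, edge_max=0.05):
        """First-pass cloud mask: Otsu's threshold keeps the bright class,
        a Sobel edge-magnitude cap drops bright but textured surfaces."""
        bright = band > threshold_otsu(band)
        smooth = sobel(band) < edge_max
        return bright & smooth

    def cloud_coverage_percent(mask):
        """Cloud statistic as a percentage of the scene."""
        return 100.0 * mask.mean()
    ```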

  7. Electrophysiological indices of visual food cue-reactivity. Differences in obese, overweight and normal weight women.

    PubMed

    Hume, David John; Howells, Fleur Margaret; Rauch, H G Laurie; Kroff, Jacolene; Lambert, Estelle Victoria

    2015-02-01

    Heightened food cue-reactivity in overweight and obese individuals has been related to aberrant functioning of neural circuitry implicated in motivational behaviours and reward-seeking. Here we explore the neurophysiology of visual food cue-reactivity in overweight and obese women, as compared with normal weight women, by assessing differences in cortical arousal and attentional processing elicited by food and neutral image inserts in a Stroop task with record of EEG spectral band power and ERP responses. Results show excess right frontal (F8) and left central (C3) relative beta band activity in overweight women during food task performance (indicative of pronounced early visual cue-reactivity) and blunted prefrontal (Fp1 and Fp2) theta band activity in obese women during office task performance (suggestive of executive dysfunction). Moreover, as compared to normal weight women, food images elicited greater right parietal (P4) ERP P200 amplitude in overweight women (denoting pronounced early attentional processing) and shorter right parietal (P4) ERP P300 latency in obese women (signifying enhanced and efficient maintained attentional processing). Differential measures of cortical arousal and attentional processing showed significant correlations with self-reported eating behaviour and body shape dissatisfaction, as well as with objectively assessed percent fat mass. The findings of the present study suggest that heightened food cue-reactivity can be neurophysiologically measured, that different neural circuits are implicated in the pathogenesis of overweight and obesity, and that EEG techniques may serve useful in the identification of endophenotypic markers associated with an increased risk of externally mediated food consumption. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Automated cloud and shadow detection and filling using two-date Landsat imagery in the United States

    USGS Publications Warehouse

    Jin, Suming; Homer, Collin G.; Yang, Limin; Xian, George; Fry, Joyce; Danielson, Patrick; Townsend, Philip A.

    2013-01-01

    A simple, efficient, and practical approach for detecting cloud and shadow areas in satellite imagery and restoring them with clean pixel values has been developed. Cloud and shadow areas are detected using spectral information from the blue, shortwave infrared, and thermal infrared bands of Landsat Thematic Mapper or Enhanced Thematic Mapper Plus imagery from two dates (a target image and a reference image). These detected cloud and shadow areas are further refined using an integration process and a false shadow removal process according to the geometric relationship between cloud and shadow. Cloud and shadow filling is based on the concept of the Spectral Similarity Group (SSG), which uses the reference image to find similar alternative pixels in the target image to serve as replacement values for restored areas. Pixels are considered to belong to one SSG if the pixel values from Landsat bands 3, 4, and 5 in the reference image are within the same spectral ranges. This new approach was applied to five Landsat path/rows across different landscapes and seasons with various types of cloud patterns. Results show that almost all of the clouds were captured with minimal commission errors, and shadows were detected reasonably well. Among five test scenes, the lowest producer's accuracy of cloud detection was 93.9% and the lowest user's accuracy was 89%. The overall cloud and shadow detection accuracy ranged from 83.6% to 99.3%. The pixel-filling approach resulted in a new cloud-free image that appears seamless and spatially continuous despite differences in phenology between the target and reference images. Our methods offer a straightforward and robust approach for preparing images for the new 2011 National Land Cover Database production.
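
    The Spectral Similarity Group (SSG) fill can be sketched as follows: quantize the reference-image values of the grouping bands, index the clear target pixels by their quantized key, and replace each cloud/shadow pixel with the mean of clear target pixels sharing its key. The bin width, the use of a mean rather than another statistic, and the band stack passed in are assumptions; the paper's own spectral-range definition is not reproduced.

    ```python
    import numpy as np
    from collections import defaultdict

    def ssg_fill(target, reference, bad_mask, bin_width=0.02):
        """target, reference: (bands, rows, cols) stacks of the grouping bands;
        bad_mask: (rows, cols) True where cloud/shadow was detected.
        Returns a copy of target with bad pixels replaced where a matching
        Spectral Similarity Group exists (otherwise left unchanged)."""
        keys = np.floor(reference / bin_width).astype(np.int32)
        keys = keys.reshape(keys.shape[0], -1).T            # (n_pixels, bands)
        flat_bad = bad_mask.ravel()
        filled = target.reshape(target.shape[0], -1).copy()
        groups = defaultdict(list)                          # key -> clear pixel indices
        for idx in np.flatnonzero(~flat_bad):
            groups[tuple(keys[idx])].append(idx)
        for idx in np.flatnonzero(flat_bad):
            members = groups.get(tuple(keys[idx]))
            if members:
                filled[:, idx] = filled[:, members].mean(axis=1)
        return filled.reshape(target.shape)
    ```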

  9. Investigation of TM Band-to-band Registration Using the JSC Registration Processor

    NASA Technical Reports Server (NTRS)

    Yao, S. S.; Amis, M. L.

    1984-01-01

    The JSC registration processor performs scene-to-scene (or band-to-band) correlation based on edge images. The edge images are derived from a percentage of the edge pixels calculated from the raw scene data, excluding clouds and other extraneous data in the scene. Correlations are performed on patches (blocks) of the edge images, and the correlation peak location in each patch is estimated iteratively to fractional pixel location accuracy. Peak offset locations from all patches over the scene are then considered together, and a variety of tests are made to weed out outliers and other inconsistencies before a distortion model is assumed. Thus, the correlation peak offset locations in each patch indicate quantitatively how well the two TM bands register to each other over that patch of scene data. The average of these offsets indicates the overall accuracy of the band-to-band registration. The registration processor was also used to register one acquisition to another acquisition of multitemporal TM data acquired over the same ground track. Band 4 images from both acquisitions were correlated and an RMS error of a fraction of a pixel was routinely obtained.
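
    The per-patch measurement described here, correlation of two edge-image patches with the peak refined to fractional-pixel accuracy, can be sketched with an FFT-based cross-correlation and a three-point parabolic fit around the peak. This is a generic stand-in, not the JSC processor's iterative estimator, and it omits the outlier screening and distortion modelling applied across patches.

    ```python
    import numpy as np

    def patch_offset(patch_a, patch_b):
        """Estimate the (row, col) shift between two same-sized edge-image
        patches via circular cross-correlation with a parabolic sub-pixel
        refinement. Assumes the correlation peak is not on the patch border."""
        a = patch_a - patch_a.mean()
        b = patch_b - patch_b.mean()
        corr = np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
        corr = np.fft.fftshift(corr)
        pr, pc = np.unravel_index(np.argmax(corr), corr.shape)

        def refine(c, i):                    # 3-point parabola through the peak
            return 0.5 * (c[i - 1] - c[i + 1]) / (c[i - 1] - 2 * c[i] + c[i + 1])

        dr = refine(corr[:, pc], pr)
        dc = refine(corr[pr, :], pc)
        cr, cc = corr.shape[0] // 2, corr.shape[1] // 2      # zero-shift location
        return pr + dr - cr, pc + dc - cc
    ```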

  10. VizieR Online Data Catalog: HD61005 SPHERE H and Ks images (Olofsson+, 2016)

    NASA Astrophysics Data System (ADS)

    Olofsson, J.; Samland, M.; Avenhaus, H.; Caceres, C.; Henning, T.; Moor, A.; Milli, J.; Canovas, H.; Quanz, S. P.; Schreiber, M. R.; Augereau, J.-C.; Bayo, A.; Bazzon, A.; Beuzit, J.-L.; Boccaletti, A.; Buenzli, E.; Casassus, S.; Chauvin, G.; Dominik, C.; Desidera, S.; Feldt, M.; Gratton, R.; Janson, M.; Lagrange, A.-M.; Langlois, M.; Lannier, J.; Maire, A.-L.; Mesa, D.; Pinte, C.; Rouan, D.; Salter, G.; Thalmann, C.; Vigan, A.

    2016-05-01

    The FITS files contain the reduced ADI and DPI SPHERE observations used to produce Fig. 1 of the paper. Besides the primary card, the files consist of six additional ImageHDUs. The first and second contain the SPHERE IRDIS ADI H-band observations and the noise map. The third and fourth contain the SPHERE IRDIS ADI Ks-band observations and the corresponding noise map. Finally, the fifth and sixth ImageHDUs contain the SPHERE IRDIS DPI H-band data as well as the noise map. Each ADI image has 1024x1024 pixels, while the DPI images have 1800x1800 pixels. The header of the primary card contains the pixel sizes for each dataset and the wavelengths of the H- and K-band observations. (2 data files).

  11. A Multispectral Image Creating Method for a New Airborne Four-Camera System with Different Bandpass Filters

    PubMed Central

    Li, Hanlun; Zhang, Aiwu; Hu, Shaoxing

    2015-01-01

    This paper describes an airborne high resolution four-camera multispectral system which mainly consists of four identical monochrome cameras equipped with four interchangeable bandpass filters. For this multispectral system, an automatic multispectral data composing method was proposed. The homography registration model was chosen, and the scale-invariant feature transform (SIFT) and random sample consensus (RANSAC) were used to generate matching points. For the difficult registration problem between visible band images and near-infrared band images in cases lacking manmade objects, we presented an effective method based on the structural characteristics of the system. Experiments show that our method can acquire high quality multispectral images and the band-to-band alignment error of the composed multiple spectral images is less than 2.5 pixels. PMID:26205264
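
    A minimal sketch of the matching chain named above (SIFT keypoints, Lowe's ratio test, RANSAC-fitted homography) using OpenCV is given below; the structural-characteristics trick used for the hard visible/near-infrared pairs is not reproduced, and the ratio and reprojection thresholds are illustrative.

    ```python
    import cv2
    import numpy as np

    def register_band(moving, reference, ratio=0.75):
        """Warp `moving` onto `reference` (both 8-bit grayscale) using a
        homography estimated from ratio-tested SIFT matches and RANSAC."""
        sift = cv2.SIFT_create()
        kp_m, des_m = sift.detectAndCompute(moving, None)
        kp_r, des_r = sift.detectAndCompute(reference, None)
        pairs = cv2.BFMatcher().knnMatch(des_m, des_r, k=2)
        good = [m for m, n in pairs if m.distance < ratio * n.distance]
        src = np.float32([kp_m[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp_r[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        return cv2.warpPerspective(moving, H, (reference.shape[1], reference.shape[0]))
    ```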

  12. Differential conductance (dI/dV) imaging of a heterojunction-nanorod

    NASA Astrophysics Data System (ADS)

    Kundu, Biswajit; Bera, Abhijit; Pal, Amlan J.

    2017-03-01

    Through scanning tunneling spectroscopy, we envisage imaging a heterostructure, namely a junction formed in a single nanorod. While the differential conductance spectrum provides location of conduction and valence band edges, dI/dV images record energy levels of materials. Such dI/dV images at different voltages allowed us to view p- and n-sections of heterojunction nanorods and more importantly the depletion region in such a junction that has a type-II band alignment. Viewing of selective sections in a heterojunction occurred due to band-bending in the junction and is correlated to the density of states spectrum of the individual semiconductors. The dI/dV images recorded at different voltages could be used to generate a band diagram of a pn junction.

  13. Markerless positional verification using template matching and triangulation of kV images acquired during irradiation for lung tumors treated in breath-hold

    NASA Astrophysics Data System (ADS)

    Hazelaar, Colien; Dahele, Max; Mostafavi, Hassan; van der Weide, Lineke; Slotman, Ben; Verbakel, Wilko

    2018-06-01

    Lung tumors treated in breath-hold are subject to inter- and intra-breath-hold variations, which makes tumor position monitoring during each breath-hold important. A markerless technique is desirable, but limited tumor visibility on kV images makes this challenging. We evaluated if template matching + triangulation of kV projection images acquired during breath-hold stereotactic treatments could determine 3D tumor position. Band-pass filtering and/or digital tomosynthesis (DTS) were used as image pre-filtering/enhancement techniques. On-board kV images continuously acquired during volumetric modulated arc irradiation of (i) a 3D-printed anthropomorphic thorax phantom with three lung tumors (n = 6 stationary datasets, n = 2 gradually moving), and (ii) four patients (13 datasets) were analyzed. 2D reference templates (filtered DRRs) were created from planning CT data. Normalized cross-correlation was used for 2D matching between templates and pre-filtered/enhanced kV images. For 3D verification, each registration was triangulated with multiple previous registrations. Generally applicable image processing/algorithm settings for lung tumors in breath-hold were identified. For the stationary phantom, the interquartile range of the 3D position vector was on average 0.25 mm for 12° DTS + band-pass filtering (average detected positions in 2D = 99.7%, 3D = 96.1%, and 3D excluding first 12° due to triangulation angle = 99.9%) compared to 0.81 mm for band-pass filtering only (55.8/52.9/55.0%). For the moving phantom, RMS errors for the lateral/longitudinal/vertical direction after 12° DTS + band-pass filtering were 1.5/0.4/1.1 mm and 2.2/0.3/3.2 mm. For the clinical data, 2D position was determined for at least 93% of each dataset and 3D position excluding first 12° for at least 82% of each dataset using 12° DTS + band-pass filtering. Template matching + triangulation using DTS + band-pass filtered images could accurately determine the position of stationary lung tumors. However, triangulation was less accurate/reliable for targets with continuous, gradual displacement in the lateral and vertical directions. This technique is therefore currently most suited to detect/monitor offsets occurring between initial setup and the start of treatment, inter-breath-hold variations, and tumors with predominantly longitudinal motion.
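
    The 2D matching step alone (normalized cross-correlation of a filtered template against a pre-filtered kV frame) can be sketched with scikit-image; the DRR/DTS preparation, band-pass filtering, and triangulation to 3D are omitted, and the function is a generic stand-in rather than the authors' implementation.

    ```python
    import numpy as np
    from skimage.feature import match_template

    def locate_template(kv_frame, template):
        """Return the (row, col) of the top-left corner of the best normalized
        cross-correlation match of `template` within `kv_frame`, plus the score."""
        ncc = match_template(kv_frame, template)     # valid-mode NCC surface
        peak = np.unravel_index(np.argmax(ncc), ncc.shape)
        return peak, float(ncc[peak])
    ```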

  14. Space Radar Image of Moscow, Russia

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a vertically polarized L-band image of the southern half of Moscow, an area which has been inhabited for 2,000 years. The image covers a diameter of approximately 50 kilometers (31 miles) and was taken on September 30, 1994 by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar aboard the space shuttle Endeavour. The city of Moscow was founded about 750 years ago and today is home to about 8 million residents. The southern half of the circular highway (a road that looks like a ring) can easily be identified as well as the roads and railways radiating out from the center of the city. The city was named after the Moskwa River and replaced Russia's former capital, St. Petersburg, after the Russian Revolution in 1917. The river winding through Moscow shows up in various gray shades. The circular structure of many city roads can easily be identified, although subway connections covering several hundred kilometers are not visible in this image. The white areas within the ring road and outside of it are buildings of the city itself and its suburban towns. Two of many airports are located in the west and southeast of Moscow, near the corners of the image. The Kremlin is located to the north, just outside of the imaged city center. It was actually built in the 16th century, when Ivan III was czar, and is famous for its various churches. In the surrounding area, light gray indicates forests, while the dark patches are agricultural areas. The various shades from middle gray to dark gray indicate different stages of harvesting, ploughing and grassland. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V.(DLR), the major partner in science, operations and data processing of X-SAR.

  15. A thermophone on porous polymeric substrate

    NASA Astrophysics Data System (ADS)

    Chitnis, G.; Kim, A.; Song, S. H.; Jessop, A. M.; Bolton, J. S.; Ziaie, B.

    2012-07-01

    In this Letter, we present a simple, low-temperature method for fabricating a wide-band (>80 kHz) thermo-acoustic sound generator on a porous polymeric substrate. We were able to achieve up to 80 dB of sound pressure level with an input power of 0.511 W. No significant surface temperature increase was observed in the device even at an input power level of 2.5 W. Wide-band ultrasonic performance, simplicity of structure, and scalability of the fabrication process make this device suitable for many ranging and imaging applications.

  16. The Properties of Outer Retinal Band Three Investigated With Adaptive-Optics Optical Coherence Tomography.

    PubMed

    Jonnal, Ravi S; Gorczynska, Iwona; Migacz, Justin V; Azimipour, Mehdi; Zawadzki, Robert J; Werner, John S

    2017-09-01

    Optical coherence tomography's (OCT) third outer retinal band has been attributed to the zone of interdigitation between RPE cells and cone outer segments. The purpose of this paper is to investigate the structure of this band with adaptive optics (AO)-OCT. Using AO-OCT, images were obtained from two subjects. Axial structure was characterized by measuring band 3 thickness and separation between bands 2 and 3 in segmented cones. Lateral structure was characterized by correlation of band 3 with band 2 and comparison of their power spectra. Band thickness and separation were also measured in a clinical OCT image of one subject. Band 3 thickness ranged from 4.3 to 6.4 μm. Band 2 correlations ranged between 0.35 and 0.41 and power spectra of both bands confirmed peak frequencies that agree with histologic density measurements. In clinical images, band 3 thickness was between 14 and 19 μm. Measurements of AO-OCT of interband distance were lower than our corresponding clinical OCT measurements. Band 3 originates from a structure with axial extent similar to a single surface. Correlation with band 2 suggests an origin within the cone photoreceptor. These two observations indicate that band 3 corresponds predominantly to cone outer segment tips (COST). Conventional OCT may overestimate both the thickness of band 3 and outer segment length.

  17. The Properties of Outer Retinal Band Three Investigated With Adaptive-Optics Optical Coherence Tomography

    PubMed Central

    Jonnal, Ravi S.; Gorczynska, Iwona; Migacz, Justin V.; Azimipour, Mehdi; Zawadzki, Robert J.; Werner, John S.

    2017-01-01

    Purpose Optical coherence tomography's (OCT) third outer retinal band has been attributed to the zone of interdigitation between RPE cells and cone outer segments. The purpose of this paper is to investigate the structure of this band with adaptive optics (AO)-OCT. Methods Using AO-OCT, images were obtained from two subjects. Axial structure was characterized by measuring band 3 thickness and separation between bands 2 and 3 in segmented cones. Lateral structure was characterized by correlation of band 3 with band 2 and comparison of their power spectra. Band thickness and separation were also measured in a clinical OCT image of one subject. Results Band 3 thickness ranged from 4.3 to 6.4 μm. Band 2 correlations ranged between 0.35 and 0.41 and power spectra of both bands confirmed peak frequencies that agree with histologic density measurements. In clinical images, band 3 thickness was between 14 and 19 μm. Measurements of AO-OCT of interband distance were lower than our corresponding clinical OCT measurements. Conclusions Band 3 originates from a structure with axial extent similar to a single surface. Correlation with band 2 suggests an origin within the cone photoreceptor. These two observations indicate that band 3 corresponds predominantly to cone outer segment tips (COST). Conventional OCT may overestimate both the thickness of band 3 and outer segment length. PMID:28877320

  18. Method of improving a digital image

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J. (Inventor); Woodell, Glenn A. (Inventor); Rahman, Zia-ur (Inventor)

    1999-01-01

    A method of improving a digital image is provided. The image is initially represented by digital data indexed to represent positions on a display. The digital data is indicative of an intensity value $I_i(x,y)$ for each position $(x,y)$ in each i-th spectral band. The intensity value for each position in each i-th spectral band is adjusted to generate an adjusted intensity value for each position in each i-th spectral band in accordance with $R_i(x,y) = \sum_{n=1}^{N} W_n \left\{ \log I_i(x,y) - \log\left[ I_i(x,y) * F_n(x,y) \right] \right\},\ i = 1,\dots,S$, where S is the number of unique spectral bands included in said digital data, $W_n$ is a weighting factor and * denotes the convolution operator. Each surround function $F_n(x,y)$ is uniquely scaled to improve an aspect of the digital image, e.g., dynamic range compression, color constancy, and lightness rendition. The adjusted intensity value for each position in each i-th spectral band is filtered with a common function and then presented to a display device. For color images, a novel color restoration step is added to give the image true-to-life color that closely matches human observation.
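
    The surround-based adjustment in this record can be sketched compactly: each band is compared, in log space, with Gaussian surround functions at several scales and the weighted differences are summed (a multi-scale retinex-style operation). The scales, weights, and test image below are illustrative assumptions, not the patented parameter choices.

```python
# Sketch of a multi-scale surround-based band adjustment:
# R_i = sum_n W_n * (log I_i - log(F_n * I_i)) for each spectral band i.
# Scales and weights are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def adjust_band(I, scales=(15, 80, 250), weights=None, eps=1e-6):
    """Adjust one spectral band using Gaussian surround functions F_n."""
    weights = weights or [1.0 / len(scales)] * len(scales)
    I = I.astype(float) + eps
    R = np.zeros_like(I)
    for w, s in zip(weights, scales):
        surround = gaussian_filter(I, sigma=s) + eps   # F_n * I (convolution)
        R += w * (np.log(I) - np.log(surround))
    return R

# Usage on a random 3-band image.
rng = np.random.default_rng(1)
img = rng.uniform(1, 255, (128, 128, 3))
adjusted = np.stack([adjust_band(img[..., i]) for i in range(3)], axis=-1)
print(adjusted.shape, round(float(adjusted.min()), 3), round(float(adjusted.max()), 3))
```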

  19. Simultaneous dual-band radar development

    NASA Technical Reports Server (NTRS)

    Liskow, C. L.

    1974-01-01

    Efforts to design and construct an airborne imaging radar operating simultaneously at L band and X band with an all-inertial navigation system in order to form a dual-band radar system are described. The areas of development include duplex transmitters, receivers, and recorders, a control module, motion compensation for both bands, and adaptation of a commercial inertial navigation system. Installation of the system in the aircraft and flight tests are described. Circuit diagrams, performance figures, and some radar images are presented.

  20. Techniques for Producing Coastal Land Water Masks from Landsat and Other Multispectral Satellite Data

    NASA Technical Reports Server (NTRS)

    Spruce, Joseph P.; Hall, Callie

    2005-01-01

    Coastal erosion and land loss continue to threaten many areas in the United States. Landsat data has been used to monitor regional coastal change since the 1970s. Many techniques can be used to produce coastal land water masks, including image classification and density slicing of individual bands or of band ratios. Band ratios used in land water detection include several variations of the Normalized Difference Water Index (NDWI). This poster discusses a study that compares land water masks computed from unsupervised Landsat image classification with masks from density-sliced band ratios and from the Landsat TM band 5. The greater New Orleans area is employed in this study, due to its abundance of coastal habitats and its vulnerability to coastal land loss. Image classification produced the best results based on visual comparison to higher resolution satellite and aerial image displays. However, density sliced NDWI imagery from either near infrared (NIR) and blue bands or from NIR and green bands also produced more effective land water masks than imagery from the density-sliced Landsat TM band 5. NDWI based on NIR and green bands is noteworthy because it allows land water masks to be generated from multispectral satellite sensors without a blue band (e.g., ASTER and Landsat MSS). NDWI techniques also have potential for producing land water masks from coarser scaled satellite data, such as MODIS.
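
    As a concrete illustration of the density-sliced band-ratio approach, the sketch below computes an NDWI from green and NIR bands and thresholds it into a land/water mask. The threshold value and the tiny synthetic scene are assumptions for illustration only, not values from the study.

```python
# Sketch: NDWI land/water mask from green and NIR bands via density slicing.
import numpy as np

def ndwi(green, nir, eps=1e-9):
    """Normalized Difference Water Index: (green - NIR) / (green + NIR)."""
    green, nir = green.astype(float), nir.astype(float)
    return (green - nir) / (green + nir + eps)

def water_mask(green, nir, threshold=0.0):
    """Density slice the NDWI image: True where a pixel is classed as water."""
    return ndwi(green, nir) > threshold

# Toy scene: left half "water" (low NIR), right half "land" (high NIR).
green = np.full((4, 8), 0.20)
nir = np.concatenate([np.full((4, 4), 0.05), np.full((4, 4), 0.40)], axis=1)
print(water_mask(green, nir).astype(int))
```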

  1. Techniques for Producing Coastal Land Water Masks from Landsat and Other Multispectral Satellite Data

    NASA Technical Reports Server (NTRS)

    Spruce, Joe; Hall, Callie

    2005-01-01

    Coastal erosion and land loss continue to threaten many areas in the United States. Landsat data has been used to monitor regional coastal change since the 1970s. Many techniques can be used to produce coastal land water masks, including image classification and density slicing of individual bands or of band ratios. Band ratios used in land water detection include several variations of the Normalized Difference Water Index (NDWI). This poster discusses a study that compares land water masks computed from unsupervised Landsat image classification with masks from density-sliced band ratios and from the Landsat TM band 5. The greater New Orleans area is employed in this study, due to its abundance of coastal habitats and its vulnerability to coastal land loss. Image classification produced the best results based on visual comparison to higher resolution satellite and aerial image displays. However, density-sliced NDWI imagery from either near infrared (NIR) and blue bands or from NIR and green bands also produced more effective land water masks than imagery from the density-sliced Landsat TM band 5. NDWI based on NIR and green bands is noteworthy because it allows land water masks to be generated from multispectral satellite sensors without a blue band (e.g., ASTER and Landsat MSS). NDWI techniques also have potential for producing land water masks from coarser scaled satellite data, such as MODIS.

  2. Thin-film optical pass band filters based on new photo-lithographic process for CaSSIS FPA detector on Exomars TGO mission: development, integration, and test

    NASA Astrophysics Data System (ADS)

    Gambicorti, L.; Piazza, D.; Gerber, M.; Pommerol, A.; Roloff, V.; Ziethe, R.; Zimmermann, C.; Da Deppo, V.; Cremonese, G.; Ficai Veltroni, I.; Marinai, M.; Di Carmine, E.; Bauer, T.; Moebius, P.; Thomas, N.

    2016-08-01

    A new technique based on photolithographic processing of thin-film optical pass-band coatings on a monolithic substrate has been applied to the filters of the Focal Plane Assembly (FPA) of the Colour and Stereo Surface Imaging System (CaSSIS) that will fly onboard the ExoMars Trace Gas Orbiter to be launched in March 2016 by ESA. The FPA is one of the spare components of the Simbio-Sys instrument of the Italian Space Agency (ASI) that will fly on ESA's BepiColombo mission to Mercury. The detector, developed by Raytheon Vision Systems, is a 2k x 2k hybrid Si-PIN array with a 10 μm pixel. The detector is housed within a block and has filters deposited directly on the entrance window. The window is a 1 mm thick monolithic plate of fused silica. The Filter Strip Assembly (FSA) is produced by Optics Balzers Jena GmbH and integrated on the focal plane by Leonardo-Finmeccanica SpA (under TAS-I responsibility). It is based on dielectric multilayer interference coatings, with 4 colour bands selected to have average in-band transmission greater than 95 percent within the 400-1100 nm wavelength range, giving multispectral images on the same detector and thus allowing CaSSIS to operate in push-frame mode. The Field of View (FOV) of each colour band on the detector is surrounded by a mask of low reflective chromium (LRC), which also provides the required straylight suppression (an out-of-band transmission of less than 10^-5/nm). The mask has been shown to deal effectively with cross-talk from multiple reflections between the detector surface and the filter. This paper presents the manufacturing and optical properties of the FSA filters and the preliminary on-ground calibration results of the FPA.

  3. Use of the SAR (Synthetic Aperture Radar) P band for detection of the Moche and Lambayeque canal networks in the Apurlec region, Perù

    NASA Astrophysics Data System (ADS)

    Ilaria Pannaccione Apa, Maria; Santovito, Maria Rosaria; Pica, Giulia; Catapano, Ilaria; Fornaro, Gianfranco; Lanari, Riccardo; Soldovieri, Francesco; Wester La Torre, Carlos; Fernandez Manayalle, Marco Antonio; Longo, Francesco; Facchinetti, Claudia; Formaro, Roberto

    2016-04-01

    In recent years, research attention has been devoted to the development of a new class of airborne radar systems using low frequency bands ranging from VHF/UHF to P and L. In this framework, the Italian Space Agency (ASI) has promoted the development of a new multi-mode and multi-band airborne radar system, which can also be considered a "proof of concept" for future spaceborne missions. In particular, in agreement with ASI, the research consortium CO.RI.S.T.A. is in charge of the design, development and flight validation of such a system, which is the first airborne radar entirely built in Italy. The aim was to design and realize a radar system able to work in different modalities: as a nadir-looking sounder at VHF band (163 MHz) and as a side-looking imager (SAR) at P band with two channels at 450 MHz and 900 MHz. The P-band channel is a penetrating radar. By exploiting the penetration of low frequency electromagnetic waves, dielectric discontinuities in the observed scene due to inhomogeneous materials become apparent and can be detected in the resulting image. Therefore buried objects or targets placed under vegetation may be detected. Penetration capability essentially depends on microwave frequency; typically, penetration distance is inversely proportional to frequency: the higher the frequency, the lower the penetration depth. Terrain characteristics also affect penetration. Moisture acts as a shield against microwave penetration, hence terrains with high water content are not good targets for P-band applications. The science community, governments and space agencies have shown increasing interest in low frequency radar for its applicability in climatology, ecosystem monitoring, glaciology and archaeology. The combination of low frequency and high relative bandwidth in such systems supports both military and civilian applications, ranging from forestry, biomass measurement, archaeological and geological exploration and glacier investigation to the detection of buried targets. Its extension to non-civil applications concerns sub-surface target detection and foliage penetration (FOPEN). In order to achieve the flexibility to address all the above mentioned fields of application, the CORISTA system has been designed as a multi-mode and multi-frequency radar. Multi-mode refers to the functionality of the system both as a sounder and as an imager. In addition, the P-band radar is a multi-frequency instrument, since it is designed to work in three different frequency bands, as mentioned above: the lower frequency band is used in sounder operative mode, the higher frequencies in imager operative mode. In the imager operative mode, both low resolution and high resolution capabilities are implemented. The data collected by the radar system have been processed using a model-based microwave tomographic approach, recently developed by IREA-CNR, with the aim of enhancing the interpretability of the raw-data radar images. Currently, the non-invasive P-band SAR application is under evaluation for testing on the northern coast of Perù, in collaboration with the Museo Arqueológico Nacional Brüning. The project aims to recognize the subsurface ancient Moche (AD 100-700) and Lambayeque (AD 700-1375) canal networks, whose water supply comes from the Canal Taymi, which the Moche began digging and which is still in use by local communities.

  4. Analysis of airborne imaging spectrometer data for the Ruby Mountains, Montana, by use of absorption-band-depth images

    NASA Technical Reports Server (NTRS)

    Brickey, David W.; Crowley, James K.; Rowan, Lawrence C.

    1987-01-01

    Airborne Imaging Spectrometer-1 (AIS-1) data were obtained for an area of amphibolite grade metamorphic rocks that have moderate rangeland vegetation cover. Although rock exposures are sparse and patchy at this site, soils are visible through the vegetation and typically comprise 20 to 30 percent of the surface area. Channel-averaged band-depth images were produced for diagnostic soil and rock absorption bands. Sets of three such images were combined to produce color composite band-depth images. This relatively simple approach did not require extensive calibration efforts and was effective for discerning a number of spectrally distinctive rocks and soils, including soils having high talc concentrations. The results show that the high spectral and spatial resolution of AIS-1 and future sensors holds considerable promise for mapping mineral variations in soil, even in moderately vegetated areas.
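
    A band-depth image of the kind used in this record can be sketched as follows: the continuum is interpolated linearly between two shoulder channels and the depth is one minus the ratio of the on-feature reflectance to that continuum. The wavelengths and the synthetic cube are illustrative assumptions, not the AIS-1 channels actually used.

```python
# Sketch: absorption-band-depth image relative to a linear continuum.
import numpy as np

def band_depth(cube, wavelengths, left_wl, center_wl, right_wl):
    """Depth = 1 - R_center / R_continuum, continuum interpolated between shoulders."""
    wl = np.asarray(wavelengths, dtype=float)
    nearest = lambda w: int(np.argmin(np.abs(wl - w)))
    l, c, r = nearest(left_wl), nearest(center_wl), nearest(right_wl)
    frac = (wl[c] - wl[l]) / (wl[r] - wl[l])
    continuum = cube[..., l] * (1.0 - frac) + cube[..., r] * frac
    return 1.0 - cube[..., c] / np.maximum(continuum, 1e-9)

# Toy cube: 2x2 pixels, 5 channels, one pixel with a deep 2.31 um feature.
wl = [2.20, 2.25, 2.31, 2.36, 2.40]
cube = np.full((2, 2, 5), 0.5)
cube[0, 0, 2] = 0.35          # "talc-like" pixel with a strong absorption band
print(band_depth(cube, wl, 2.25, 2.31, 2.36))
```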

  5. Blind Bayesian restoration of adaptive optics telescope images using generalized Gaussian Markov random field models

    NASA Astrophysics Data System (ADS)

    Jeffs, Brian D.; Christou, Julian C.

    1998-09-01

    This paper addresses post-processing for resolution enhancement of sequences of short-exposure adaptive optics (AO) images of space objects. The unknown residual blur is removed using Bayesian maximum a posteriori blind image restoration techniques. In the problem formulation, both the true image and the unknown blur psf's are represented by the flexible generalized Gaussian Markov random field (GGMRF) model. The GGMRF probability density function provides a natural mechanism for expressing available prior information about the image and blur. Incorporating such prior knowledge in the deconvolution optimization is crucial for the success of blind restoration algorithms. For example, space objects often contain sharp edge boundaries and geometric structures, while the residual blur psf in the corresponding partially corrected AO image is spectrally band limited and exhibits smoothed, random, texture-like features on a peaked central core. By properly choosing parameters, GGMRF models can accurately represent both the blur psf and the object, and serve to regularize the deconvolution problem. These two GGMRF models also serve as discriminator functions to separate blur and object in the solution. Algorithm performance is demonstrated with examples from synthetic AO images. Results indicate significant resolution enhancement when applied to partially corrected AO images. An efficient computational algorithm is described.

  6. A simple method for the detection of PM2.5 air pollutions using MODIS data

    NASA Astrophysics Data System (ADS)

    Kato, Yoshinobu

    2016-05-01

    In recent years, PM2.5 air pollution has become a social and transboundary environmental issue accompanying rapid economic growth in many countries. As PM2.5 particles are small and include various components, the detection of PM2.5 air pollution using satellite data is difficult compared with the detection of dust and sandstorms. In this paper, we examine various images (i.e., single-band images, band-difference images, RGB composite color images) to find a good method for detecting PM2.5 air pollution using MODIS data. A good method for the detection of PM2.5 air pollution is {R, G, B = band10, band9, T11}, where T11 is the brightness temperature of band31. In this composite color image, PM2.5 air pollution is represented by a light purple or pink color. This proposed method is simpler than the method by Nagatani et al. (2013), and is useful for grasping the distribution of PM2.5 air pollution over a wide area (e.g., from China and India to Japan). By comparing the AVI image with the image produced by the proposed method, dust and sandstorms (DSS) and PM2.5 air pollution can be distinguished.
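
    The composite described above can be sketched directly: band 10 and band 9 reflectances feed the red and green channels and the band-31 brightness temperature (T11) feeds blue, each stretched to the 0-1 range before stacking. The stretch limits and the random inputs are assumptions for illustration.

```python
# Sketch of the {R, G, B} = {band10, band9, T11} composite.
import numpy as np

def stretch(x, lo, hi):
    """Linearly rescale x from [lo, hi] to [0, 1], clipping values outside."""
    return np.clip((x.astype(float) - lo) / (hi - lo), 0.0, 1.0)

def pm25_composite(band10, band9, t11):
    """Return an (H, W, 3) RGB composite image."""
    r = stretch(band10, 0.0, 1.0)    # reflectance
    g = stretch(band9, 0.0, 1.0)     # reflectance
    b = stretch(t11, 260.0, 310.0)   # brightness temperature (K)
    return np.dstack([r, g, b])

rng = np.random.default_rng(2)
rgb = pm25_composite(rng.uniform(0, 1, (64, 64)),
                     rng.uniform(0, 1, (64, 64)),
                     rng.uniform(260, 310, (64, 64)))
print(rgb.shape, rgb.dtype)
```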

  7. DEEP U BAND AND R IMAGING OF GOODS-SOUTH: OBSERVATIONS, DATA REDUCTION AND FIRST RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nonino, M.; Cristiani, S.; Vanzella, E.

    2009-08-01

    We present deep imaging in the U band covering an area of 630 arcmin² centered on the southern field of the Great Observatories Origins Deep Survey (GOODS). The data were obtained with the VIMOS instrument at the European Southern Observatory (ESO) Very Large Telescope. The final images reach a magnitude limit U_lim ≈ 29.8 (AB, 1σ, in a 1'' radius aperture), and have good image quality, with full width at half-maximum ≈0.8''. They are significantly deeper than previous U-band images available for the GOODS fields, and better match the sensitivity of other multiwavelength GOODS photometry. The deeper U-band data yield significantly improved photometric redshifts, especially in key redshift ranges such as 2 < z < 4, and deeper color-selected galaxy samples, e.g., Lyman break galaxies at z ≈ 3. We also present the co-addition of archival ESO VIMOS R-band data, with R_lim ≈ 29 (AB, 1σ, 1'' radius aperture), and image quality ≈0.75''. We discuss the strategies for the observations and data reduction, and present the first results from the analysis of the co-added images.

  8. Hyperspectral interventional imaging for enhanced tissue visualization and discrimination combining band selection methods.

    PubMed

    Nouri, Dorra; Lucas, Yves; Treuillet, Sylvie

    2016-12-01

    Hyperspectral imaging is an emerging technology recently introduced in medical applications inasmuch as it provides a powerful tool for noninvasive tissue characterization. In this context, a new system was designed to be easily integrated in the operating room in order to detect anatomical tissues hardly noticeable by the surgeon's naked eye. Our LCTF-based spectral imaging system operates over the visible, near- and middle-infrared spectral ranges (400-1700 nm). It is dedicated to enhancing critical biological tissues such as the ureter and the facial nerve. We aim to find the three most relevant bands to create an RGB image to display during the intervention with maximal contrast between the target tissue and its surroundings. A comparative study is carried out between band selection methods and band transformation methods. Combined band selection methods are proposed. All methods are compared using different evaluation criteria. Experimental results show that the proposed combined band selection methods provide the best performance, with rich information, high tissue separability, and short computational time. These methods yield a significant discrimination between biological tissues. We developed a hyperspectral imaging system in order to enhance the visualization of certain biological tissues. The proposed methods provided an acceptable trade-off between the evaluation criteria, especially in the SWIR spectral band, which outperforms the naked eye's capacities.

  9. A dual-band adaptor for infrared imaging.

    PubMed

    McLean, A G; Ahn, J-W; Maingi, R; Gray, T K; Roquemore, A L

    2012-05-01

    A novel imaging adaptor providing the capability to extend a standard single-band infrared (IR) camera into a two-color or dual-band device has been developed for application to high-speed IR thermography on the National Spherical Tokamak Experiment (NSTX). Temperature measurement with two-band infrared imaging has the advantage of being mostly independent of surface emissivity, which may vary significantly in the liquid lithium divertor installed on NSTX as compared to that of an all-carbon first wall. In order to take advantage of the high-speed capability of the existing IR camera at NSTX (1.6-6.2 kHz frame rate), a commercial visible-range optical splitter was extensively modified to operate in the medium wavelength and long wavelength IR. This two-band IR adapter utilizes a dichroic beamsplitter, which reflects 4-6 μm wavelengths and transmits 7-10 μm wavelength radiation, each with >95% efficiency and projects each IR channel image side-by-side on the camera's detector. Cutoff filters are used in each IR channel, and ZnSe imaging optics and mirrors optimized for broadband IR use are incorporated into the design. In-situ and ex-situ temperature calibration and preliminary data of the NSTX divertor during plasma discharges are presented, with contrasting results for dual-band vs. single-band IR operation.
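
    The emissivity insensitivity mentioned above can be illustrated with a toy ratio calculation: for a gray surface the emissivity cancels in the ratio of band-integrated Planck radiances from the two IR channels, so temperature can be recovered by inverting that ratio numerically. The band edges and test temperature below are assumptions for illustration, not the NSTX calibration procedure.

```python
# Sketch of two-band (ratio) thermometry: emissivity cancels for a gray surface.
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam_m, T):
    """Blackbody spectral radiance at wavelength lam_m (m) and temperature T (K)."""
    return (2 * H * C**2 / lam_m**5) / np.expm1(H * C / (lam_m * KB * T))

def band_radiance(T, lo_um, hi_um):
    value, _ = quad(planck, lo_um * 1e-6, hi_um * 1e-6, args=(T,))
    return value

def band_ratio(T):
    return band_radiance(T, 4, 6) / band_radiance(T, 7, 10)

emissivity, true_T = 0.35, 600.0                  # assumed gray surface
measured_ratio = (emissivity * band_radiance(true_T, 4, 6)) / (
                  emissivity * band_radiance(true_T, 7, 10))
recovered_T = brentq(lambda T: band_ratio(T) - measured_ratio, 300.0, 1500.0)
print("recovered temperature:", round(recovered_T, 1), "K")
```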

  10. A technique for the reduction of banding in Landsat Thematic Mapper Images

    USGS Publications Warehouse

    Helder, Dennis L.; Quirk, Bruce K.; Hood, Joy J.

    1992-01-01

    The radiometric difference between forward and reverse scans in Landsat thematic mapper (TM) images, referred to as "banding," can create problems when enhancing the image for interpretation or when performing quantitative studies. Recent research has led to the development of a method that reduces the banding in Landsat TM data sets. It involves passing a one-dimensional spatial kernel over the data set. This kernel is developed from the statistics of the banding pattern and is based on the Wiener filter. It has been implemented on both a DOS-based microcomputer and several UNIX-based computer systems. The algorithm has successfully reduced the banding in several test data sets.
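
    The idea of passing a one-dimensional noise-suppressing kernel across the scan lines can be sketched with SciPy's adaptive Wiener filter applied in a 7-by-1 window. This is only a stand-in for the statistics-derived kernel described in the record; the kernel length and the synthetic banded image are assumptions.

```python
# Sketch: reduce scan-line banding with a 1-D Wiener filter run down each column.
import numpy as np
from scipy.signal import wiener

def reduce_banding(image, length=7, noise_power=None):
    """Apply a vertical (length x 1) Wiener filter across the scan lines."""
    return wiener(image.astype(float), mysize=(length, 1), noise=noise_power)

# Toy image: constant scene plus alternating per-row offsets standing in for banding.
rng = np.random.default_rng(3)
scene = rng.normal(100.0, 0.5, (64, 64))
offsets = np.where(np.arange(64) % 2 == 0, 2.0, -2.0)
banded = scene + offsets[:, None]
filtered = reduce_banding(banded)
# Compare the spread of row means away from the image edges.
print("row-mean std before:", round(float(banded[4:-4].mean(axis=1).std()), 2))
print("row-mean std after: ", round(float(filtered[4:-4].mean(axis=1).std()), 2))
```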

  11. Suppression of vegetation in LANDSAT ETM+ remote sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Le; Porwal, Alok; Holden, Eun-Jung; Dentith, Michael

    2010-05-01

    Vegetation cover is an impediment to the interpretation of multispectral remote sensing images for geological applications, especially in densely vegetated terrains. In order to enhance the underlying geological information in such terrains, it is desirable to suppress the reflectance component of vegetation. One form of spectral unmixing that has been successfully used for vegetation reflectance suppression in multispectral images is called "forced invariance". It is based on segregating components of the reflectance spectrum that are invariant with respect to a specific spectral index such as the NDVI. The forced invariance method uses algorithms such as software defoliation. However, the outputs of software defoliation are single channel data, which are not amenable to geological interpretations. Crippen and Blom (2001) proposed a new forced invariance algorithm that utilizes band statistics, rather than band ratios. The authors demonstrated the effectiveness of their algorithms on a LANDSAT TM scene from Nevada, USA, especially in open canopy areas in mixed and semi-arid terrains. In this presentation, we report the results of our experimentation with this algorithm on a densely to sparsely vegetated Landsat ETM+ scene. We selected a scene (Path 119, Row 39) acquired on 18th July, 2004. Two study areas located around the city of Hangzhou, eastern China were tested. One of them covers uninhabited hilly terrain characterized by low rugged topography, parts of the hills are densely vegetated; another one covers both inhabited urban areas and uninhabited hilly terrain, which is densely vegetated. Crippen and Blom's algorithm is implemented in the following sequential steps: (1) dark pixel correction; (2) vegetation index calculation; (3) estimation of statistical relationship between vegetation index and digital number (DN) values for each band; (4) calculation of a smooth best-fit curve for the above relationships; and finally, (5) selection of a target average DN value and scaling all pixels at each vegetation index level by an amount that shifts the curve to the target digital number (DN). The main drawback of their algorithm is severe distortions of the DN values of non-vegetated areas, a suggested solution is masking outliers such as cloud, water, etc. We therefore extend this algorithm by masking non-vegetated areas. Our algorithm comprises the following three steps: (1) masking of barren or sparsely vegetated areas using a threshold based on a vegetation index that is calculated after atmosphere correction (dark pixel correction and ACTOR were compared) in order to conserve their original spectral information through the subsequent processing; (2) applying Crippen and Blom's forced invariance algorithm to suppress the spectral response of vegetation only in vegetated areas; and (3) combining the processed vegetated areas with the masked barren or sparsely vegetated areas followed by histogram equalization to eliminate the differences in color-scales between these two types of areas, and enhance the integrated image. The output images of both study areas showed significant improvement over the original images in terms of suppression of vegetation reflectance and enhancement of the underlying geological information. The processed images show clear banding, probably associated with lithological variations in the underlying rock formations. 
The colors of non-vegetated pixels are distorted in the unmasked results, but in the same locations the pixels in the masked results show regions of higher contrast. We conclude that the algorithm offers an effective way to enhance geological information in LANDSAT TM/ETM+ images of terrains with significant vegetation cover. It is also applicable to other multispectral satellite data that have bands in similar wavelength regions. In addition, an application of this method to hyperspectral data may be possible as long as the data can provide the vegetation band ratios.
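
    The five processing steps listed in this record can be condensed into a short sketch: after a dark-pixel correction and NDVI calculation, the mean DN of a band is tabulated per NDVI bin, the resulting curve is lightly smoothed, and every pixel is scaled so that the curve flattens to a target DN. Bin count, smoothing, target choice, and the synthetic bands are illustrative assumptions, not the settings used by the authors.

```python
# Sketch of forced invariance: flatten a band's statistical dependence on NDVI.
import numpy as np

def forced_invariance(band, ndvi, n_bins=32):
    """Scale each pixel so the mean DN per NDVI bin becomes a constant target."""
    bins = np.linspace(ndvi.min(), ndvi.max(), n_bins + 1)
    which = np.clip(np.digitize(ndvi, bins) - 1, 0, n_bins - 1)
    # Mean DN per NDVI bin (the "best-fit curve"), gaps filled, lightly smoothed.
    curve = np.array([band[which == b].mean() if np.any(which == b) else np.nan
                      for b in range(n_bins)])
    idx = np.arange(n_bins)
    curve = np.interp(idx, idx[~np.isnan(curve)], curve[~np.isnan(curve)])
    curve = np.convolve(np.pad(curve, 1, mode="edge"), np.ones(3) / 3, mode="valid")
    target = band.mean()                           # target average DN
    return band * (target / np.maximum(curve[which], 1e-6))

# Toy bands: NIR brightens with vegetation, so NDVI correlates with the red band.
rng = np.random.default_rng(4)
red = rng.uniform(10, 60, (100, 100))
nir = red * 2 + rng.uniform(0, 80, (100, 100))
ndvi = (nir - red) / (nir + red)
dark_corrected = red - red.min()                   # step 1: dark-pixel correction
suppressed = forced_invariance(dark_corrected, ndvi)
corr = lambda a, b: float(np.corrcoef(a.ravel(), b.ravel())[0, 1])
print("correlation with NDVI before:", round(corr(dark_corrected, ndvi), 2))
print("correlation with NDVI after: ", round(corr(suppressed, ndvi), 2))
```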

  12. Space Radar Image of Manaus, Brazil

    NASA Technical Reports Server (NTRS)

    1994-01-01

    These two false-color images of the Manaus region of Brazil in South America were acquired by the Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar on board the space shuttle Endeavour. The image at left was acquired on April 12, 1994, and the image at right was acquired on October 3, 1994. The area shown is approximately 8 kilometers by 40 kilometers (5 miles by 25 miles). The two large rivers in this image, the Rio Negro (at top) and the Rio Solimoes (at bottom), combine at Manaus (west of the image) to form the Amazon River. The image is centered at about 3 degrees south latitude and 61 degrees west longitude. North is toward the top left of the images. The false colors were created by displaying three L-band polarization channels: red areas correspond to high backscatter, horizontally transmitted and received, while green areas correspond to high backscatter, horizontally transmitted and vertically received. Blue areas show low returns at vertical transmit/receive polarization; hence the bright blue colors of the smooth river surfaces can be seen. Using this color scheme, green areas in the image are heavily forested, while blue areas are either cleared forest or open water. The yellow and red areas are flooded forest or floating meadows. The extent of the flooding is much greater in the April image than in the October image and appears to follow the 10-meter (33-foot) annual rise and fall of the Amazon River. The flooded forest is a vital habitat for fish, and floating meadows are an important source of atmospheric methane. These images demonstrate the capability of SIR-C/X-SAR to study important environmental changes that are impossible to see with optical sensors over regions such as the Amazon, where frequent cloud cover and dense forest canopies block monitoring of flooding. Field studies by boat, on foot and in low-flying aircraft by the University of California at Santa Barbara, in collaboration with Brazil's Instituto Nacional de Pesguisas Estaciais, during the first and second flights of the SIR-C/X-SAR system have validated the interpretation of the radar images. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V.(DLR), the major partner in science, operations and data processing of X-SAR.

  13. ARC-1989-A89-7045

    NASA Image and Video Library

    1989-08-26

    Range: 280,000 km (170,000 miles). P-34726 BW. Two 10-minute exposures of Neptune's rings clearly show the two main rings, as well as the inner faint ring and the faint band that extends planetward from roughly halfway between the two bright rings. Both bright rings have material throughout their entire orbit, and are therefore continuous. The inner ring and the broad band are much fainter than the two narrow main rings. These images were taken 1 hour and 27 minutes apart, using the clear filter on Voyager 2's wide-angle camera. These long-exposure images were taken while the rings were backlit by the sun. This viewing geometry enhances the visibility of dust and allows optically thinner parts of the rings to be seen. The bright glare in the center is due to overexposure of the crescent of Neptune. The two gaps in the upper part of the outer ring in the image on the left are due to the removal of blemishes during computer processing of the images. Numerous bright stars are evident in the background.

  14. Shallow sea-floor reflectance and water depth derived by unmixing multispectral imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bierwirth, P.N.; Lee, T.J.; Burne, R.V.

    1993-03-01

    A major problem for mapping shallow water zones by the analysis of remotely sensed data is that contrast effects due to water depth obscure and distort the spectral nature of the substrate. This paper outlines a new method which unmixes the exponential influence of depth in each pixel by employing a mathematical constraint. This leaves a multispectral residual which represents relative substrate reflectance. Inputs to the process are the raw multispectral data and water attenuation coefficients derived by the co-analysis of known bathymetry and remotely sensed data. Outputs are substrate-reflectance images corresponding to the input bands and a greyscale depth image. The method has been applied in the analysis of Landsat TM data at Hamelin Pool in Shark Bay, Western Australia. Algorithm-derived substrate reflectance images for Landsat TM bands 1, 2, and 3 combined in color represent the optimum enhancement for mapping or classifying substrate types. As a result, this color image successfully delineated features which were obscured in the raw data, such as the distributions of sea-grasses, microbial mats, and sandy areas. 19 refs.
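
    The depth-unmixing idea can be sketched under strong simplifying assumptions: take the log radiance model ln(L_i) = ln(R_i) - 2*k_i*z and, as the per-pixel mathematical constraint, require the sum of log substrate reflectances over the bands to equal a scene constant; depth then follows from the band sum and each band's reflectance from back-substitution. The constraint, attenuation coefficients, and toy radiances below are illustrative assumptions, not the exact formulation of the paper.

```python
# Heavily simplified sketch of unmixing an exponential water-depth effect.
import numpy as np

def unmix_depth(log_radiance, k, log_reflectance_sum=0.0):
    """Return (substrate log-reflectance cube, relative depth image)."""
    k = np.asarray(k, dtype=float)
    depth = (log_reflectance_sum - log_radiance.sum(axis=-1)) / (2.0 * k.sum())
    log_reflectance = log_radiance + 2.0 * k * depth[..., None]
    return log_reflectance, depth

# Toy scene: 3 bands, constant substrate, depth increasing from left to right.
k = np.array([0.06, 0.10, 0.35])             # per-band attenuation (1/m), assumed
true_log_r = np.array([-1.0, -0.8, -1.2])    # substrate log reflectance, assumed
depth = np.tile(np.linspace(0.5, 5.0, 6), (4, 1))
log_L = true_log_r - 2.0 * k * depth[..., None]
rec_log_r, rec_depth = unmix_depth(log_L, k, log_reflectance_sum=float(true_log_r.sum()))
print(np.allclose(rec_log_r, true_log_r), np.allclose(rec_depth, depth))
```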

  15. Three frequency false-color image of Prince Albert, Canada

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a three-frequency, false color image of Prince Albert, Canada, centered at 53.91 north latitude and 104.69 west longitude. It was produced using data from the X-band, C-band and L-band radars that comprise the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR). SIR-C/X-SAR acquired this image on the 20th orbit of the Shuttle Endeavour. The area is located 40 km north and 30 km east of the town of Prince Albert in the Saskatchewan province of Canada. The image covers the area east of the Candle Lake, between gravel surface highways 120 and 106 and west of 106. The area in the middle of the image covers the entire Nipawin (Narrow Hills) provincial park. Most of the dark blue areas in the image are the ice covered lakes. The dark area on the top right corner of the image is the White Gull Lake north of the intersection of highway 120 and 913. The right middle part of the image shows Lake Ispuchaw and Lower Fishing Lake. The deforested areas are shown by light

  16. Contrast based band selection for optimized weathered oil detection in hyperspectral images

    NASA Astrophysics Data System (ADS)

    Levaux, Florian; Bostater, Charles R., Jr.; Neyt, Xavier

    2012-09-01

    Hyperspectral imagery offers unique benefits for detection of land and water features due to the information contained in reflectance signatures such as the bi-directional reflectance distribution function or BRDF. The reflectance signature directly shows the relative absorption and backscattering features of targets. These features can be very useful in shoreline monitoring or surveillance applications, for example to detect weathered oil. In real-time detection applications, processing of hyperspectral data can be an important tool, and optimal band selection is thus important in order to select the essential bands using the absorption and backscatter information. In the present paper, band selection is based upon the optimization of target detection using contrast algorithms. The common definition of the contrast (using only one band out of all possible combinations available within a hyperspectral image) is generalized in order to consider all the possible combinations of wavelength-dependent contrasts using hyperspectral images. The inflection (defined here as an approximation of the second derivative) is also used to enhance the variations in the reflectance spectra as well as in the contrast spectra in order to assist in optimal band selection. The results of the selection in terms of target detection (false alarms and missed detections) are also compared with a previous method to perform feature detection, namely the matched filter. In this paper, imagery is acquired using a pushbroom hyperspectral sensor mounted at the bow of a small vessel. The sensor is mechanically rotated using an optical rotation stage. This opto-mechanical scanning system produces hyperspectral images with pixel sizes on the order of mm to cm scales, depending upon the distance between the sensor and the shoreline being monitored. The motion of the platform during the acquisition induces distortions in the collected HSI imagery. It is therefore necessary to apply a motion correction to the imagery. In this paper, imagery is corrected for the pitching motion of the vessel, which causes most of the deformation when the vessel is anchored at 2 points (bow and stern) during the acquisition of the hyperspectral imagery.
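
    One simple way to realize contrast-driven band selection is sketched below: a per-band contrast between mean target and background spectra ranks single bands, and a band-difference contrast ranks band pairs. Both contrast definitions and the synthetic "weathered oil" spectra are illustrative assumptions, not the paper's exact optimization or data.

```python
# Sketch: rank single bands and band pairs by a target/background contrast.
import numpy as np
from itertools import combinations

def band_contrast(target, background, eps=1e-12):
    """Per-band contrast |t - b| / (t + b) between mean spectra."""
    t, b = np.asarray(target, float), np.asarray(background, float)
    return np.abs(t - b) / (t + b + eps)

def pair_contrast(target, background, i, j, eps=1e-12):
    """Contrast of the band difference (one simple two-band generalization)."""
    dt, db = target[i] - target[j], background[i] - background[j]
    return abs(dt - db) / (abs(dt) + abs(db) + eps)

def best_band_pairs(target, background, n_pairs=3):
    scores = [(i, j, pair_contrast(target, background, i, j))
              for i, j in combinations(range(len(target)), 2)]
    return sorted(scores, key=lambda s: s[2], reverse=True)[:n_pairs]

wavelengths = np.linspace(400, 900, 50)
background = 0.30 + 0.0002 * (wavelengths - 400)           # smooth background spectrum
target = background.copy()
target[30:35] += 0.15                                      # "weathered oil" feature
print("best single band index:", int(np.argmax(band_contrast(target, background))))
print("top band pairs:", [(i, j, round(float(s), 2))
                          for i, j, s in best_band_pairs(target, background)])
```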

  17. Background correction in forensic photography. I. Photography of blood under conditions of non-uniform illumination or variable substrate color--theoretical aspects and proof of concept.

    PubMed

    Wagner, John H; Miskelly, Gordon M

    2003-05-01

    The combination of photographs taken at two or three wavelengths at and bracketing an absorbance peak indicative of a particular compound can lead to an image with enhanced visualization of the compound. This procedure works best for compounds with absorbance bands that are narrow compared with "average" chromophores. If necessary, the photographs can be taken with different exposure times to ensure that sufficient light from the substrate is detected at all three wavelengths. The combination of images is readily performed if the images are obtained with a digital camera and are then processed using an image processing program. Best results are obtained if linear images at the peak maximum, at a slightly shorter wavelength, and at a slightly longer wavelength are used. However, acceptable results can also be obtained under many conditions if non-linear photographs are used or if only two wavelengths (one of which is at the peak maximum) are combined. These latter conditions are more achievable by many "mid-range" digital cameras. Wavelength selection can either be by controlling the illumination (e.g., by using an alternate light source) or by use of narrow bandpass filters. The technique is illustrated using blood as the target analyte, using bands of light centered at 395, 415, and 435 nm. The extension of the method to detection of blood by fluorescence quenching is also described.
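
    The three-wavelength combination described above can be sketched in log space: the background is interpolated from the two bracketing images and subtracted from the on-peak image, so broadband variations in illumination and substrate largely cancel while the narrow absorbance band remains. The equal weights and the synthetic images below are assumptions for illustration, not the paper's calibrated procedure.

```python
# Sketch: three-wavelength absorbance-peak enhancement in log space.
import numpy as np

def peak_enhancement(i_short, i_peak, i_long, eps=1e-6):
    """Return 0.5*log(I_short) + 0.5*log(I_long) - log(I_peak) per pixel."""
    logs = [np.log(np.clip(x.astype(float), eps, None)) for x in (i_short, i_peak, i_long)]
    return 0.5 * logs[0] + 0.5 * logs[2] - logs[1]

# Toy substrate with non-uniform illumination and a blood-like stain that
# absorbs strongly only at the peak wavelength (415 nm).
illum = np.linspace(0.5, 1.5, 64)[None, :] * np.ones((64, 64))
stain = np.zeros((64, 64))
stain[20:40, 20:40] = 1.0
i395 = illum * (1.0 - 0.10 * stain)
i415 = illum * (1.0 - 0.60 * stain)        # strong Soret-band absorption
i435 = illum * (1.0 - 0.12 * stain)
enhanced = peak_enhancement(i395, i415, i435)
print("stain mean:", round(float(enhanced[20:40, 20:40].mean()), 3),
      "background mean:", round(float(enhanced[stain == 0].mean()), 3))
```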

  18. Hyperspectral image segmentation using a cooperative nonparametric approach

    NASA Astrophysics Data System (ADS)

    Taher, Akar; Chehdi, Kacem; Cariou, Claude

    2013-10-01

    In this paper a new unsupervised nonparametric cooperative and adaptive hyperspectral image segmentation approach is presented. The hyperspectral images are partitioned band by band in parallel, and intermediate classification results are evaluated and fused to obtain the final segmentation result. Two unsupervised nonparametric segmentation methods are used in parallel cooperation, namely the Fuzzy C-means (FCM) method and the Linde-Buzo-Gray (LBG) algorithm, to segment each band of the image. The originality of the approach relies firstly on its local adaptation to the type of regions in an image (textured, non-textured), and secondly on the introduction of several levels of evaluation and validation of intermediate segmentation results before obtaining the final partitioning of the image. For the management of similar or conflicting results issued from the two classification methods, we gradually introduce various assessment steps that exploit the information of each spectral band and its adjacent bands, and finally the information of all the spectral bands. In our approach, the detected textured and non-textured regions are treated separately from the feature extraction step up to the final classification results. This approach was first evaluated on a large number of monocomponent images constructed from the Brodatz album. It was then evaluated on two real applications, using respectively a multispectral image for cedar tree detection in the region of Baabdat (Lebanon) and a hyperspectral image for identification of invasive and non-invasive vegetation in the region of Cieza (Spain). The correct classification rate (CCR) for the first application is over 97%, and for the second application the average correct classification rate (ACCR) is over 99%.

  19. A sampling procedure to guide the collection of narrow-band, high-resolution spatially and spectrally representative reflectance data. [satellite imagery of earth resources

    NASA Technical Reports Server (NTRS)

    Brand, R. R.; Barker, J. L.

    1983-01-01

    A multistage sampling procedure using image processing, geographical information systems, and analytical photogrammetry is presented which can be used to guide the collection of representative, high-resolution spectra and discrete reflectance targets for future satellite sensors. The procedure is general and can be adapted to characterize areas as small as minor watersheds and as large as multistate regions. Beginning with a user-determined study area, successive reductions in size and spectral variation are performed using image analysis techniques on data from the Multispectral Scanner, orbital and simulated Thematic Mapper, low altitude photography synchronized with the simulator, and associated digital data. An integrated image-based geographical information system supports processing requirements.

  20. Duodenal villous morphology assessed using magnification narrow band imaging correlates well with histology in patients with suspected malabsorption syndrome.

    PubMed

    Dutta, Amit Kumar; Sajith, Kattiparambil Gangadharan; Shah, Gautam; Pulimood, Anna Benjamin; Simon, Ebby George; Joseph, Anjilivelil Joseph; Chacko, Ashok

    2014-11-01

    Narrow band imaging with magnification enables detailed assessment of duodenal villi and may be useful in predicting the presence of villous atrophy or normal villi. We aimed to assess the morphology of duodenal villi using magnification narrow band imaging and correlate it with histology findings in patients with clinically suspected malabsorption syndrome. Patients with clinical suspicion of malabsorption presenting at a tertiary care center were prospectively recruited in this diagnostic intervention study. Patients underwent upper gastrointestinal endoscopy using magnification narrow band imaging. The villous morphology in the second part of the duodenum was assessed independently by two endoscopists and the presence of normal or atrophic villi was recorded. Biopsy specimen was obtained from the same area and was examined by two pathologists together. The sensitivity and specificity of magnification narrow band imaging in detecting the presence of duodenal villous atrophy was calculated and compared to the histology. One hundred patients with clinically suspected malabsorption were included in this study. Sixteen patients had histologically confirmed villous atrophy. The sensitivity and specificity of narrow band imaging in predicting villous atrophy was 87.5% and 95.2%, respectively, for one endoscopist. The corresponding figures for the second endoscopist were 81.3% and 92.9%, respectively. The interobserver agreement was very good with a kappa value of 0.87. Magnification narrow band imaging performed very well in predicting duodenal villous morphology. This may help in carrying out targeted biopsies and avoiding unnecessary biopsies in patients with suspected malabsorption. © 2014 The Authors. Digestive Endoscopy © 2014 Japan Gastroenterological Endoscopy Society.
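
    As a worked check of the reported accuracy figures for the first endoscopist, the reported 87.5% and 95.2% imply (for 16 atrophic and 84 non-atrophic patients) 14 true positives, 2 false negatives, 80 true negatives and 4 false positives; this split is inferred here for illustration, not stated in the record.

```python
# Worked check of sensitivity and specificity from the inferred 2x2 table.
def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

tp, fn, tn, fp = 14, 2, 80, 4
print(f"sensitivity = {sensitivity(tp, fn):.1%}")   # 87.5%
print(f"specificity = {specificity(tn, fp):.1%}")   # 95.2%
```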

  1. MACSAT - A Near Equatorial Earth Observation Mission

    NASA Astrophysics Data System (ADS)

    Kim, B. J.; Park, S.; Kim, E.-E.; Park, W.; Chang, H.; Seon, J.

    The MACSAT mission was initiated by Malaysia to launch a high-resolution remote sensing satellite into Near Equatorial Orbit (NEO). Due to its geographical location, Malaysia can derive large benefits from NEO satellite operation. From the baseline circular orbit of 685 km altitude with 7 degrees of inclination, the neighboring regions around Malaysian territory can be frequently monitored. The equatorial environment around the globe can also be regularly observed with unique revisit characteristics. The primary mission objective of the MACSAT program is to develop and validate technologies for a near equatorial orbit remote sensing satellite system. MACSAT is optimally designed to accommodate an electro-optic Earth observation payload, the Medium-sized Aperture Camera (MAC). Joint Malaysian and Korean engineering teams have been formed for the effective implementation of the satellite system. An integrated team approach is adopted for the joint development of MACSAT. MAC is a pushbroom-type camera with 2.5 m Ground Sampling Distance (GSD) in the panchromatic band and 5 m GSD in four multi-spectral bands. The satellite platform is a mini-class satellite. Including the MAC payload, the satellite weighs under 200 kg. The spacecraft bus is designed optimally to support payload operations during 3 years of mission life. The payload has a 20 km swath width with ±30° of tilting capability. A 32 Gbit solid state recorder is implemented as the mass image storage. The ground element is an integrated ground station for mission control and payload operation. It is equipped with an S-band up/down link for commanding and telemetry reception as well as a 30 Mbps class X-band down link for image reception and processing. The MACSAT system is capable of generating 1:25,000-scale image maps. It is also anticipated to have capability for cross-track stereo imaging for Digital Elevation Model (DEM) generation.

  2. The next Landsat satellite; the Landsat Data Continuity Mission

    USGS Publications Warehouse

    Irons, James R.; Dwyer, John L.; Barsi, Julia A.

    2012-01-01

    The National Aeronautics and Space Administration (NASA) and the Department of Interior United States Geological Survey (USGS) are developing the successor mission to Landsat 7 that is currently known as the Landsat Data Continuity Mission (LDCM). NASA is responsible for building and launching the LDCM satellite observatory. USGS is building the ground system and will assume responsibility for satellite operations and for collecting, archiving, and distributing data following launch. The observatory will consist of a spacecraft in low-Earth orbit with a two-sensor payload. One sensor, the Operational Land Imager (OLI), will collect image data for nine shortwave spectral bands over a 185 km swath with a 30 m spatial resolution for all bands except a 15 m panchromatic band. The other instrument, the Thermal Infrared Sensor (TIRS), will collect image data for two thermal bands with a 100 m resolution over a 185 km swath. Both sensors offer technical advancements over earlier Landsat instruments. OLI and TIRS will coincidently collect data and the observatory will transmit the data to the ground system where it will be archived, processed to Level 1 data products containing well calibrated and co-registered OLI and TIRS data, and made available for free distribution to the general public. The LDCM development is on schedule for a December 2012 launch. The USGS intends to rename the satellite "Landsat 8" following launch. By either name a successful mission will fulfill a mandate for Landsat data continuity. The mission will extend the almost 40-year Landsat data archive with images sufficiently consistent with data from the earlier missions to allow long-term studies of regional and global land cover change.

  3. Spectral Reconstruction for Obtaining Virtual Hyperspectral Images

    NASA Astrophysics Data System (ADS)

    Perez, G. J. P.; Castro, E. C.

    2016-12-01

    Hyperspectral sensors have demonstrated their capabilities in identifying materials and detecting processes in a satellite scene. However, the availability of hyperspectral images is limited due to the high development cost of these sensors. Currently, most of the readily available data are from multi-spectral instruments. Spectral reconstruction is an alternative method to address the need for hyperspectral information. The spectral reconstruction technique has been shown to provide quick and accurate detection of defects in an integrated circuit, to recover damaged parts of frescoes, and to aid in converting a microscope into an imaging spectrometer. By using several spectral bands together with a spectral library, a spectrum acquired by a sensor can be expressed as a linear superposition of elementary signals. In this study, spectral reconstruction is used to estimate the spectra of different surfaces imaged by Landsat 8. Four atmospherically corrected surface reflectance bands of Landsat 8, three visible (499 nm, 585 nm, 670 nm) and one near-infrared (872 nm), and a spectral library of ground elements acquired from the United States Geological Survey (USGS) are used. The spectral library is limited to the 420-1020 nm spectral range and is interpolated at one nanometer resolution. Singular Value Decomposition (SVD) is used to calculate the basis spectra, which are then applied to reconstruct the spectrum. The spectral reconstruction is applied for test cases within the library consisting of vegetation communities. This technique was successful in reconstructing hyperspectral signals with errors of less than 12% for most of the test cases. Hence, this study demonstrates the potential of simulating information at any desired wavelength, creating a virtual hyperspectral sensor without the need for additional satellite bands.
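
    The reconstruction pipeline in this record can be sketched end to end: an SVD of a spectral library yields basis spectra, the basis is projected through the band response functions, the coefficients are fitted to the four-band measurement by least squares, and the full-resolution spectrum is rebuilt from those coefficients. The Gaussian band responses and the toy library below are illustrative assumptions standing in for the Landsat 8 responses and the USGS library.

```python
# Sketch: reconstruct a full-resolution spectrum from a few broad bands.
import numpy as np

wl = np.arange(420, 1021)                                 # 1 nm grid
def gauss(center, width):                                 # toy band response
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

bands = np.stack([gauss(c, 30) for c in (499, 585, 670, 872)])
bands /= bands.sum(axis=1, keepdims=True)

# Toy "library": smooth random mixtures standing in for USGS spectra.
rng = np.random.default_rng(6)
library = np.array([np.convolve(rng.uniform(0, 1, wl.size), np.ones(101) / 101, "same")
                    for _ in range(20)])

U, s, Vt = np.linalg.svd(library, full_matrices=False)
basis = Vt[:4]                                            # first 4 basis spectra
A = bands @ basis.T                                       # band-integrated basis

truth = library[0]
measurement = bands @ truth                               # simulated 4-band pixel
coeffs, *_ = np.linalg.lstsq(A, measurement, rcond=None)
reconstructed = coeffs @ basis
err = np.linalg.norm(reconstructed - truth) / np.linalg.norm(truth)
print("relative reconstruction error:", round(float(err), 3))
```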

  4. The role of intraoperative narrow-band imaging in transoral laser microsurgery for early and moderately advanced glottic cancer.

    PubMed

    Klimza, Hanna; Jackowska, Joanna; Piazza, Cesare; Banaszewski, Jacek; Wierzbicka, Malgorzata

    2018-03-01

    Trans-oral laser microsurgery is an established technique for the treatment of early and moderately advanced laryngeal cancer. The authors intend to test the usefulness of narrow-band imaging in the intraoperative assessment of the laryngeal mucosa in terms of specifying surgical margins. Forty-four consecutive T1-T2 glottic cancers treated with trans-oral laser microsurgery Type I-VI cordectomy were presented. Suspected areas (90 samples/44 patients) were biopsied under the guidance of narrow-band imaging and white light and sent for frozen section. Our study revealed that 75 of 90 (83.3%) white light and narrow-band imaging-guided samples were histopathologically positive: 30 (40%) were confirmed as carcinoma in situ or invasive carcinoma and 45 (60%) as moderate to severe dysplasia. In 6 patients the mucosa was suspicious only under narrow-band imaging, with no suspicion under white light. Thus, in these 6 patients 18/90 (20%) samples were taken. In 5/6 patients, 16/18 (88.8%) samples were positive in frozen section: in 6/18 (33.3%) carcinoma was confirmed (2 patients) and in 10/18 (66.6%) severe dysplasia was confirmed (3 patients). In 1 patient, 2/18 (11.1%) samples were negative in frozen section. The presented analysis showed that the sensitivity, specificity and accuracy of white light were 79.5%, 20% and 71.1%, respectively, while those of narrow-band imaging were 100%, 0.0% and 85.7%, respectively. The intraoperative use of narrow-band imaging proved to be valuable in the visualization of suspect areas of the mucosa. Narrow-band imaging confirms suspicions raised under white light and, importantly, it showed microlesions beyond the scope of white light. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  5. Remote sensing. [land use mapping

    NASA Technical Reports Server (NTRS)

    Jinich, A.

    1979-01-01

    Various imaging techniques are outlined for use in mapping, land use, and land management in Mexico. Among the techniques discussed are pattern recognition and photographic processing. The utilization of information from remote sensing devices on satellites are studied. Multispectral band scanners are examined and software, hardware, and other program requirements are surveyed.

  6. SVM-based feature extraction and classification of aflatoxin contaminated corn using fluorescence hyperspectral data

    USDA-ARS?s Scientific Manuscript database

    Support Vector Machine (SVM) was used in the Genetic Algorithms (GA) process to select and classify a subset of hyperspectral image bands. The method was applied to fluorescence hyperspectral data for the detection of aflatoxin contamination in Aspergillus flavus infected single corn kernels. In the...

  7. Emirates eXploration Imager (EXI) Overview from the Emirates Mars Mission

    NASA Astrophysics Data System (ADS)

    Al Shamsi, M. R.; Wolff, M. J.; Jones, A. R.; Khoory, M. A.; Osterloo, M. M.; AlMheiri, S.; Reed, H.; Drake, G.

    2017-12-01

    The Emirates eXploration Imager (EXI) instrument is one of three scientific instruments aboard the Emirates Mars Mission (EMM) spacecraft, "Hope". The planned launch window opens in the summer of 2020, with the goal of this United Arab Emirates (UAE) mission to explore the dynamics of the Martian atmosphere through global spatial sampling that includes both diurnal and seasonal timescales. A particular focus of the mission is the improvement of our understanding of the global circulation in the lower atmosphere and its connections to the upward transport of energy and the escape of atmospheric particles from the upper atmosphere. This will be accomplished using three unique and complementary scientific instruments. The subject of this presentation, EXI, is a multi-band camera capable of taking 12-megapixel images, which translates to a spatial resolution of better than 8 km, with well-calibrated radiometric performance. EXI uses a selector wheel mechanism consisting of 6 discrete bandpass filters to sample the optical spectral region: 3 UV bands and 3 visible (RGB) bands. Atmospheric characterization will involve the retrieval of the ice optical depth using the 300-340 nm band, the dust optical depth in the 205-235 nm range, and the column abundance of ozone with a band covering 245-275 nm. Radiometric fidelity is optimized while simplifying the optical design by separating the UV and VIS optical paths. The instrument is being developed jointly by the Laboratory for Atmospheric and Space Physics (LASP), University of Colorado, Boulder, USA, and the Mohammed Bin Rashid Space Centre (MBRSC), Dubai, UAE. The development of analysis software (reduction and retrieval) is being enabled through an EXI Observation Simulator. This package will produce EXI-like images using a combination of realistic viewing geometry (NAIF and a "reference trajectory") and simulated radiance values that include relevant atmospheric conditions and properties (Global Climate Model, DISORT). These noiseless images can then have instrument effects added (e.g., read noise, dark current, pixel sensitivity, etc.) to allow for the direct testing of data compression schemes, calibration pipeline processing, and atmospheric retrievals.
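
    The final step described above, adding instrument effects to noiseless simulated images, can be sketched with a simple noise model: convert radiance to photoelectrons, add dark current and Poisson shot noise, add Gaussian read noise, clip at full well and quantize to digital numbers. All detector parameters below are assumptions for illustration, not EXI's actual characteristics.

```python
# Sketch: add simple instrument effects to a noiseless simulated image.
import numpy as np

def add_instrument_effects(radiance, exposure_s=0.05, gain_e_per_dn=2.0,
                           dark_e_per_s=10.0, read_noise_e=8.0,
                           full_well_e=30000, rng=None):
    """Turn a noiseless signal map (photoelectrons/s) into noisy DN counts."""
    rng = rng or np.random.default_rng()
    signal_e = radiance * exposure_s + dark_e_per_s * exposure_s
    noisy_e = rng.poisson(signal_e).astype(float)              # shot + dark shot noise
    noisy_e += rng.normal(0.0, read_noise_e, radiance.shape)   # read noise
    noisy_e = np.clip(noisy_e, 0, full_well_e)                 # full-well clipping
    return np.round(noisy_e / gain_e_per_dn).astype(np.uint16) # quantize to DN

noiseless = np.full((128, 128), 2.0e5)                         # photoelectrons per second
frame = add_instrument_effects(noiseless, rng=np.random.default_rng(7))
print(frame.dtype, int(frame.min()), int(frame.max()))
```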

  8. Segmentation of prostate from ultrasound images using level sets on active band and intensity variation across edges.

    PubMed

    Li, Xu; Li, Chunming; Fedorov, Andriy; Kapur, Tina; Yang, Xiaoping

    2016-06-01

    In this paper, the authors propose a novel, efficient method to segment ultrasound images of the prostate with weak boundaries. Segmentation of the prostate from ultrasound images with weak boundaries arises widely in clinical applications. One of the most typical examples is the diagnosis and treatment of prostate cancer. Accurate segmentation of the prostate boundaries from ultrasound images plays an important role in many prostate-related applications such as the accurate placement of biopsy needles, the assignment of the appropriate therapy in cancer treatment, and the measurement of the prostate volume. Ultrasound images of the prostate are usually corrupted with intensity inhomogeneities, weak boundaries, and unwanted edges, which make segmentation of the prostate an inherently difficult task. To address these difficulties, the authors introduce an active band term and an edge descriptor term in the modified level set energy functional. The active band term deals with intensity inhomogeneities, and the edge descriptor term captures weak boundaries or rules out unwanted boundaries. The level set function of the proposed model is updated in a band region around the zero level set, which the authors call an active band. The active band restricts the method to using local image information in a banded region around the prostate contour. Compared to traditional level set methods, the average intensities inside/outside the zero level set are computed only in this banded region. Thus, only pixels in the active band influence the evolution of the level set. Weak boundaries are hard to distinguish by eye, but they are easier to detect in local patches within the band region around the prostate boundary. The authors incorporate an edge descriptor that calculates the total intensity variation in a local patch parallel to the normal direction of the zero level set, which can detect weak boundaries and avoid unwanted edges in the ultrasound images. The efficiency of the proposed model is demonstrated by experiments on real 3D volume images and 2D ultrasound images and by comparisons with other approaches. Validation results on real 3D TRUS prostate images show that the authors' model can obtain a Dice similarity coefficient (DSC) of 94.03% ± 1.50% and a sensitivity of 93.16% ± 2.30%. Experiments on 100 typical 2D ultrasound images show that the authors' method can obtain a sensitivity of 94.87% ± 1.85% and a DSC of 95.82% ± 2.23%. A reproducibility experiment was done to evaluate the robustness of the proposed model. As far as the authors know, prostate segmentation from ultrasound images with weak boundaries and unwanted edges remains a difficult task. A novel method using level sets with an active band and the intensity variation across edges is proposed in this paper. Extensive experimental results demonstrate that the proposed method is more efficient and accurate.
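
    To make the "active band" idea concrete, here is a minimal sketch (not the authors' full energy functional) that restricts the inside/outside mean computation to a narrow band around the zero level set:

        import numpy as np

        def active_band_means(image, phi, half_width=5.0):
            """Compute mean intensities inside/outside the zero level set of phi,
            restricted to the band |phi| <= half_width around the contour."""
            band = np.abs(phi) <= half_width
            inside = band & (phi < 0)
            outside = band & (phi >= 0)
            c_in = image[inside].mean() if inside.any() else 0.0
            c_out = image[outside].mean() if outside.any() else 0.0
            return band, c_in, c_out

        # One illustrative region-force update, applied only inside the active band:
        # band, c_in, c_out = active_band_means(image, phi)
        # phi -= dt * band * ((image - c_in) ** 2 - (image - c_out) ** 2)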

  9. Space Radar Image of Manaus, Brazil

    NASA Image and Video Library

    1999-01-27

    This false-color L-band image of the Manaus region of Brazil was acquired by NASA's Spaceborne Imaging Radar-C and X-Band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the space shuttle Endeavour on orbit 46 of the mission.

  10. A Minimum Spanning Forest Based Method for Noninvasive Cancer Detection with Hyperspectral Imaging

    PubMed Central

    Pike, Robert; Lu, Guolan; Wang, Dongsheng; Chen, Zhuo Georgia; Fei, Baowei

    2016-01-01

    Goal: The purpose of this paper is to develop a classification method that combines both spectral and spatial information for distinguishing cancer from healthy tissue on hyperspectral images in an animal model. Methods: An automated algorithm based on a minimum spanning forest (MSF) and optimal band selection has been proposed to classify healthy and cancerous tissue on hyperspectral images. A support vector machine (SVM) classifier is trained to create a pixel-wise classification probability map of cancerous and healthy tissue. This map is then used to identify markers that are used to compute mutual information for a range of bands in the hyperspectral image and thus select the optimal bands. An MSF is finally grown to segment the image using spatial and spectral information. Conclusion: The MSF-based method with automatically selected bands proved to be accurate in determining the tumor boundary on hyperspectral images. Significance: Hyperspectral imaging combined with the proposed classification technique has the potential to provide a noninvasive tool for cancer detection. PMID:26285052
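
    A hedged sketch of the band-selection step described above, ranking bands by mutual information between marker-pixel spectra and their labels (function and variable names are assumptions, not the authors' code):

        import numpy as np
        from sklearn.feature_selection import mutual_info_classif

        def select_bands(cube, marker_mask, marker_labels, n_bands=20):
            """Rank hyperspectral bands by mutual information between the marker
            pixels' reflectance and their tumor/healthy labels, and keep the top n."""
            h, w, b = cube.shape
            X = cube.reshape(-1, b)[marker_mask.ravel()]   # marker pixels x bands
            y = marker_labels[marker_mask]                  # 1 = tumor, 0 = healthy
            mi = mutual_info_classif(X, y, random_state=0)
            return np.argsort(mi)[::-1][:n_bands]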

  11. Plasma Treatment to Remove Carbon from Indium UV Filters

    NASA Technical Reports Server (NTRS)

    Greer, Harold F.; Nikzad, Shouleh; Beasley, Matthew; Gantner, Brennan

    2012-01-01

    The sounding rocket experiment FIRE (Far-ultraviolet Imaging Rocket Experiment) will improve the science community's ability to image a spectral region hitherto unexplored astronomically. The imaging band of FIRE (900 to 1,100 Angstroms) will help fill the current wavelength imaging observation hole existing from approximately 620 Angstroms to the GALEX band near 1,350 Angstroms. FIRE is a single-optic prime focus telescope with a 1.75-m focal length. The bandpass of 900 to 1,100 Angstroms is set by a combination of the mirror coating, the indium filter in front of the detector, and the salt coating on the front of the detector's microchannel plates. Critical to this is the indium filter, which must reduce the flux from Lyman-alpha at 1,216 Angstroms by a minimum factor of 10(exp -4). The cost of this Lyman-alpha removal is that the filter is not fully transparent at the desired wavelengths of 900 to 1,100 Angstroms. Recently, in a project to improve the performance of optical and solar blind detectors, JPL developed a plasma process capable of removing carbon contamination from indium metal. In this work, a low-power, low-temperature hydrogen plasma reacts with the carbon contaminants in the indium to form methane, but leaves the indium metal surface undisturbed. This process was recently tested in a proof-of-concept experiment with a filter provided by the University of Colorado. This initial test on a test filter showed an improvement in transmission from 7 to 9 percent near 900 Angstroms with no process optimization applied. Further improvements in this performance were readily achieved, bringing the total transmission to 12% with optimization of JPL's existing process.

  12. Calibrated Landsat ETM+ nonthermal-band image mosaics of Afghanistan

    USGS Publications Warehouse

    Davis, Philip A.

    2006-01-01

    In 2005, the U.S. Agency for International Development and the U.S. Trade and Development Agency contracted with the U.S. Geological Survey to perform assessments of the natural resources within Afghanistan. The assessments concentrate on the resources that are related to the economic development of that country. Therefore, assessments were initiated in oil and gas, coal, mineral resources, water resources, and earthquake hazards. All of these assessments require geologic, structural, and topographic information throughout the country at a finer scale and better accuracy than that provided by the existing maps, which were published in the 1970s by the Russians and Germans. The very rugged terrain in Afghanistan, the large scale of these assessments, and the terrorist threat in Afghanistan indicated that the best approach to provide the preliminary assessments was to use remotely sensed, satellite image data, although this may also apply to subsequent phases of the assessments. Therefore, the first step in the assessment process was to produce satellite image mosaics of Afghanistan that would be useful for these assessments. This report discusses the production and characteristics of the fundamental satellite image databases produced for these assessments, which are calibrated image mosaics of all six Landsat nonthermal (reflected) bands.

  13. The Spectral Image Processing System (SIPS): Software for integrated analysis of AVIRIS data

    NASA Technical Reports Server (NTRS)

    Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.

    1992-01-01

    The Spectral Image Processing System (SIPS) is a software package developed by the Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, in response to a perceived need to provide integrated tools for analysis of imaging spectrometer data both spectrally and spatially. SIPS was specifically designed to deal with data from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and the High Resolution Imaging Spectrometer (HIRIS), but was tested with other datasets including the Geophysical and Environmental Research Imaging Spectrometer (GERIS), GEOSCAN images, and Landsat TM. SIPS was developed using the 'Interactive Data Language' (IDL). It takes advantage of high speed disk access and fast processors running under the UNIX operating system to provide rapid analysis of entire imaging spectrometer datasets. SIPS allows analysis of single or multiple imaging spectrometer data segments at full spatial and spectral resolution. It also allows visualization and interactive analysis of image cubes derived from quantitative analysis procedures such as absorption band characterization and spectral unmixing. SIPS consists of three modules: SIPS Utilities, SIPS_View, and SIPS Analysis. SIPS version 1.1 is described below.

  14. Assessing Mesoscale Volcanic Aviation Hazards using ASTER

    NASA Astrophysics Data System (ADS)

    Pieri, D.; Gubbels, T.; Hufford, G.; Olsson, P.; Realmuto, V.

    2006-12-01

    The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) imager onboard the NASA Terra spacecraft is a joint project of the Japanese Ministry for Economy, Trade, and Industry (METI) and NASA. ASTER has acquired over one million multi-spectral 60 km by 60 km images of the earth over the last six years. It consists of three sub-instruments: (a) a four-channel VNIR (0.52-0.86 um) imager with a spatial resolution of 15 m/pixel, including three nadir-viewing bands (1N, 2N, 3N) and one repeated backward-viewing band (3B) for stereo-photogrammetric terrain reconstruction (8-12 m vertical resolution); (b) a SWIR (1.6-2.43 um) imager with six bands at 30 m/pixel; and (c) a TIR (8.125-11.65 um) instrument with five bands at 90 m/pixel. Returned data are processed in Japan at the Earth Remote Sensing Data Analysis Center (ERSDAC) and at the Land Processes Distributed Active Archive Center (LP DAAC), located at the USGS Center for Earth Resource Observation and Science (EROS) in Sioux Falls, South Dakota. Within the ASTER Project, the JPL Volcano Data Acquisition and Analyses System (VDAAS) houses over 60,000 ASTER volcano images of 1542 volcanoes worldwide and will be accessible for downloads by the general public and on-line image analyses by researchers in early 2007. VDAAS multi-spectral thermal infrared (TIR) de-correlation stretch products are optimized for volcanic ash detection and have a spatial resolution of 90 m/pixel. Digital elevation models (DEM) stereo-photogrammetrically derived from ASTER Band 3B/3N data are also available within VDAAS at 15 and 30 m/pixel horizontal resolution. Thus, ASTER visible, IR, and DEM data at 15-100 m/pixel resolution within VDAAS can be combined to provide useful boundary conditions on local volcanic eruption plume location, composition, and altitude, as well as on the topography of the underlying terrain. During and after eruptions, low-altitude winds and ash transport can be affected by topography and by other orographic thermal and water vapor transport effects from the micro (<1 km) to mesoscale (1-100 km). Such phenomena are thus well observed by ASTER and pose transient and severe hazards to aircraft operating in and out of airports near volcanoes (e.g., Anchorage, AK, USA; Catania, Italy; Kagoshima City, Japan). ASTER image data and derived products provide boundary conditions for 3D mesoscale atmospheric transport and chemistry models (e.g., RAMS) for retrospective and prospective studies of volcanic aerosol transport at low altitudes in takeoff and landing corridors near active volcanoes. Putative ASTER direct downlinks in the future could provide real-time mitigation of such hazards. Some examples of mesoscale analyses for threatened airspace near US and non-US airports will be shown. This work was, in part, carried out at the Jet Propulsion Laboratory of the California Institute of Technology under contract to the NASA Earth Science Research Program and as part of ASTER Science Team activities.

  15. Development of low optical cross talk filters for VIIRS (JPSS)

    NASA Astrophysics Data System (ADS)

    Murgai, Vijay; Hendry, Derek; Downing, Kevin; Carbone, David; Potter, John

    2016-09-01

    The Visible/Infrared Imaging Radiometer Suite (VIIRS) is a key sensor on the Suomi National Polar-orbiting Partnership (S-NPP) satellite, launched on October 28, 2011 into a polar orbit of 824 km nominal altitude, and on the JPSS sensors currently being built and integrated. VIIRS collects radiometric and imagery data of the Earth's atmosphere, oceans, and land surfaces in 22 spectral bands spanning the visible and infrared spectrum from 0.4 to 12.5 μm. Interference filters assembled in `butcher-block' arrays mounted adjacent to focal plane arrays provide spectral definition. Out-of-band signal and out-of-band optical cross-talk were observed for bands in the 0.4 to 1 μm range in testing of VIIRS for S-NPP. Optical cross-talk is in-band or out-of-band light incident on an adjacent filter or an adjacent region of the same filter that reaches the detector. Out-of-band optical cross-talk results in spectral and spatial `impurities' in the signal and consequent errors in calculated environmental parameters, such as ocean color, that rely on combinations of signals from more than one band. This paper presents results of characterization, specification, and coating process improvements that enabled production of filters with significantly reduced out-of-band light for Joint Polar Satellite System (JPSS) J1 and subsequent sensors. Total transmission and scatter measurements at a wavelength within the pass band can successfully characterize filter performance prior to dicing and assembling filters into butcher-block assemblies. Coating and process development demonstrated performance on test samples, followed by production of filters for J1 and J2. Results for J1 and J2 filters are presented.

  16. Modulations of eye movement patterns by spatial filtering during the learning and testing phases of an old/new face recognition task.

    PubMed

    Lemieux, Chantal L; Collin, Charles A; Nelson, Elizabeth A

    2015-02-01

    In two experiments, we examined the effects of varying the spatial frequency (SF) content of face images on eye movements during the learning and testing phases of an old/new recognition task. At both learning and testing, participants were presented with face stimuli band-pass filtered to 11 different SF bands, as well as an unfiltered baseline condition. We found that eye movements varied significantly as a function of SF. Specifically, the frequency of transitions between facial features showed a band-pass pattern, with more transitions for middle-band faces (≈5-20 cycles/face) than for low-band (≈<5 cpf) or high-band (≈>20 cpf) ones. These findings were similar for the learning and testing phases. The distributions of transitions across facial features were similar for the middle-band, high-band, and unfiltered faces, showing a concentration on the eyes and mouth; conversely, low-band faces elicited mostly transitions involving the nose and nasion. The eye movement patterns elicited by low, middle, and high bands are similar to those previous researchers have suggested reflect holistic, configural, and featural processing, respectively. More generally, our results are compatible with the hypotheses that eye movements are functional, and that the visual system makes flexible use of visuospatial information in face processing. Finally, our finding that only middle spatial frequencies yielded the same number and distribution of fixations as unfiltered faces adds more evidence to the idea that these frequencies are especially important for face recognition, and reveals a possible mediator for the superior performance that they elicit.
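
    For readers who want to reproduce the stimulus manipulation in spirit, a minimal FFT-based band-pass filter in cycles-per-face units follows (the cutoffs are illustrative, and the image width is treated as one face width):

        import numpy as np

        def bandpass_face(img, low_cpf, high_cpf):
            """Keep spatial frequencies between low_cpf and high_cpf cycles/face
            in a 2-D grayscale face image."""
            ny, nx = img.shape
            fy = np.fft.fftfreq(ny) * ny          # cycles per image height
            fx = np.fft.fftfreq(nx) * nx          # cycles per image width
            radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
            mask = (radius >= low_cpf) & (radius <= high_cpf)
            return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

        # e.g. a "middle band" face at roughly 5-20 cycles/face:
        # mid_band_face = bandpass_face(face_img, 5, 20)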

  17. Evaluation of Detector-to-Detector and Mirror Side Differences for Terra MODIS Reflective Solar Bands Using Simultaneous MISR Observations

    NASA Technical Reports Server (NTRS)

    Wu, Aisheng; Xiong, Xiaoxiong; Angal, A.; Barnes, W.

    2011-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) is one of the five Earth-observing instruments on-board the National Aeronautics and Space Administration (NASA) Earth-Observing System (EOS) Terra spacecraft, launched in December 1999. It has 36 spectral bands with wavelengths ranging from 0.41 to 14.4 μm and collects data at three nadir spatial resolutions: 0.25 km for 2 bands with 40 detectors each, 0.5 km for 5 bands with 20 detectors each, and 1 km for the remaining 29 bands with 10 detectors each. MODIS bands are located on four separate focal plane assemblies (FPAs) according to their spectral wavelengths and aligned in the cross-track direction. Detectors of each spectral band are aligned in the along-track direction. MODIS makes observations using a two-sided paddle-wheel scan mirror. Its on-board calibrators (OBCs) for the reflective solar bands (RSBs) include a solar diffuser (SD), a solar diffuser stability monitor (SDSM) and a spectral-radiometric calibration assembly (SRCA). Calibration is performed for each band, detector, sub-sample (for sub-kilometer resolution bands) and mirror side. In this study, a ratio approach is applied to MODIS observed Earth scene reflectances to track the detector-to-detector and mirror side differences. Simultaneous observed reflectances from the Multi-angle Imaging Spectroradiometer (MISR), also onboard the Terra spacecraft, are used with MODIS observed reflectances in this ratio approach for four closely matched spectral bands. Results show that the detector-to-detector difference between two adjacent detectors within each spectral band is typically less than 0.2% and, depending on the wavelengths, the maximum difference among all detectors varies from 0.5% to 0.8%. The mirror side differences are found to be very small for all bands except for band 3 at 0.44 μm. This is the band with the shortest wavelength among the selected matching bands, showing a time-dependent increase in the mirror side difference. This study is part of the effort by the MODIS Characterization Support Team (MCST) to track the RSB on-orbit performance for MODIS Collection 5 data products. To support MCST efforts for future data re-processing, this analysis will be extended to include more spectral bands and temporal coverage.
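
    As a simplified illustration of the ratio idea (not the MCST implementation, and ignoring the MISR normalization step), per-detector relative responses for one band could be computed as:

        import numpy as np

        def detector_ratios(scene_reflectance):
            """scene_reflectance: array (n_detectors, n_frames) of co-located
            Earth-scene reflectances for one spectral band.  Returns each
            detector's mean response relative to the band (all-detector) average."""
            det_mean = np.nanmean(scene_reflectance, axis=1)
            return det_mean / np.nanmean(det_mean)

        # Difference between two adjacent detectors, in percent:
        # r = detector_ratios(refl); print(100.0 * (r[1] - r[0]))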

  18. Geometric flow control of shear bands by suppression of viscous sliding

    PubMed Central

    Viswanathan, Koushik; Mahato, Anirban; Sundaram, Narayan K.; M'Saoubi, Rachid; Trumble, Kevin P.; Chandrasekar, Srinivasan

    2016-01-01

    Shear banding is a plastic flow instability with highly undesirable consequences for metals processing. While band characteristics have been well studied, general methods to control shear bands are presently lacking. Here, we use high-speed imaging and micro-marker analysis of flow in cutting to reveal the common fundamental mechanism underlying shear banding in metals. The flow unfolds in two distinct phases: an initiation phase followed by a viscous sliding phase in which most of the straining occurs. We show that the second sliding phase is well described by a simple model of two identical fluids being sheared across their interface. The equivalent shear band viscosity computed by fitting the model to experimental displacement profiles is very close in value to typical liquid metal viscosities. The observation of similar displacement profiles across different metals shows that specific microstructure details do not affect the second phase. This also suggests that the principal role of the initiation phase is to generate a weak interface that is susceptible to localized deformation. Importantly, by constraining the sliding phase, we demonstrate a material-agnostic method—passive geometric flow control—that effects complete band suppression in systems which otherwise fail via shear banding. PMID:27616920

  19. Geometric flow control of shear bands by suppression of viscous sliding

    NASA Astrophysics Data System (ADS)

    Sagapuram, Dinakar; Viswanathan, Koushik; Mahato, Anirban; Sundaram, Narayan K.; M'Saoubi, Rachid; Trumble, Kevin P.; Chandrasekar, Srinivasan

    2016-08-01

    Shear banding is a plastic flow instability with highly undesirable consequences for metals processing. While band characteristics have been well studied, general methods to control shear bands are presently lacking. Here, we use high-speed imaging and micro-marker analysis of flow in cutting to reveal the common fundamental mechanism underlying shear banding in metals. The flow unfolds in two distinct phases: an initiation phase followed by a viscous sliding phase in which most of the straining occurs. We show that the second sliding phase is well described by a simple model of two identical fluids being sheared across their interface. The equivalent shear band viscosity computed by fitting the model to experimental displacement profiles is very close in value to typical liquid metal viscosities. The observation of similar displacement profiles across different metals shows that specific microstructure details do not affect the second phase. This also suggests that the principal role of the initiation phase is to generate a weak interface that is susceptible to localized deformation. Importantly, by constraining the sliding phase, we demonstrate a material-agnostic method-passive geometric flow control-that effects complete band suppression in systems which otherwise fail via shear banding.

  20. Hyperspectral image classification based on local binary patterns and PCANet

    NASA Astrophysics Data System (ADS)

    Yang, Huizhen; Gao, Feng; Dong, Junyu; Yang, Yang

    2018-04-01

    Hyperspectral image classification has been well acknowledged as one of the challenging tasks of hyperspectral data processing. In this paper, we propose a novel hyperspectral image classification framework based on local binary pattern (LBP) features and PCANet. In the proposed method, linear prediction error (LPE) is first employed to select a subset of informative bands, and LBP is utilized to extract texture features. Then, spectral and texture features are stacked into a high-dimensional vector. Next, the extracted features at a specified position are transformed into a 2-D image. The resulting images of all pixels are fed into PCANet for classification. Experimental results on a real hyperspectral dataset demonstrate the effectiveness of the proposed method.
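
    A minimal sketch of the band-selection-plus-LBP feature step, assuming an H x W x B reflectance cube and a precomputed list of informative band indices; the PCANet stage and LPE band selection are omitted, and the names are placeholders:

        import numpy as np
        from skimage.feature import local_binary_pattern

        def lbp_spectral_features(cube, band_idx, P=8, R=1.0):
            """Stack per-pixel spectra (selected bands) with LBP texture codes
            computed on each selected band; returns a (pixels x features) matrix."""
            spectral = cube[:, :, band_idx]                              # H x W x k
            texture = np.stack(
                [local_binary_pattern(cube[:, :, b], P, R, method="uniform")
                 for b in band_idx], axis=-1)                            # H x W x k
            feats = np.concatenate([spectral, texture], axis=-1)         # H x W x 2k
            return feats.reshape(-1, feats.shape[-1])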

  1. Coherent X-Ray Imaging of Collagen Fibril Distributions within Intact Tendons

    PubMed Central

    Berenguer, Felisa; Bean, Richard J.; Bozec, Laurent; Vila-Comamala, Joan; Zhang, Fucai; Kewish, Cameron M.; Bunk, Oliver; Rodenburg, John M.; Robinson, Ian K.

    2014-01-01

    The characterization of the structure of highly hierarchical biosamples such as collagen-based tissues at the scale of tens of nanometers is essential to correlate the tissue structure with its growth processes. Coherent x-ray Bragg ptychography is an innovative imaging technique that gives high-resolution images of the ordered parts of such samples. Herein, we report how we used this method to image the collagen fibrillar ultrastructure of intact rat tail tendons. The images show ordered fibrils extending over 10–20 μm in length, with a quantifiable D-banding spacing variation of 0.2%. Occasional defects in the fibril distribution have also been observed, likely indicating fibrillar fusion events. PMID:24461021

  2. Research on Synthetic Aperture Radar Processing for the Spaceborne Sliding Spotlight Mode.

    PubMed

    Shen, Shijian; Nie, Xin; Zhang, Xinggan

    2018-02-03

    Gaofen-3 (GF-3) is China's first C-band multi-polarization synthetic aperture radar (SAR) satellite, and it also provides a sliding spotlight mode for the first time. Sliding spotlight is a novel imaging mode that achieves both high resolution and wide swath. Several key technologies for the sliding spotlight mode in high-resolution spaceborne SAR are investigated in this paper, mainly including the imaging parameters, the methods of velocity estimation and ambiguity elimination, and the imaging algorithms. Based on the chosen Convolution BackProjection (CBP) and Polar Format Algorithm (PFA) imaging algorithms, a fast implementation method of CBP and a modified PFA method suitable for sliding spotlight mode are proposed, and the processing flows are derived in detail. Finally, the algorithms are validated with simulations and measured data.
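
    For orientation only, here is a heavily simplified time-domain back-projection sketch; it is not the paper's fast CBP or modified PFA, and it assumes range-compressed pulses, known platform positions, and a stop-and-go geometry:

        import numpy as np

        def backproject(rc_pulses, plat_pos, slant_ranges, wavelength, grid_xyz):
            """rc_pulses: (n_pulses, n_range) complex range-compressed data;
            plat_pos: (n_pulses, 3) antenna positions; slant_ranges: (n_range,)
            increasing range axis of rc_pulses; grid_xyz: (n_pix, 3) image grid."""
            image = np.zeros(grid_xyz.shape[0], dtype=complex)
            for pulse, pos in zip(rc_pulses, plat_pos):
                r = np.linalg.norm(grid_xyz - pos, axis=1)      # pixel-to-antenna range
                re = np.interp(r, slant_ranges, pulse.real, left=0.0, right=0.0)
                im = np.interp(r, slant_ranges, pulse.imag, left=0.0, right=0.0)
                # compensate the two-way propagation phase and accumulate
                image += (re + 1j * im) * np.exp(4j * np.pi * r / wavelength)
            return image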

  3. Jupiter's Bands of Clouds

    NASA Image and Video Library

    2017-06-22

    This enhanced-color image of Jupiter's bands of light and dark clouds was created by citizen scientists Gerald Eichstädt and Seán Doran using data from the JunoCam imager on NASA's Juno spacecraft. Three of the white oval storms known as the "String of Pearls" are visible near the top of the image. Each of the alternating light and dark atmospheric bands in this image is wider than Earth, and each rages around Jupiter at hundreds of miles (kilometers) per hour. The lighter areas are regions where gas is rising, and the darker bands are regions where gas is sinking. Juno acquired the image on May 19, 2017, at 11:30 a.m. PST (2:30 p.m. EST) from an altitude of about 20,800 miles (33,400 kilometers) above Jupiter's cloud tops. https://photojournal.jpl.nasa.gov/catalog/PIA21393

  4. Eyecup scope—optical recordings of light stimulus-evoked fluorescence signals in the retina

    PubMed Central

    Hausselt, Susanne E.; Breuninger, Tobias; Castell, Xavier; Denk, Winfried; Margolis, David J.; Detwiler, Peter B.

    2009-01-01

    Dendritic signals play an essential role in processing visual information in the retina. To study them in neurites too small for electrical recording, we developed an instrument that combines a multi-photon (MP) microscope with a through-the-objective high-resolution visual stimulator. An upright microscope was designed that uses the objective lens for both MP imaging and delivery of visual stimuli to functionally intact retinal explants or eyecup preparations. The stimulator consists of a miniature liquid-crystal-on-silicon display coupled into the optical path of an infrared-excitation laser-scanning microscope. A pair of custom-made dichroic filters allows light from the excitation laser and three spectral bands (‘colors’) from the stimulator to reach the retina, leaving two intermediate bands for fluorescence imaging. Special optics allow displacement of the stimulator focus relative to the imaging focus. Spatially resolved changes in calcium-indicator fluorescence in response to visual stimuli were recorded in dendrites of different types of mammalian retinal neurons. PMID:19023590

  5. Low Latency DESDynI Data Products for Disaster Response, Resource Management and Other Applications

    NASA Technical Reports Server (NTRS)

    Doubleday, Joshua R.; Chien, Steve A.; Lou, Yunling

    2011-01-01

    We are developing onboard processor technology targeted at the L-band SAR instrument on the planned DESDynI mission to enable formation of SAR images onboard, opening possibilities for near-real-time data products that augment the full data streams. Several image processing and interpretation techniques are being explored as possible direct-broadcast products for agencies that need low-latency data and are responsible for disaster mitigation and assessment, resource management, agricultural development, shipping, etc. Data collected with UAVSAR (L-band) serve as a surrogate for the future DESDynI instrument. We have explored surface-water extent as a tool for flood response, and disturbance images based on the polarimetric backscatter of repeat-pass imagery, potentially useful for structural collapse (earthquakes) and mud/land/debris slides. We have also explored building vegetation and snow/ice classifiers via support vector machines utilizing quad-pol backscatter, cross-pol phase, and a number of derivatives (radar vegetation index, dielectric estimates, etc.). We share our qualitative and quantitative results thus far.
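
    As a hedged illustration of two of the products mentioned (not the authors' actual algorithms), a surface-water mask from low backscatter and the radar vegetation index on calibrated quad-pol data might look like this; the threshold is scene-dependent and purely illustrative:

        import numpy as np

        def surface_water_mask(sigma0_hh_db, threshold_db=-18.0):
            """Flag pixels whose HH backscatter (in dB) falls below a threshold;
            smooth open water is a weak scatterer at L-band."""
            return sigma0_hh_db < threshold_db

        def radar_vegetation_index(hh, hv, vv):
            """RVI = 8*HV / (HH + VV + 2*HV), computed on linear-power backscatter."""
            return 8.0 * hv / (hh + vv + 2.0 * hv)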

  6. Fifteen Years of ASTER Data on NASA's Terra Platform

    NASA Astrophysics Data System (ADS)

    Abrams, M.; Tsu, H.

    2014-12-01

    The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is one of five instruments operating on NASA's Terra platform. Launched in 1999, ASTER has been acquiring data for 15 years. ASTER is a joint project between Japan's Ministry of Economy, Trade and Industry and NASA. Data processing and distribution are done by both organizations; a joint science team helps to define mission priorities. ASTER acquires ~550 images per day, with a 60 km swath width. A daytime acquisition consists of three visible bands and a backward-looking stereo band with 15 m resolution, six SWIR bands with 30 m resolution, and five TIR bands with 90 m resolution. Nighttime TIR-only data are routinely collected. The stereo capability has allowed the ASTER project to produce a global Digital Elevation Model (GDEM) data set, covering the earth's land surfaces from 83 degrees north to 83 degrees south, with 30 m data postings. This is the only (near-) global DEM available to all users at no charge; to date, over 28 million 1-by-1 degree DEM tiles have been distributed. As a general-purpose imaging instrument, ASTER acquires data used in numerous scientific disciplines, including land use/land cover, urban monitoring, urban heat island studies, wetlands studies, agriculture monitoring, and forestry. Of particular emphasis has been the acquisition and analysis of data for natural hazard and disaster applications. We have been systematically acquiring images of 15,000 valley glaciers through the USGS Global Land Ice Monitoring from Space Project. The recently published Randolph Glacier Inventory and the GLIMS book both relied heavily on ASTER data as the basis for glaciological and climatological studies. The ASTER Volcano Archive is a unique on-line archive of thousands of daytime and nighttime ASTER images of ~1500 active volcanoes, along with a growing archive of Landsat images. ASTER was scheduled to target active volcanoes at least 4 times per year, and more frequently for select volcanoes (like Mt. Etna and Hawaii). A separate processing and distribution system is operational in the US to allow rapid scheduling, acquisition, and distribution of ASTER data for natural hazards and disasters, such as forest fires, tornadoes, tsunamis, earthquakes, and floods. We work closely with other government agencies to provide this service.

  7. Image quality measures to assess hyperspectral compression techniques

    NASA Astrophysics Data System (ADS)

    Lurie, Joan B.; Evans, Bruce W.; Ringer, Brian; Yeates, Mathew

    1994-12-01

    The term 'multispectral' is used to describe imagery with anywhere from three to about 20 bands of data. The images acquired by Landsat and similar earth sensing satellites, including the French Spot platform, are typical examples of multispectral data sets. Applications range from crop observation and yield estimation, to forestry, to sensing of the environment. The wave bands typically range from the visible to the thermal infrared and are fractions of a micron wide. They may or may not be contiguous. Thus each pixel will have several spectral intensities associated with it, but detailed spectra are not obtained. The term 'hyperspectral' is typically used for spectral data encompassing hundreds of samples of a spectrum. Hyperspectral electro-optical sensors typically operate in the visible and near-infrared bands. Their characteristic property is the ability to resolve a large number (typically hundreds) of contiguous spectral bands, thus producing a detailed profile of the electromagnetic spectrum. Like multispectral sensors, recently developed hyperspectral sensors are often also imaging sensors, measuring spectra over a two-dimensional spatial array of picture elements, or pixels. The resulting data is thus inherently three dimensional - an array of samples in which two dimensions correspond to spatial position and the third to wavelength. The data sets, commonly referred to as image cubes or datacubes (although technically they are often rectangular solids), are very rich in information but quickly become unwieldy in size, generating formidable torrents of data. Both spaceborne and airborne hyperspectral cameras exist and are in use today. The data is unique in its ability to provide high spatial and spectral resolution simultaneously, and shows great promise in both military and civilian applications. A data analysis system has been built at TRW under a series of Internal Research and Development projects. This development has been prompted by the business opportunities, by the series of instruments built here, and by the availability of data from other instruments. The products of the processing system have been used to process data produced by TRW sensors and other instruments. Figure 1 provides an overview of the TRW hyperspectral collection, data handling, and exploitation capability. The Analysis and Exploitation functions deal with the digitized image cubes. The analysis system was designed to handle various types of data, but the emphasis was on the data acquired by the TRW instruments.

  8. The ASTRODEEP Frontier Fields catalogues. I. Multiwavelength photometry of Abell-2744 and MACS-J0416

    NASA Astrophysics Data System (ADS)

    Merlin, E.; Amorín, R.; Castellano, M.; Fontana, A.; Buitrago, F.; Dunlop, J. S.; Elbaz, D.; Boucaud, A.; Bourne, N.; Boutsia, K.; Brammer, G.; Bruce, V. A.; Capak, P.; Cappelluti, N.; Ciesla, L.; Comastri, A.; Cullen, F.; Derriere, S.; Faber, S. M.; Ferguson, H. C.; Giallongo, E.; Grazian, A.; Lotz, J.; Michałowski, M. J.; Paris, D.; Pentericci, L.; Pilo, S.; Santini, P.; Schreiber, C.; Shu, X.; Wang, T.

    2016-05-01

    Context. The Frontier Fields survey is a pioneering observational program aimed at collecting photometric data, both from space (Hubble Space Telescope and Spitzer Space Telescope) and from ground-based facilities (VLT Hawk-I), for six deep fields pointing at clusters of galaxies and six nearby deep parallel fields, in a wide range of passbands. The analysis of these data is a natural outcome of the Astrodeep project, an EU collaboration aimed at developing methods and tools for extragalactic photometry and creating valuable public photometric catalogues. Aims: We produce multiwavelength photometric catalogues (from B to 4.5 μm) for the first two of the Frontier Fields, Abell-2744 and MACS-J0416 (plus their parallel fields). Methods: To detect faint sources even in the central regions of the clusters, we develop a robust and repeatable procedure that uses the public codes Galapagos and Galfit to model and remove most of the light contribution from both the brightest cluster members and the intra-cluster light. We perform the detection on the processed HST H160 image to obtain a pure H-selected sample, which is the primary catalogue that we publish. We also add a sample of sources which are undetected in the H160 image but appear on a stacked infrared image. Photometry on the other HST bands is obtained using SExtractor, again on processed images after the procedure for foreground light removal. Photometry on the Hawk-I and IRAC bands is obtained using our PSF-matching deconfusion code t-phot. A similar procedure, but without the need for the foreground light removal, is adopted for the Parallel fields. Results: The procedure of foreground light subtraction allows for the detection and the photometric measurements of ~2500 sources per field. We deliver and release complete photometric H-detected catalogues, with the addition of the complementary sample of infrared-detected sources. All objects have multiwavelength coverage including B to H HST bands, plus K-band from Hawk-I, and 3.6-4.5 μm from Spitzer. A full and detailed treatment of photometric errors is included. We perform basic sanity checks on the reliability of our results. Conclusions: The multiwavelength photometric catalogues are available publicly and are ready to be used for scientific purposes. Our procedure allows for the detection of outshone objects near the bright galaxies, which, coupled with the magnification effect of the clusters, can reveal extremely faint high-redshift sources. A full analysis of photometric redshifts is presented in Paper II. The catalogues, together with the final processed images for all HST bands (as well as some diagnostic data and images), are publicly available and can be downloaded from the Astrodeep website at http://www.astrodeep.eu/frontier-fields/ and from a dedicated CDS webpage (http://astrodeep.u-strasbg.fr/ff/index.html). The catalogues are also available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/590/A31

  9. TESTS OF LOW-FREQUENCY GEOMETRIC DISTORTIONS IN LANDSAT 4 IMAGES.

    USGS Publications Warehouse

    Batson, R.M.; Borgeson, W.T.; ,

    1985-01-01

    Tests were performed to investigate the geometric characteristics of Landsat 4 images. The first set of tests was designed to determine the extent of image distortion caused by the physical process of writing the Landsat 4 images on film. The second was designed to characterize the geometric accuracies inherent in the digital images themselves. Test materials consisted of film images of test targets generated by the Laser Beam Recorders at Sioux Falls, the Optronics* Photowrite film writer at Goddard Space Flight Center, and digital image files of a strip 600 lines deep across the full width of band 5 of the Washington, D. C. Thematic Mapper scene. The tests were made by least-squares adjustment of an array of measured image points to a corresponding array of control points.

  10. TM digital image products for applications. [computer compatible tapes

    NASA Technical Reports Server (NTRS)

    Barker, J. L.; Gunther, F. J.; Abrams, R. B.; Ball, D.

    1984-01-01

    The image characteristics of digital data generated by the LANDSAT 4 thematic mapper (TM) are discussed. Digital data from the TM reside in tape files at various stages of image processing. Within each image data file, the image lines are blocked by a factor of either 5 for a computer-compatible tape (CCT-BT) or 4 for a CCT-AT and CCT-PT; each of these formats organizes the image file differently. Nominal geometric corrections, which provide proper geodetic relationships between different parts of the image, are available only for the CCT-PT. It is concluded that detector 3 of band 5 on the TM does not respond; this channel of data needs replacement. The empty bin phenomenon in CCT-AT images results from integer truncations of mixed-mode arithmetic operations.

  11. Research on hyperspectral dynamic scene and image sequence simulation

    NASA Astrophysics Data System (ADS)

    Sun, Dandan; Liu, Fang; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei

    2016-10-01

    This paper presents a simulation method for hyperspectral dynamic scenes and image sequences, intended for hyperspectral equipment evaluation and target detection algorithm development. Because of its high spectral resolution, strong band continuity, anti-interference capability, and other advantages, hyperspectral imaging technology has developed rapidly in recent years and is widely used in many areas such as optoelectronic target detection, military defense, and remote sensing systems. Digital imaging simulation, as a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyperspectral imaging equipment with lower development cost and a shorter development period. Meanwhile, visual simulation can produce large amounts of original image data under various conditions for hyperspectral image feature extraction and classification algorithms. Based on a radiation physics model and material characteristic parameters, this paper proposes a method for generating digital scenes. By building multiple sensor models for different bands and different bandwidths, hyperspectral scenes in the visible, MWIR, and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm, and 0.1 μm, have been simulated in this paper. The final dynamic scenes are realistic and render in real time, at frame rates up to 100 Hz. By saving all of the scene gray data from the same viewpoint, an image sequence is obtained. The analysis results show that, in both the infrared and visible bands, the grayscale variations of the simulated hyperspectral images are consistent with the theoretical analysis.
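
    As an illustration of the multi-band sensor-model step, here is a minimal sketch that integrates a high-resolution scene spectrum over an assumed Gaussian spectral response; the band centers and widths are placeholders, not the paper's sensor models:

        import numpy as np

        def band_integrate(wl_um, spectrum, center_um, fwhm_um):
            """Simulate one sensor band: weight a high-resolution spectral radiance
            by a Gaussian spectral response and integrate over wavelength."""
            sigma = fwhm_um / 2.3548                      # FWHM -> standard deviation
            resp = np.exp(-0.5 * ((wl_um - center_um) / sigma) ** 2)
            return np.trapz(spectrum * resp, wl_um) / np.trapz(resp, wl_um)

        # e.g. an illustrative 0.05 um-wide MWIR band centred at 4.0 um:
        # L_band = band_integrate(wl, scene_radiance, 4.0, 0.05)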

  12. VIP: Vortex Image Processing Package for High-contrast Direct Imaging

    NASA Astrophysics Data System (ADS)

    Gomez Gonzalez, Carlos Alberto; Wertz, Olivier; Absil, Olivier; Christiaens, Valentin; Defrère, Denis; Mawet, Dimitri; Milli, Julien; Absil, Pierre-Antoine; Van Droogenbroeck, Marc; Cantalloube, Faustine; Hinz, Philip M.; Skemer, Andrew J.; Karlsson, Mikael; Surdej, Jean

    2017-07-01

    We present the Vortex Image Processing (VIP) library, a python package dedicated to astronomical high-contrast imaging. Our package relies on the extensive python stack of scientific libraries and aims to provide a flexible framework for high-contrast data and image processing. In this paper, we describe the capabilities of VIP related to processing image sequences acquired using the angular differential imaging (ADI) observing technique. VIP implements functionalities for building high-contrast data processing pipelines, encompassing pre- and post-processing algorithms, potential source position and flux estimation, and sensitivity curve generation. Among the reference point-spread function subtraction techniques for ADI post-processing, VIP includes several flavors of principal component analysis (PCA) based algorithms, such as annular PCA and incremental PCA algorithms capable of processing big datacubes (of several gigabytes) on a computer with limited memory. Also, we present a novel ADI algorithm based on non-negative matrix factorization, which comes from the same family of low-rank matrix approximations as PCA and provides fairly similar results. We showcase the ADI capabilities of the VIP library using a deep sequence on HR 8799 taken with the LBTI/LMIRCam and its recently commissioned L-band vortex coronagraph. Using VIP, we investigated the presence of additional companions around HR 8799 and did not find any significant additional point source beyond the four known planets. VIP is available at http://github.com/vortex-exoplanet/VIP and is accompanied by Jupyter notebook tutorials illustrating the main functionalities of the library.
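
    Although this is not the VIP API itself, the full-frame PCA flavor of ADI post-processing can be sketched generically as follows (the derotation sign convention depends on the instrument, and all names are placeholders):

        import numpy as np
        from scipy.ndimage import rotate

        def pca_adi(cube, parallactic_angles, n_comp=5):
            """cube: (n_frames, ny, nx) ADI sequence.  Subtract an n_comp-component
            PCA model of the stellar PSF, derotate each residual to north-up,
            and median-combine the frames."""
            n, ny, nx = cube.shape
            flat = cube.reshape(n, -1)
            centered = flat - flat.mean(axis=0)
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            basis = vt[:n_comp]                                   # principal components
            residual = centered - centered @ basis.T @ basis      # PSF-subtracted frames
            derot = [rotate(residual[i].reshape(ny, nx), -parallactic_angles[i],
                            reshape=False, order=1) for i in range(n)]
            return np.median(derot, axis=0)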

  13. ASTER Images the Island of Hawaii

    NASA Image and Video Library

    2000-04-26

    These images of the Island of Hawaii were acquired on March 19, 2000 by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on NASA's Terra satellite. With its 14 spectral bands from the visible to the thermal infrared wavelength region, and its high spatial resolution of 15 to 90 meters (about 50 to 300 feet), ASTER will image Earth for the next 6 years to map and monitor the changing surface of our planet. Data are shown from the short wavelength and thermal infrared spectral regions, illustrating how different and complementary information is contained in different parts of the spectrum. Left image: This false-color image covers an area 60 kilometers (37 miles) wide and 120 kilometers (75 miles) long in three bands of the short wavelength infrared region. While much of the island was covered in clouds, the dominant central Mauna Loa volcano, rising to an altitude of 4115 meters (13,500 feet), is cloud-free. Lava flows can be seen radiating from the central crater in green and black tones. As they reach lower elevations, the flows become covered with vegetation, and their image color changes to yellow and orange. Mauna Kea volcano to the north of Mauna Loa has a thin cloud cover, producing a bluish tone on the image. The ocean in the lower right appears brown due to the color processing. Right image: This image is a false-color composite of three thermal infrared bands. The brightness of the colors is proportional to the temperature, and the hues display differences in rock composition. Clouds are black, because they are the coldest objects in the scene. The ocean and thick vegetation appear dark green because they are colder than bare rock surfaces, and have no thermal spectral features. Lava flows are shades of magenta, green, pink and yellow, reflecting chemical changes due to weathering and relative age differences. http://photojournal.jpl.nasa.gov/catalog/PIA02604

  14. Space Radar Image of Chernobyl

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is an image of the Chernobyl nuclear power plant and its surroundings, centered at 51.17 north latitude and 30.15 east longitude. The image was acquired by the Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar aboard the space shuttle Endeavour on its 16th orbit on October 1, 1994. The area is located on the northern border of Ukraine, and the image was produced using the L-band (horizontally transmitted and received) polarization. The differences in the intensity are due to differences in vegetation cover, with brighter areas being indicative of more vegetation. These data were acquired as part of a collaboration between NASA and the National Space Agency of Ukraine in Remote Sensing and Earth Sciences. NASA has included several sites provided by the Ukrainian space agency as targets of opportunity during the second flight of SIR-C/X-SAR. The Ukrainian space agency also plans to conduct airborne surveys of these sites during the mission. The Chernobyl nuclear power plant is located toward the top of the image near the Pripyat River. The 12-kilometer (7.44-mile)-long cooling pond is easily distinguishable as an elongated dark shape in the center near the top of the image. The reactor complex is visible as the bright area to the extreme left of the cooling pond, and the city of Chernobyl is the bright area just below the cooling pond next to the Pripyat River. The large dark area in the bottom right of the image is the Kiev Reservoir just north of Kiev. Also visible is the Dnieper River, which feeds into the Kiev Reservoir from the top of the image. The Soviet government evacuated 116,000 people within 30 kilometers (18.6 miles) of the Chernobyl reactor after the explosion and fire on April 26, 1986. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V. (DLR), the major partner in science, operations and data processing of X-SAR.

  15. The advanced linked extended reconnaissance and targeting technology demonstration project

    NASA Astrophysics Data System (ADS)

    Cruickshank, James; de Villers, Yves; Maheux, Jean; Edwards, Mark; Gains, David; Rea, Terry; Banbury, Simon; Gauthier, Michelle

    2007-06-01

    The Advanced Linked Extended Reconnaissance & Targeting (ALERT) Technology Demonstration (TD) project is addressing key operational needs of the future Canadian Army's Surveillance and Reconnaissance forces by fusing multi-sensor and tactical data, developing automated processes, and integrating beyond line-of-sight sensing. We discuss concepts for displaying and fusing multi-sensor and tactical data within an Enhanced Operator Control Station (EOCS). The sensor data can originate from the Coyote's own visible-band and IR cameras, laser rangefinder, and ground-surveillance radar, as well as beyond line-of-sight systems such as a mini-UAV and unattended ground sensors. The authors address technical issues associated with the use of fully digital IR and day video cameras and discuss video-rate image processing developed to assist the operator to recognize poorly visible targets. Automatic target detection and recognition algorithms processing both IR and visible-band images have been investigated to draw the operator's attention to possible targets. The machine generated information display requirements are presented with the human factors engineering aspects of the user interface in this complex environment, with a view to establishing user trust in the automation. The paper concludes with a summary of achievements to date and steps to project completion.

  16. Noise-Coupled Image Rejection Architecture of Complex Bandpass ΔΣAD Modulator

    NASA Astrophysics Data System (ADS)

    San, Hao; Kobayashi, Haruo

    This paper proposes a new realization technique for the image rejection function using a noise-coupling architecture, applied to a complex bandpass ΔΣAD modulator. The complex bandpass ΔΣAD modulator processes just the input I and Q signals, not image signals, and the AD conversion can be realized with low power dissipation. It realizes an asymmetric noise-shaped spectrum, which is desirable for such low-IF receiver applications. However, the performance of the complex bandpass ΔΣAD modulator suffers from mismatch between the internal analog I and Q paths. I/Q path mismatch causes an image signal, and the quantization noise of the mirror image band aliases into the desired signal band, which degrades the SQNDR (Signal to Quantization Noise and Distortion Ratio) of the modulator. In our proposed modulator architecture, an extra notch for image rejection is realized by a noise-coupled topology. We just add some passive capacitors and switches to the modulator; the additional integrator circuit composed of an operational amplifier in the conventional image-rejection realization is not necessary. Therefore, the performance of the complex modulator can be raised effectively without additional power dissipation. We have performed simulations with MATLAB to confirm the validity of the proposed architecture. The simulation results show that the proposed architecture achieves image rejection effectively and improves the SQNDR of the complex bandpass ΔΣAD modulator.

  17. Application of Remote Sensing in Geological Mapping, Case Study al Maghrabah Area - Hajjah Region, Yemen

    NASA Astrophysics Data System (ADS)

    Al-Nahmi, F.; Saddiqi, O.; Hilali, A.; Rhinane, H.; Baidder, L.; El arabi, H.; Khanbari, K.

    2017-11-01

    Remote sensing technology today plays an important role in geological surveying, mapping, analysis, and interpretation, providing a unique opportunity to investigate the geological characteristics of remote areas of the earth's surface without the need to gain access to the area on the ground. The aim of this study is to produce a geological map of the study area. The data used are Sentinel-2 imagery. The processing methods used in this study are: the Optimum Index Factor (OIF), a statistical value that can be used to select the optimum combination of three bands in a satellite image, based on the total variance within bands and the correlation coefficients between bands; Independent Component Analysis (ICA; 3, 4, 6), a statistical and computational technique for revealing hidden factors that underlie sets of random variables, measurements, or signals; and the Minimum Noise Fraction (MNF; 1, 2, 3), which is used to determine the inherent dimensionality of image data, to segregate noise in the data, and to reduce the computational requirements for subsequent processing. The Optimum Index Factor is a good method for choosing the best band combination for lithological mapping, and ICA and MNF are also practical ways to extract structural geology maps. The results in this paper indicate that the study area can be divided into four main geological units: basement rocks (metavolcanics, metasediments), sedimentary rocks, intrusive rocks, and volcanic rocks. The method used in this study offers great potential for lithological mapping using Sentinel-2 imagery; the results were compared with existing geologic maps, were found to be superior, and could be used to update the existing maps.
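
    For illustration, a minimal sketch of the OIF statistic described above, assuming the Sentinel-2 bands are already loaded as equally sized 2-D arrays (variable names are placeholders):

        import numpy as np
        from itertools import combinations

        def optimum_index_factor(three_bands):
            """three_bands: sequence of three 2-D arrays (one per spectral band).
            OIF = sum of the three band standard deviations divided by the sum of
            the absolute correlation coefficients between the three band pairs."""
            flat = [b.ravel().astype(float) for b in three_bands]
            std_sum = sum(np.std(b) for b in flat)
            corr_sum = sum(abs(np.corrcoef(flat[i], flat[j])[0, 1])
                           for i, j in combinations(range(3), 2))
            return std_sum / corr_sum

        # Rank all 3-band combinations of a band stack (list of 2-D arrays):
        # best = max(combinations(range(len(stack)), 3),
        #            key=lambda idx: optimum_index_factor([stack[i] for i in idx]))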

  18. Analysis of the Electronic Crosstalk Effect in Terra MODIS Long-Wave Infrared Photovoltaic Bands Using Lunar Images

    NASA Technical Reports Server (NTRS)

    Wilson, Truman; Wu, Aisheng; Wang, Zhipeng; Xiong, Xiaoxiong

    2016-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) is one of the key sensors among the suite of remote sensing instruments on board the Earth Observing System Terra and Aqua spacecraft. For each MODIS spectral band, the sensor degradation has been measured using a set of on-board calibrators. MODIS also uses lunar observations from nearly monthly spacecraft maneuvers, which bring the Moon into view through the space-view port, helping to characterize the scan mirror degradation at a different angle of incidence. Throughout the Terra mission, contamination of the long-wave infrared photovoltaic band (LWIR PV, bands 27-30) signals has been observed in the form of electronic crosstalk, where signal from each of the detectors among the LWIR PV bands can leak to the other detectors, producing a false signal contribution. This contamination has had a noticeable effect on the MODIS science products since 2010 for band 27, and since 2012 for bands 28 and 29. Images of the Moon have been used effectively for determining the contaminating bands, and have also been used to derive correction coefficients for the crosstalk contamination. In this paper, we introduce an updated technique for characterizing the crosstalk contamination among the LWIR PV bands using data from lunar calibration events. This approach takes into account both the in-band and out-of-band contributions to the signal contamination for each detector in bands 27-30, which is not considered in previous works. The crosstalk coefficients can be derived for each lunar calibration event, providing the time dependence of the crosstalk contamination. Application of these coefficients to Earth-view image data results in a significant reduction in image contamination and a correction of the scene radiance for bands 27-30. Also, this correction shows a significant improvement to certain threshold tests in the MODIS Level-2 Cloud Mask. In this paper, we will detail the methodology used to identify and correct the crosstalk contamination for the LWIR PV bands in Terra MODIS. The derived time-dependent crosstalk coefficients will also be discussed. Finally, the impact of the correction on the downstream data products will be analyzed.
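
    As a rough illustration only (the actual MCST coefficients and their lunar-based derivation are not reproduced here), a linear crosstalk subtraction of sending-band contributions might look like this:

        import numpy as np

        def correct_crosstalk(dn, coeffs):
            """dn: dict band -> (n_detectors, n_frames) Earth-view counts for the
            LWIR PV bands 27-30.  coeffs[(recv, send)]: (n_detectors,) assumed leak
            fractions of the sending band's signal into each receiving detector."""
            corrected = {b: v.astype(float).copy() for b, v in dn.items()}
            for recv in (27, 28, 29, 30):
                for send in (27, 28, 29, 30):
                    if send == recv or (recv, send) not in coeffs:
                        continue
                    corrected[recv] -= coeffs[(recv, send)][:, None] * dn[send]
            return corrected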

  19. Evaluating the effect of remote sensing image spatial resolution on soil exchangeable potassium prediction models in smallholder farm settings.

    PubMed

    Xu, Yiming; Smith, Scot E; Grunwald, Sabine; Abd-Elrahman, Amr; Wani, Suhas P

    2017-09-15

    Major end users of Digital Soil Mapping (DSM), such as policy makers and agricultural extension workers, are faced with choosing the appropriate remote sensing data. The objective of this research is to analyze the spatial resolution effects of different remote sensing images on soil prediction models in two smallholder farms in Southern India, Kothapally (Telangana State) and Masuti (Karnataka State), and to provide empirical guidelines for choosing the appropriate remote sensing images in DSM. Bayesian kriging (BK) was utilized to characterize the spatial pattern of exchangeable potassium (Kex) in the topsoil (0-15 cm) at different spatial resolutions by incorporating spectral indices from Landsat 8 (30 m), RapidEye (5 m), and WorldView-2/GeoEye-1/Pleiades-1A images (2 m). Some spectral indices, such as band reflectances, band ratios, the Crust Index, and the Atmospherically Resistant Vegetation Index, from multiple images showed relatively strong correlations with soil Kex in the two study areas. The research also suggested that fine spatial resolution WorldView-2/GeoEye-1/Pleiades-1A-based and RapidEye-based soil prediction models would not necessarily have higher prediction performance than coarse spatial resolution Landsat 8-based soil prediction models. The end users of DSM in smallholder farm settings need to select the appropriate spectral indices and consider different factors such as the spatial resolution, band width, spectral resolution, temporal frequency, cost, and processing time of different remote sensing images. Overall, remote sensing-based Digital Soil Mapping has the potential to be promoted to smallholder farm settings all over the world and to help smallholder farmers implement sustainable and field-specific soil nutrient management schemes. Copyright © 2017 Elsevier Ltd. All rights reserved.
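
    A hedged sketch of one reported step, correlating simple spectral indices sampled at the soil plots with measured topsoil Kex; the band inputs and index list are placeholders, and the Bayesian kriging stage is omitted:

        import numpy as np
        from scipy.stats import pearsonr

        def index_correlations(red, nir, blue, k_ex):
            """red/nir/blue: band reflectances sampled at the soil plot locations;
            k_ex: measured exchangeable potassium at the same locations.
            Returns the Pearson correlation of each index with Kex."""
            indices = {
                "red/nir ratio": red / nir,
                "ARVI": (nir - (2 * red - blue)) / (nir + (2 * red - blue)),
                "Crust Index": 1.0 - (red - blue) / (red + blue),
            }
            return {name: pearsonr(vals, k_ex)[0] for name, vals in indices.items()}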

  20. Space Radar Image of Karisoke & Virunga Volcanoes

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a false-color composite of Central Africa, showing the Virunga volcano chain along the borders of Rwanda, Zaire and Uganda. This area is home to the endangered mountain gorillas. The image was acquired on October 3, 1994, on orbit 58 of the space shuttle Endeavour by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR). In this image red is the L-band (horizontally transmitted, vertically received) polarization; green is the C-band (horizontally transmitted and received) polarization; and blue is the C-band (horizontally transmitted and received) polarization. The area is centered at about 2.4 degrees south latitude and 30.8 degrees east longitude. The image covers an area 56 kilometers by 70 kilometers (35 miles by 43 miles). The dark area at the top of the image is Lake Kivu, which forms the border between Zaire (to the right) and Rwanda (to the left). In the center of the image is the steep cone of Nyiragongo volcano, rising 3,465 meters (11,369 feet) high, with its central crater now occupied by a lava lake. To the left are three volcanoes, Mount Karisimbi, rising 4,500 meters (14,800 feet) high; Mount Sabinyo, rising 3,600 meters (12,000 feet) high; and Mount Muhavura, rising 4,100 meters (13,500 feet) high. To their right is Nyamuragira volcano, which is 3,053 meters (10,017 feet) tall, with radiating lava flows dating from the 1950s to the late 1980s. These active volcanoes constitute a hazard to the towns of Goma, Zaire and the nearby Rwandan refugee camps, located on the shore of Lake Kivu at the top left. This radar image highlights subtle differences in the vegetation of the region. The green patch to the center left of the image in the foothills of Karisimbi is a bamboo forest where the mountain gorillas live. The vegetation types in this area are an important factor in the habitat of mountain gorillas. Researchers at Rutgers University in New Jersey and the Dian Fossey Gorilla Fund in London will use this data to produce vegetation maps of the area to aid in their studies of the last 650 mountain gorillas in the world. The faint lines above the bamboo forest are the result of agricultural terracing by the people who live in the region. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V. (DLR), the major partner in science, operations and data processing of X-SAR.
