Quality evaluation of pansharpened hyperspectral images generated using multispectral images
NASA Astrophysics Data System (ADS)
Matsuoka, Masayuki; Yoshioka, Hiroki
2012-11-01
Hyperspectral remote sensing can provide a smooth spectral curve of a target by using a set of detectors with higher spectral resolution. The spatial resolution of hyperspectral images, however, is generally much lower than that of multispectral images because less incident radiant energy is available per band. Pansharpening is an image-fusion technique that generates higher spatial resolution multispectral images by combining lower resolution multispectral images with higher resolution panchromatic images. In this study, higher resolution hyperspectral images were generated by pansharpening simulated lower resolution hyperspectral data with higher resolution multispectral data. The spectral and spatial qualities of the pansharpened images were then assessed in relation to the spectral bands of the multispectral images. Airborne hyperspectral data from AVIRIS were used and pansharpened with six methods. Quantitative evaluation of the pansharpened images was carried out using two frequently used indices, ERGAS and the Q index.
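A minimal sketch of the two quality indices named above (ERGAS and the Wang-Bovik Q index), assuming the reference and fused bands are co-registered NumPy arrays and that ratio is the spatial resolution ratio between the high- and low-resolution inputs; the function names and signatures are illustrative, not the authors' code.

    import numpy as np

    def ergas(reference, fused, ratio):
        """ERGAS = 100 * ratio * sqrt(mean_k(RMSE_k^2 / mean_k^2)) over bands."""
        terms = []
        for ref_band, fus_band in zip(reference, fused):
            rmse = np.sqrt(np.mean((ref_band.astype(float) - fus_band.astype(float)) ** 2))
            terms.append((rmse / np.mean(ref_band)) ** 2)
        return 100.0 * ratio * np.sqrt(np.mean(terms))

    def q_index(x, y):
        """Wang-Bovik universal image quality index for one band pair."""
        x, y = x.ravel().astype(float), y.ravel().astype(float)
        cov = np.cov(x, y)
        num = 4.0 * cov[0, 1] * x.mean() * y.mean()
        den = (cov[0, 0] + cov[1, 1]) * (x.mean() ** 2 + y.mean() ** 2)
        return num / den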
Reproducible high-resolution multispectral image acquisition in dermatology
NASA Astrophysics Data System (ADS)
Duliu, Alexandru; Gardiazabal, José; Lasser, Tobias; Navab, Nassir
2015-07-01
Multispectral image acquisitions are increasingly popular in dermatology due to their improved spectral resolution, which enables better tissue discrimination. Most applications, however, focus on restricted regions of interest, imaging only small lesions. In this work we present and discuss an imaging framework for high-resolution multispectral imaging of large regions of interest.
Feasibility study and quality assessment of unmanned aircraft system-derived multispectral images
NASA Astrophysics Data System (ADS)
Chang, Kuo-Jen
2017-04-01
The purpose of this study is to explore the precision and applicability of UAS-derived multispectral images. A Micro-MCA6 multispectral camera was mounted on a quadcopter; the Micro-MCA6 acquires the images of its six bands synchronously. Using geotagged images and control points, an orthomosaic of each single band was first generated at 14 cm resolution, and the six bands were then merged into a single multispectral image. To improve the spatial resolution, the six-band image was fused with a 9 cm resolution image taken by an RGB camera. The quality of each single band was verified using control points and check points; the standard deviations of the errors are within 1 to 2 pixels for each band. The quality of the multispectral image was also compared with a 3 cm resolution orthomosaic RGB image gathered by the UAV in the same mission, and the standard deviations of the errors are within 2 to 3 pixels. The results show that the errors arise from image blur and band misalignment at object edges. Finally, the normalized difference vegetation index (NDVI) was extracted from the image to explore vegetation condition and the nature of the environment. This study demonstrates the feasibility and capability of high resolution multispectral images.
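A minimal sketch of the NDVI step mentioned above, assuming the red and near-infrared orthomosaic bands are already co-registered arrays of equal shape; the band names and the small epsilon are assumptions, not values from the paper.

    import numpy as np

    def ndvi(red, nir, eps=1e-6):
        """Normalized difference vegetation index, values roughly in [-1, 1]."""
        red = red.astype(float)
        nir = nir.astype(float)
        return (nir - red) / (nir + red + eps)  # eps avoids division by zero over water/shadow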
The fusion of satellite and UAV data: simulation of high spatial resolution band
NASA Astrophysics Data System (ADS)
Jenerowicz, Agnieszka; Siok, Katarzyna; Woroszkiewicz, Malgorzata; Orych, Agata
2017-10-01
Remote sensing techniques for precision agriculture and farming that use imagery acquired with sensors mounted on UAV platforms have become more popular in the last few years due to the availability of low-cost UAV platforms and low-cost sensors. Data obtained from low altitudes with low-cost sensors can have high spatial and radiometric resolution but rather low spectral resolution; therefore the application of imagery obtained with such technology is quite limited and can be used only for basic land cover classification. To enrich the spectral resolution of imagery acquired with low-cost sensors from low altitudes, the authors propose the fusion of RGB data obtained from a UAV platform with multispectral satellite imagery. The fusion is based on pansharpening, which integrates the spatial details of a high-resolution panchromatic image with the spectral information of lower resolution multispectral or hyperspectral imagery to obtain multispectral or hyperspectral images with high spatial resolution. The key to pansharpening is to properly estimate the missing spatial details of the multispectral images while preserving their spectral properties. In this research, the authors present the fusion of RGB images (with high spatial resolution) obtained with sensors mounted on low-cost UAV platforms and multispectral satellite imagery, i.e. Landsat 8 OLI. To fuse the UAV data with the satellite imagery, a panchromatic band was first simulated from the RGB data as a linear combination of the spectral channels. Next, the Gram-Schmidt pansharpening method was applied to the simulated band and the multispectral satellite images. As a result of the fusion, the authors obtained several multispectral images with very high spatial resolution and then analysed the spatial and spectral accuracies of the processed images.
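A minimal sketch of the two steps described above: simulating a panchromatic band as a linear combination of the UAV RGB channels, and injecting its spatial detail into resampled multispectral bands with a simplified Gram-Schmidt-style scheme. The band weights, the mean-based synthetic pan, and the per-band gains are illustrative assumptions rather than the authors' exact formulation.

    import numpy as np

    def simulate_pan(rgb, weights=(0.30, 0.59, 0.11)):
        """Weighted linear combination of the R, G, B channels (weights assumed)."""
        r, g, b = (band.astype(float) for band in rgb)
        w = np.asarray(weights) / np.sum(weights)
        return w[0] * r + w[1] * g + w[2] * b

    def gs_like_fusion(ms_upsampled, pan):
        """ms_upsampled: (bands, H, W) already resampled to the pan grid."""
        pan = pan.astype(float)
        pan_sim = ms_upsampled.mean(axis=0)               # synthetic low-resolution pan
        detail = pan - pan_sim                            # spatial detail to inject
        fused = np.empty_like(ms_upsampled, dtype=float)
        for k, band in enumerate(ms_upsampled):
            gain = np.cov(band.ravel(), pan_sim.ravel())[0, 1] / np.var(pan_sim.ravel())
            fused[k] = band + gain * detail
        return fused

In the full Gram-Schmidt method the fused bands are obtained through a forward and inverse GS transform; the gain-injection form above is a common simplification of that idea.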
Resolution Enhancement of Hyperion Hyperspectral Data using Ikonos Multispectral Data
2007-09-01
spatial-resolution hyperspectral image to produce a sharpened product. The result is a product that has the spectral properties of the ...multispectral sensors. In this work, we examine the benefits of combining data from high-spatial-resolution, low-spectral-resolution spectral imaging ... sensors with data obtained from high-spectral-resolution, low-spatial-resolution spectral imaging sensors.
Effects of spatial resolution ratio in image fusion
Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M.
2008-01-01
In image fusion, the spatial resolution ratio can be defined as the ratio between the spatial resolution of the high-resolution panchromatic image and that of the low-resolution multispectral image. This paper attempts to assess the effects of the spatial resolution ratio of the input images on the quality of the fused image. Experimental results indicate that a spatial resolution ratio of 1:10 or higher is desired for optimal multisensor image fusion provided the input panchromatic image is not downsampled to a coarser resolution. Due to the synthetic pixels generated from resampling, the quality of the fused image decreases as the spatial resolution ratio decreases (e.g. from 1:10 to 1:30). However, even with a spatial resolution ratio as small as 1:30, the quality of the fused image is still better than the original multispectral image alone for feature interpretation. In cases where the spatial resolution ratio is too small (e.g. 1:30), to obtain better spectral integrity of the fused image, one may downsample the input high-resolution panchromatic image to a slightly lower resolution before fusing it with the multispectral image.
NASA Astrophysics Data System (ADS)
Wu, Yu; Zheng, Lijuan; Xie, Donghai; Zhong, Ruofei
2017-07-01
In this study, extended morphological attribute profiles (EAPs) and independent component analysis (ICA) were combined for feature extraction from high-resolution multispectral satellite remote sensing images, and the regularized least squares (RLS) approach with a radial basis function (RBF) kernel was then applied for classification. Geometrical features were extracted from the two major independent components using the EAPs method; three morphological attributes were computed for each independent component: area, standard deviation, and moment of inertia. The extracted geometrical features were classified using the RLS approach and the commonly used LIB-SVM implementation of the support vector machine method. Worldview-3 and Chinese GF-2 multispectral images were tested, and the results show that the features extracted by EAPs and ICA can effectively improve the accuracy of high-resolution multispectral image classification: accuracy was 2% higher than with EAPs and principal component analysis (PCA), and 6% higher than with APs on the original high-resolution multispectral data. The results also suggest that both the GURLS and LIB-SVM libraries are well suited for multispectral remote sensing image classification. The GURLS library is easy to use, with automatic parameter selection, but its computation time may be longer than that of the LIB-SVM library. This study should be helpful for classification applications of high-resolution multispectral satellite remote sensing images.
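A minimal sketch of the feature-extraction and classification pipeline described above, under several stated simplifications: only the area attribute is profiled (via area opening/closing, which are attribute filters), scikit-learn's FastICA and KernelRidge stand in for the ICA and RBF-kernel RLS steps (not GURLS), and all thresholds and parameters are assumed values.

    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.kernel_ridge import KernelRidge
    from skimage.morphology import area_opening, area_closing

    def area_profile(component, thresholds=(100, 500, 2000)):
        """Stack area openings and closings for one independent component."""
        layers = [component]
        for t in thresholds:
            layers.append(area_opening(component, area_threshold=t))
            layers.append(area_closing(component, area_threshold=t))
        return np.stack(layers)                       # (1 + 2*len(thresholds), H, W)

    def eap_ica_features(ms_image, n_components=2):
        """ms_image: (bands, H, W) -> feature stack built from the two leading ICs."""
        bands, h, w = ms_image.shape
        ica = FastICA(n_components=n_components, random_state=0)
        ics = ica.fit_transform(ms_image.reshape(bands, -1).T).T.reshape(n_components, h, w)
        return np.concatenate([area_profile(ic) for ic in ics])

    def rls_classify(features, train_mask, train_labels):
        """Kernel ridge on one-hot labels as an RLS stand-in.

        train_mask: boolean (H*W,) selecting training pixels;
        train_labels: integer labels for exactly those pixels.
        """
        X = features.reshape(features.shape[0], -1).T
        y = np.eye(train_labels.max() + 1)[train_labels]      # one-hot targets
        model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1).fit(X[train_mask], y)
        return model.predict(X).argmax(axis=1).reshape(features.shape[1:])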
Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images
Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki
2015-01-01
In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has long been a challenging problem and has attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to reconstruct a higher resolution (HR) image and thereby improve the performance of the UAV imaging system. The primary objective of this paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm combining directionally-adaptive constraints and a multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in terms of objective measures. PMID:26007744
Landsat 8 Multispectral and Pansharpened Imagery Processing on the Study of Civil Engineering Issues
NASA Astrophysics Data System (ADS)
Lazaridou, M. A.; Karagianni, A. Ch.
2016-06-01
Scientific and professional interests in civil engineering mainly include structures, hydraulics, geotechnical engineering, environment, and transportation issues. Topics in this context may concern urban environment issues, urban planning, hydrological modelling, the study of hazards, and road construction. Land cover information contributes significantly to the study of the above subjects. It can be acquired effectively by visual interpretation of satellite imagery, possibly after applying enhancement routines, and also by imagery classification. The Landsat Data Continuity Mission (LDCM - Landsat 8) is the latest satellite in the Landsat series, launched in February 2013. Landsat 8 medium spatial resolution multispectral imagery is of particular interest for extracting land cover because of its fine spectral resolution, its 12-bit radiometric quantization, the capability of merging the 15-meter panchromatic band with the 30-meter multispectral imagery, and the free data policy. In this paper, Landsat 8 multispectral and panchromatic imagery of the surroundings of a lake in north-western Greece is used. Land cover information is extracted using suitable digital image processing software. The rich spectral content of the multispectral image is combined with the high spatial resolution of the panchromatic image by applying image fusion (pansharpening), facilitating visual image interpretation to delineate land cover. Further processing concerns supervised image classification: the pansharpened image was classified first, followed by the multispectral image, and corresponding comparative considerations are also presented.
Enhancing Spatial Resolution of Remotely Sensed Imagery Using Deep Learning
NASA Astrophysics Data System (ADS)
Beck, J. M.; Bridges, S.; Collins, C.; Rushing, J.; Graves, S. J.
2017-12-01
Researchers at the Information Technology and Systems Center at the University of Alabama in Huntsville are using Deep Learning with Convolutional Neural Networks (CNNs) to develop a method for enhancing the spatial resolutions of moderate resolution (10-60m) multispectral satellite imagery. This enhancement will effectively match the resolutions of imagery from multiple sensors to provide increased global temporal-spatial coverage for a variety of Earth science products. Our research is centered on using Deep Learning for automatically generating transformations for increasing the spatial resolution of remotely sensed images with different spatial, spectral, and temporal resolutions. One of the most important steps in using images from multiple sensors is to transform the different image layers into the same spatial resolution, preferably the highest spatial resolution, without compromising the spectral information. Recent advances in Deep Learning have shown that CNNs can be used to effectively and efficiently upscale or enhance the spatial resolution of multispectral images with the use of an auxiliary data source such as a high spatial resolution panchromatic image. In contrast, we are using both the spatial and spectral details inherent in low spatial resolution multispectral images for image enhancement without the use of a panchromatic image. This presentation will discuss how this technology will benefit many Earth Science applications that use remotely sensed images with moderate spatial resolutions.
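A minimal sketch of a CNN-based spatial-enhancement model of the kind described above, assuming a residual SRCNN-style design in PyTorch with a bicubic baseline and a x2 factor; the layer sizes, band count, and scale are illustrative assumptions, not the presenters' architecture.

    import torch
    import torch.nn as nn

    class MultispectralSR(nn.Module):
        def __init__(self, n_bands=6, scale=2):
            super().__init__()
            self.upsample = nn.Upsample(scale_factor=scale, mode="bicubic", align_corners=False)
            self.net = nn.Sequential(
                nn.Conv2d(n_bands, 64, kernel_size=9, padding=4), nn.ReLU(),
                nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv2d(32, n_bands, kernel_size=5, padding=2),
            )

        def forward(self, x):                 # x: (batch, bands, H, W)
            coarse = self.upsample(x)         # bicubic baseline, no panchromatic band used
            return coarse + self.net(coarse)  # CNN predicts the residual detail

    # model = MultispectralSR(); sr = model(torch.rand(1, 6, 64, 64))  # -> (1, 6, 128, 128)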
NASA Astrophysics Data System (ADS)
Taruttis, Adrian; Razansky, Daniel; Ntziachristos, Vasilis
2012-02-01
Optoacoustic imaging has enabled the visualization of optical contrast at high resolution in deep tissue. Our multispectral optoacoustic tomography (MSOT) imaging results reveal internal tissue heterogeneity, where the underlying distribution of specific endogenous and exogenous sources of absorption can be resolved in detail. Technical advances in cardiac imaging allow motion-resolved multispectral measurements of the heart, opening the way for studies of cardiovascular disease. We further demonstrate fast characterization of the pharmacokinetic profiles of light-absorbing agents. Overall, our MSOT findings indicate new possibilities in high resolution imaging of functional and molecular parameters.
Solid state high resolution multi-spectral imager CCD test phase
NASA Technical Reports Server (NTRS)
1973-01-01
The program consisted of measuring the performance characteristics of charge-coupled linear imaging devices and of a study defining a multispectral imaging system employing advanced solid-state photodetection techniques.
Fusion of multi-spectral and panchromatic images based on 2D-PWVD and SSIM
NASA Astrophysics Data System (ADS)
Tan, Dongjie; Liu, Yi; Hou, Ruonan; Xue, Bindang
2016-03-01
A combined method using the 2D pseudo Wigner-Ville distribution (2D-PWVD) and the structural similarity (SSIM) index is proposed for the fusion of a low resolution multi-spectral (MS) image and a high resolution panchromatic (PAN) image. First, the intensity component of the multi-spectral image is extracted with a generalized IHS transform. Then, the spectrum diagrams of the intensity component and of the panchromatic image are obtained with the 2D-PWVD. Different fusion rules are designed for different frequency ranges of the spectrum diagrams. The SSIM index is used to evaluate the high frequency information of the spectrum diagrams and adaptively assign the weights in the fusion process. After the new spectrum diagram is obtained according to the fusion rule, the final fused image is produced by the inverse 2D-PWVD and the inverse GIHS transform. Experimental results show that the proposed method can obtain high quality fused images.
Computational multispectral video imaging [Invited].
Wang, Peng; Menon, Rajesh
2018-01-01
Multispectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information to a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrated spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
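A minimal sketch of the regularization-based inversion named above, assuming each group of sensor pixels records a coded measurement m = A s of the local spectrum s through a calibrated transfer matrix A; ridge (Tikhonov) regularization is one common choice, and the names and regularization weight are assumptions rather than the authors' calibration or solver.

    import numpy as np

    def recover_spectrum(A, m, lam=1e-2):
        """Solve min_s ||A s - m||^2 + lam ||s||^2 in closed form.

        A: (n_pixels, n_wavelengths) calibrated spatial-spectral code matrix,
        m: (n_pixels,) measured sensor values -> returns the spectrum estimate s.
        """
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ m)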
USDA-ARS?s Scientific Manuscript database
Using unmanned aircraft systems (UAS) as remote sensing platforms offers the unique ability for repeated deployment for acquisition of high temporal resolution data at very high spatial resolution. Most image acquisitions from UAS have been in the visible bands, while multispectral remote sensing ap...
Results of the spatial resolution simulation for multispectral data (resolution brochures)
NASA Technical Reports Server (NTRS)
1982-01-01
The variable information content of Earth Resource products at different levels of spatial resolution and in different spectral bands is addressed. A low-cost brochure that scientists and laymen could use to visualize the effects of increasing the spatial resolution of multispectral scanner images was produced.
A New Pansharpening Method Based on Spatial and Spectral Sparsity Priors.
He, Xiyan; Condat, Laurent; Bioucas-Dias, Jose; Chanussot, Jocelyn; Xia, Junshi
2014-06-27
The development of multisensor systems in recent years has led to a great increase in the amount of available remote sensing data. Image fusion techniques aim at inferring high quality images of a given area from degraded versions of the same area obtained by multiple sensors. This paper focuses on pansharpening, which is the inference of a high spatial resolution multispectral image from two degraded versions with complementary spectral and spatial resolution characteristics: a) a low spatial resolution multispectral image; and b) a high spatial resolution panchromatic image. We introduce a new variational model based on spatial and spectral sparsity priors for the fusion. In the spectral domain we encourage low-rank structure, whereas in the spatial domain we promote sparsity on the local differences. Given the fact that both panchromatic and multispectral images are integrations of the underlying continuous spectra using different channel responses, we propose to exploit appropriate regularizations based on both spatial and spectral links between the panchromatic and the fused multispectral images. A weighted version of the vector Total Variation (TV) norm of the data matrix is employed to align the spatial information of the fused image with that of the panchromatic image. With regard to spectral information, two different types of regularization are proposed to promote a soft constraint on the linear dependence between the panchromatic and the fused multispectral images. The first one directly estimates the linear coefficients from the observed panchromatic and low resolution multispectral images by Linear Regression (LR), while the second one employs Principal Component Pursuit (PCP) to obtain a robust recovery of the underlying low-rank structure. We also show that the two regularizers are strongly related. The basic idea of both regularizers is that the fused image should have low rank and preserve edge locations. We use a variation of the recently proposed Split Augmented Lagrangian Shrinkage (SALSA) algorithm to effectively solve the proposed variational formulations. Experimental results on simulated and real remote sensing images show the effectiveness of the proposed pansharpening method compared to the state-of-the-art.
NASA Technical Reports Server (NTRS)
Blonski, Slawomir; Gasser, Gerald; Russell, Jeffrey; Ryan, Robert; Terrie, Greg; Zanoni, Vicki
2001-01-01
Multispectral data requirements for Earth science applications are not always rigorously studied before a new remote sensing system is designed. A study of the spatial resolution, spectral bandpasses, and radiometric sensitivity requirements of real-world applications would focus the design on providing maximum benefits to the end-user community. To support systematic studies of multispectral data requirements, the Applications Research Toolbox (ART) has been developed at NASA's Stennis Space Center. The ART software allows users to create and assess simulated datasets while varying a wide range of system parameters. The simulations are based on data acquired by existing multispectral and hyperspectral instruments. The produced datasets can be further evaluated for specific end-user applications. Spectral synthesis of multispectral images from hyperspectral data is a key part of the ART software. In this process, hyperspectral image cubes are transformed into multispectral imagery without changes in spatial sampling and resolution. The transformation algorithm takes into account the spectral responses of both the synthesized, broad, multispectral bands and the utilized, narrow, hyperspectral bands. To validate the spectral synthesis algorithm, simulated multispectral images are compared with images collected near-coincidentally by the Landsat 7 ETM+ and the EO-1 ALI instruments. Hyperspectral images acquired with the airborne AVIRIS instrument and with the Hyperion instrument onboard the EO-1 satellite were used as input data for the presented simulations.
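A minimal sketch of the spectral-synthesis step described above: a broad multispectral band is formed as a response-weighted average of the narrow hyperspectral bands. The relative spectral response (RSR) handling and array shapes are assumptions; the ART software's actual algorithm is not reproduced here.

    import numpy as np

    def synthesize_band(hyper_cube, hyper_wavelengths, rsr_wavelengths, rsr_values):
        """hyper_cube: (bands, H, W); wavelength arrays in increasing order (nm).

        Returns one simulated broad-band image (H, W) for the multispectral band
        whose relative spectral response is given by (rsr_wavelengths, rsr_values).
        """
        rsr = np.interp(hyper_wavelengths, rsr_wavelengths, rsr_values, left=0.0, right=0.0)
        weights = rsr / rsr.sum()                     # normalize the sampled response
        return np.tensordot(weights, hyper_cube.astype(float), axes=([0], [0]))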
DOE Office of Scientific and Technical Information (OSTI.GOV)
Getman, Daniel J
2008-01-01
Many attempts to observe changes in terrestrial systems over time would be significantly enhanced if it were possible to improve the accuracy of classifications of low-resolution historic satellite data. In an effort to examine improving the accuracy of historic satellite image classification by combining satellite and air photo data, two experiments were undertaken in which low-resolution multispectral data and high-resolution panchromatic data were combined and then classified using the ECHO spectral-spatial image classification algorithm and the Maximum Likelihood technique. The multispectral data consisted of 6 multispectral channels (30-meter pixel resolution) from Landsat 7. These data were augmented with panchromatic data (15m pixel resolution) from Landsat 7 in the first experiment, and with a mosaic of digital aerial photography (1m pixel resolution) in the second. The addition of the Landsat 7 panchromatic data provided a significant improvement in the accuracy of classifications made using the ECHO algorithm. Although the inclusion of aerial photography provided an improvement in accuracy, this improvement was only statistically significant at a 40-60% level. These results suggest that once error levels associated with combining aerial photography and multispectral satellite data are reduced, this approach has the potential to significantly enhance the precision and accuracy of classifications made using historic remotely sensed data, as a way to extend the time range of efforts to track temporal changes in terrestrial systems.
NASA Astrophysics Data System (ADS)
McMackin, Lenore; Herman, Matthew A.; Weston, Tyler
2016-02-01
We present the design of a multi-spectral imager built using the architecture of the single-pixel camera. The architecture is enabled by the novel sampling theory of compressive sensing implemented optically using the Texas Instruments DLP™ micro-mirror array. The array not only implements spatial modulation necessary for compressive imaging but also provides unique diffractive spectral features that result in a multi-spectral, high-spatial resolution imager design. The new camera design provides multi-spectral imagery in a wavelength range that extends from the visible to the shortwave infrared without reduction in spatial resolution. In addition to the compressive imaging spectrometer design, we present a diffractive model of the architecture that allows us to predict a variety of detailed functional spatial and spectral design features. We present modeling results, architectural design and experimental results that prove the concept.
NASA Astrophysics Data System (ADS)
Wicaksono, Pramaditya; Salivian Wisnu Kumara, Ignatius; Kamal, Muhammad; Afif Fauzan, Muhammad; Zhafarina, Zhafirah; Agus Nurswantoro, Dwi; Noviaris Yogyantoro, Rifka
2017-12-01
Although spectrally different, seagrass species may not be able to be mapped from multispectral remote sensing images due to the limitation of their spectral resolution. Therefore, it is important to quantitatively assess the possibility of mapping seagrass species using multispectral images by resampling seagrass species spectra to multispectral bands. Seagrass species spectra were measured on harvested seagrass leaves. The spectral resolution of the multispectral images used in this research was adopted from WorldView-2, Quickbird, Sentinel-2A, ASTER VNIR, and Landsat 8 OLI. These images are widely available and can serve as a good representative set and baseline for previous or future remote sensing images. Seagrass species considered in this research are Enhalus acoroides (Ea), Thalassodendron ciliatum (Tc), Thalassia hemprichii (Th), Cymodocea rotundata (Cr), Cymodocea serrulata (Cs), Halodule uninervis (Hu), Halodule pinifolia (Hp), Syringodium isoetifolium (Si), Halophila ovalis (Ho), and Halophila minor (Hm). The multispectral resampling analysis indicates that the resampled spectra exhibit a shape and pattern similar to the original spectra but are less precise and lose the unique absorption features of the seagrass species. Relying on spectral bands alone, multispectral images are not effective in mapping these seagrass species individually, which is shown by the poor and inconsistent results of the Spectral Angle Mapper (SAM) classification technique in classifying seagrass species using seagrass species spectra as pure endmembers. Only Sentinel-2A produced an acceptable classification result using SAM.
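A minimal sketch of the resampling-and-SAM test described above, assuming simple rectangular band ranges instead of full sensor response curves; the band edges, array shapes, and function names are illustrative assumptions.

    import numpy as np

    def resample_to_bands(wavelengths, reflectance, band_edges):
        """band_edges: list of (lo, hi) in nm -> one mean reflectance per band."""
        return np.array([reflectance[(wavelengths >= lo) & (wavelengths <= hi)].mean()
                         for lo, hi in band_edges])

    def spectral_angle(pixel, endmember):
        cos = np.dot(pixel, endmember) / (np.linalg.norm(pixel) * np.linalg.norm(endmember))
        return np.arccos(np.clip(cos, -1.0, 1.0))      # radians; smaller = better match

    def sam_classify(image, endmembers):
        """image: (bands, H, W); endmembers: (classes, bands) -> label map (H, W)."""
        pixels = image.reshape(image.shape[0], -1).T
        angles = np.array([[spectral_angle(p, e) for e in endmembers] for p in pixels])
        return angles.argmin(axis=1).reshape(image.shape[1:])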
NASA Astrophysics Data System (ADS)
Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz
2017-04-01
This paper proposes three multisharpening approaches to enhance the spatial resolution of urban hyperspectral remote sensing images. These approaches, related to linear-quadratic spectral unmixing techniques, use a linear-quadratic nonnegative matrix factorization (NMF) multiplicative algorithm. These methods begin by unmixing the observable high-spectral/low-spatial resolution hyperspectral and high-spatial/low-spectral resolution multispectral images. The obtained high-spectral/high-spatial resolution features are then recombined, according to the linear-quadratic mixing model, to obtain an unobservable multisharpened high-spectral/high-spatial resolution hyperspectral image. In the first designed approach, hyperspectral and multispectral variables are independently optimized, once they have been coherently initialized. These variables are alternately updated in the second designed approach. In the third approach, the considered hyperspectral and multispectral variables are jointly updated. Experiments, using synthetic and real data, are conducted to assess the efficiency, in spatial and spectral domains, of the designed approaches and of linear NMF-based approaches from the literature. Experimental results show that the designed methods globally yield very satisfactory spectral and spatial fidelities for the multisharpened hyperspectral data. They also prove that these methods significantly outperform the used literature approaches.
Multispectral multisensor image fusion using wavelet transforms
Lemeshewsky, George P.
1999-01-01
Fusion techniques can be applied to multispectral and higher spatial resolution panchromatic images to create a composite image that is easier to interpret than the individual images. Wavelet transform-based multisensor, multiresolution fusion (a type of band sharpening) was applied to Landsat thematic mapper (TM) multispectral and coregistered higher resolution SPOT panchromatic images. The objective was to obtain increased spatial resolution, false color composite products to support the interpretation of land cover types wherein the spectral characteristics of the imagery are preserved to provide the spectral clues needed for interpretation. Since the fusion process should not introduce artifacts, a shift invariant implementation of the discrete wavelet transform (SIDWT) was used. These results were compared with those using the shift variant, discrete wavelet transform (DWT). Overall, the process includes a hue, saturation, and value color space transform to minimize color changes, and a reported point-wise maximum selection rule to combine transform coefficients. The performance of fusion based on the SIDWT and DWT was evaluated with a simulated TM 30-m spatial resolution test image and a higher resolution reference. Simulated imagery was made by blurring higher resolution color-infrared photography with the TM sensors' point spread function. The SIDWT based technique produced imagery with fewer artifacts and lower error between fused images and the full resolution reference. Image examples with TM and SPOT 10-m panchromatic illustrate the reduction in artifacts due to the SIDWT based fusion.
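A minimal sketch of shift-invariant wavelet fusion of the kind described above, using PyWavelets' stationary wavelet transform (swt2) with a point-wise maximum-magnitude rule on the detail coefficients; the wavelet, the level, and the omission of the HSV color-space step are simplifications, not the author's exact procedure.

    import numpy as np
    import pywt

    def swt_fuse(intensity, pan, wavelet="haar", level=2):
        """intensity, pan: float arrays whose sides are divisible by 2**level."""
        c_int = pywt.swt2(intensity, wavelet, level=level)
        c_pan = pywt.swt2(pan, wavelet, level=level)
        fused = []
        for (a_i, (h_i, v_i, d_i)), (a_p, (h_p, v_p, d_p)) in zip(c_int, c_pan):
            pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)  # max-magnitude rule
            fused.append((a_i, (pick(h_i, h_p), pick(v_i, v_p), pick(d_i, d_p))))
        return pywt.iswt2(fused, wavelet)   # keep approximation from the MS intensity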
NASA Astrophysics Data System (ADS)
Li, Jiao; Zhang, Songhe; Chekkoury, Andrei; Glasl, Sarah; Vetschera, Paul; Koberstein-Schwarz, Benno; Omar, Murad; Ntziachristos, Vasilis
2017-03-01
Multispectral optoacoustic mesoscopy (MSOM) has recently been introduced for cancer imaging; it has the potential for high resolution imaging of cancer development in vivo at depths beyond the diffusion limit. Based on spectral features, optoacoustic imaging is capable of visualizing angiogenesis and imaging the heterogeneity of malignant tumors through endogenous hemoglobin. However, high-resolution structural and functional imaging of the whole tumor mass is limited by modest penetration and image quality, due to the insufficient capability of ultrasound detectors and the two-dimensional scan geometry. In this study, we introduce a novel multi-spectral optoacoustic mesoscopy (MSOM) system for imaging subcutaneous or orthotopic tumors implanted in lab mice, with a high-frequency ultrasound linear array and a conical scanning geometry. Detailed volumetric images of the vasculature and tissue oxygen saturation in entire tumors are obtained in vivo, at depths up to 10 mm and with spatial resolutions approaching 70 μm. This performance enables the visualization of vasculature morphology and hypoxia conditions, which has been verified with ex vivo studies. These findings demonstrate the potential of MSOM for preclinical oncological studies of deep solid tumors, facilitating the characterization of tumor angiogenesis and the evaluation of treatment strategies.
Li, Hanlun; Zhang, Aiwu; Hu, Shaoxing
2015-01-01
This paper describes an airborne high resolution four-camera multispectral system which mainly consists of four identical monochrome cameras equipped with four interchangeable bandpass filters. For this multispectral system, an automatic multispectral data composing method was proposed. The homography registration model was chosen, and the scale-invariant feature transform (SIFT) and random sample consensus (RANSAC) were used to generate matching points. For the difficult registration problem between visible band images and near-infrared band images in cases lacking manmade objects, we presented an effective method based on the structural characteristics of the system. Experiments show that our method can acquire high quality multispectral images and the band-to-band alignment error of the composed multiple spectral images is less than 2.5 pixels. PMID:26205264
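A minimal sketch of the matching-and-homography registration step described above, using OpenCV's SIFT, a Lowe ratio test, and RANSAC; the thresholds are assumed values, the band images are assumed to be 8-bit grayscale, and the NIR-specific structural method the authors mention is not reproduced.

    import cv2
    import numpy as np

    def register_band(moving, fixed, ratio=0.75, ransac_thresh=3.0):
        """Warp the 'moving' band image onto the 'fixed' band via a RANSAC homography."""
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(moving, None)
        kp2, des2 = sift.detectAndCompute(fixed, None)
        matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < ratio * n.distance]  # Lowe ratio test
        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
        return cv2.warpPerspective(moving, H, fixed.shape[::-1])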
Time-of-Flight Microwave Camera.
Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh
2015-10-05
Microwaves can penetrate many obstructions that are opaque at visible wavelengths, however microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable "stealth" regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows "camera-like" behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum.
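A short back-of-the-envelope check of the figures quoted above: the 8-12 GHz sweep bandwidth implies a free-space range resolution of c/(2B), and 200 ps of time resolution corresponds to about 6 cm of optical path.

    c = 299_792_458.0              # speed of light, m/s
    bandwidth = 12e9 - 8e9         # 4 GHz X-band FMCW sweep
    print(c / (2 * bandwidth))     # ~0.0375 m range (depth) resolution
    print(c * 200e-12)             # ~0.06 m of path per 200 ps time bin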
NASA Astrophysics Data System (ADS)
Taruttis, Adrian; Herzog, Eva; Razansky, Daniel; Ntziachristos, Vasilis
2011-03-01
Multispectral Optoacoustic Tomography (MSOT) is an emerging technique for high resolution macroscopic imaging with optical and molecular contrast. We present cardiovascular imaging results from a multi-element real-time MSOT system recently developed for studies on small animals. Anatomical features relevant to cardiovascular disease, such as the carotid arteries, the aorta and the heart, are imaged in mice. The system's fast acquisition time, in tens of microseconds, allows images free of motion artifacts from heartbeat and respiration. Additionally, we present in-vivo detection of optical imaging agents, gold nanorods, at high spatial and temporal resolution, paving the way for molecular imaging applications.
High performance multi-spectral interrogation for surface plasmon resonance imaging sensors.
Sereda, A; Moreau, J; Canva, M; Maillart, E
2014-04-15
Surface plasmon resonance (SPR) sensing has proven to be a valuable tool in the field of surface interaction characterization, especially for biomedical applications where label-free techniques are of particular interest. In order to approach the theoretical resolution limit, most SPR-based systems have turned to either angular or spectral interrogation modes, which both offer very accurate real-time measurements, but at the expense of the 2-dimensional imaging capability, thereby decreasing the data throughput. In this article, we show numerically and experimentally how to combine the multi-spectral interrogation technique with 2D imaging, while finding an optimum in terms of resolution, accuracy, acquisition speed and reduction in data dispersion with respect to the classical reflectivity interrogation mode. This multi-spectral interrogation methodology is based on a robust five-parameter fit of the spectral reflectivity curve, which enables monitoring of the reflectivity spectral shift with a resolution of the order of ten picometers using only five wavelength measurements per point. Finally, such a multi-spectral plasmonic imaging system allows biomolecular interaction monitoring in a linear regime, independently of variations in the buffer optical index, which is illustrated on a DNA-DNA model case. © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Neukum, Gerhard; Jaumann, Ralf; Scholten, Frank; Gwinner, Klaus
2017-11-01
At the Institute of Space Sensor Technology and Planetary Exploration of the German Aerospace Center (DLR), the High Resolution Stereo Camera (HRSC) has been designed for international missions to the planet Mars. For more than three years an airborne version of this camera, the HRSC-A, has been successfully applied in many flight campaigns and in a variety of different applications. It combines 3D capabilities and high resolution with multispectral data acquisition. Variable resolutions can be generated depending on the camera control settings. A high-end GPS/INS system in combination with the multi-angle image information yields precise, high-frequency orientation data for the acquired image lines. In order to handle these data, a completely automated photogrammetric processing system has been developed, which allows the generation of multispectral 3D image products for large areas with planimetric and height accuracies in the decimeter range. This accuracy has been confirmed by detailed investigations.
Wang, Guizhou; Liu, Jianbo; He, Guojin
2013-01-01
This paper presents a new classification method for high-spatial-resolution remote sensing images based on a strategic mechanism of spatial mapping and reclassification. The proposed method includes four steps. First, the multispectral image is classified by a traditional pixel-based classification method (support vector machine). Second, the panchromatic image is subdivided by watershed segmentation. Third, the pixel-based multispectral image classification result is mapped to the panchromatic segmentation result based on a spatial mapping mechanism and the area dominant principle. During the mapping process, an area proportion threshold is set, and the regional property is defined as unclassified if the maximum area proportion does not surpass the threshold. Finally, unclassified regions are reclassified based on spectral information using the minimum distance to mean algorithm. Experimental results show that the classification method for high-spatial-resolution remote sensing images based on the spatial mapping mechanism and reclassification strategy can make use of both panchromatic and multispectral information, integrate the pixel- and object-based classification methods, and improve classification accuracy. PMID:24453808
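A minimal sketch of the spatial mapping step described above (the area dominant principle with an area proportion threshold); segment IDs and the threshold value are assumed inputs, and the subsequent spectral reclassification of unclassified regions is not shown.

    import numpy as np

    UNCLASSIFIED = -1

    def map_classes_to_segments(pixel_classes, segments, area_threshold=0.5):
        """pixel_classes, segments: integer label arrays of identical shape (H, W)."""
        out = np.full(pixel_classes.shape, UNCLASSIFIED, dtype=int)
        for seg_id in np.unique(segments):
            mask = segments == seg_id
            labels, counts = np.unique(pixel_classes[mask], return_counts=True)
            if counts.max() / mask.sum() >= area_threshold:   # area dominant principle
                out[mask] = labels[counts.argmax()]
            # otherwise the region stays UNCLASSIFIED for spectral reclassification
        return out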
Enhancing hyperspectral spatial resolution using multispectral image fusion: A wavelet approach
NASA Astrophysics Data System (ADS)
Jazaeri, Amin
High spectral and spatial resolution images have a significant impact in remote sensing applications. Because both the spatial and spectral resolutions of spaceborne sensors are fixed by design and cannot be increased afterwards, techniques such as image fusion must be applied to achieve such goals. This dissertation introduces the concept of wavelet fusion between hyperspectral and multispectral sensors in order to enhance the spectral and spatial resolution of a hyperspectral image. To test the robustness of this concept, images from Hyperion (hyperspectral sensor) and the Advanced Land Imager (multispectral sensor) were first co-registered and then fused using different wavelet algorithms. A regression-based fusion algorithm was also implemented for comparison purposes. The results show that the images fused using a combined bi-linear wavelet-regression algorithm have less error than those from other methods when compared to the ground truth. In addition, the combined regression-wavelet algorithm shows more immunity to misalignment of the pixels due to the lack of proper registration. The quantitative measures of average mean square error show that the performance of wavelet-based methods degrades when the spatial resolution of the hyperspectral image becomes eight times lower than that of its corresponding multispectral image. Regardless of which method of fusion is utilized, the main challenge in image fusion is image registration, which is also a very time intensive process. Because the combined regression-wavelet technique is computationally expensive, a hybrid technique based on regression and wavelet methods was also implemented to decrease computational overhead. However, the gain in faster computation was offset by the introduction of more error in the outcome. The secondary objective of this dissertation is to examine the feasibility and sensor requirements for performing onboard image fusion in future NASA missions. In this process, the main challenge of image registration was resolved by registering the input images using transformation matrices of previously acquired data. The composite image resulting from the fusion process remarkably matched the ground truth, indicating the possibility of real time onboard fusion processing.
Fourier Spectral Filter Array for Optimal Multispectral Imaging.
Jia, Jie; Barnard, Kenneth J; Hirakawa, Keigo
2016-04-01
Limitations of existing multispectral imaging modalities include speed, cost, range, spatial resolution, and application-specific system designs that lack the versatility of hyperspectral imaging modalities. In this paper, we propose a novel general-purpose single-shot passive multispectral imaging modality. Central to this design is a new type of spectral filter array (SFA) based not on the notion of spatially multiplexing narrowband filters, but instead aimed at enabling single-shot Fourier transform spectroscopy. We refer to this new SFA pattern as the Fourier SFA, and we prove that this design solves the problem of optimally sampling the hyperspectral image data.
NASA Technical Reports Server (NTRS)
1998-01-01
Under a Jet Propulsion Laboratory SBIR (Small Business Innovative Research), Cambridge Research and Instrumentation Inc., developed a new class of filters for the construction of small, low-cost multispectral imagers. The VariSpec liquid crystal enables users to obtain multi-spectral, ultra-high resolution images using a monochrome CCD (charge coupled device) camera. Application areas include biomedical imaging, remote sensing, and machine vision.
Pansharpening Techniques to Detect Mass Monument Damaging in Iraq
NASA Astrophysics Data System (ADS)
Baiocchi, V.; Bianchi, A.; Maddaluno, C.; Vidale, M.
2017-05-01
The recent mass destruction of monuments in Iraq cannot be monitored with terrestrial survey methodologies, for obvious safety reasons. For the same reasons, the use of classical aerial photogrammetry is not advisable, so it was natural to turn to multispectral Very High Resolution (VHR) satellite imagery. Nowadays the resolution of VHR satellite images is very close to that of airborne photogrammetric images, and they are usually acquired in multispectral mode. The combination of the various bands of the images is called pansharpening, and it can be carried out using different algorithms and strategies. The correct pansharpening methodology for a specific image must be chosen considering the multispectral characteristics of the satellite used and the particular application. In this paper a first definition of guidelines for the use of VHR multispectral imagery to detect monument destruction in unsafe areas is reported. The proposed methodology, agreed with UNESCO and soon to be used in Libya for the coastal area, has produced a first report delivered to the Iraqi authorities. Some of the most evident examples are reported to show the capabilities of damage identification using VHR images.
Spatial arrangement of color filter array for multispectral image acquisition
NASA Astrophysics Data System (ADS)
Shrestha, Raju; Hardeberg, Jon Y.; Khan, Rahat
2011-03-01
In the past few years there has been a significant volume of research carried out in the field of multispectral image acquisition. The focus of most of this work has been on multispectral image acquisition systems that usually require multiple subsequent shots (e.g. systems based on filter wheels, liquid crystal tunable filters, or active lighting). Recently, an alternative approach for one-shot multispectral image acquisition has been proposed, based on an extension of the color filter array (CFA) standard to produce more than three channels. We can thus introduce the concept of the multispectral color filter array (MCFA). This field, however, has not been much explored; in particular, little attention has been given to developing systems that focus on the reconstruction of scene spectral reflectance. In this paper, we explore how the spatial arrangement of the multispectral color filter array affects the acquisition accuracy, constructing MCFAs of different sizes. We have simulated acquisitions of several spectral scenes using different numbers of filters/channels, and compared the results with those obtained by the conventional regular MCFA arrangement, evaluating the precision of the reconstructed scene spectral reflectance in terms of spectral RMS error and colorimetric ΔE*ab color differences. It has been found that the precision and quality of the reconstructed images are significantly influenced by the spatial arrangement of the MCFA, and that the effect becomes more and more prominent as the number of channels increases. We believe that MCFA-based systems can be a viable alternative for affordable acquisition of multispectral color images, in particular for applications where spatial resolution can be traded off for spectral resolution. We have shown that the spatial arrangement of the array is an important design issue.
Dabo-Niang, S; Zoueu, J T
2012-09-01
In this communication, we demonstrate how kriging, combined with multispectral and multimodal microscopy, can enhance the resolution of images of malaria-infected cells and provide more details on their composition, for analysis and diagnosis. The results of this interpolation, applied to the two principal components of the multispectral and multimodal images, show that the examination of the content of Plasmodium falciparum infected human erythrocytes is improved. © 2012 The Authors Journal of Microscopy © 2012 Royal Microscopical Society.
Hybrid Image Fusion for Sharpness Enhancement of Multi-Spectral Lunar Images
NASA Astrophysics Data System (ADS)
Awumah, Anna; Mahanti, Prasun; Robinson, Mark
2016-10-01
Image fusion enhances the sharpness of a multi-spectral (MS) image by incorporating spatial details from a higher-resolution panchromatic (Pan) image [1,2]. Known applications of image fusion to planetary images are rare, although image fusion is well known for its applications to Earth-based remote sensing. In a recent work [3], six different image fusion algorithms were implemented and their performance was verified with images from the Lunar Reconnaissance Orbiter Camera (LROC). The image fusion procedure obtained a high-resolution multi-spectral (HRMS) product from LRO Narrow Angle Camera (used as Pan) and LRO Wide Angle Camera (used as MS) images. The results showed that the Intensity-Hue-Saturation (IHS) algorithm results in a product of high spatial quality, while the wavelet-based image fusion algorithm best preserves spectral quality among all the algorithms. In this work we show the results of a hybrid IHS-Wavelet image fusion algorithm applied to LROC MS images. The hybrid method provides the best HRMS product, both in terms of spatial resolution and preservation of spectral details. Results from hybrid image fusion can enable new science and increase the science return from existing LROC images. [1] Pohl, C., and J. L. Van Genderen. "Review article: multisensor image fusion in remote sensing: concepts, methods and applications." International Journal of Remote Sensing 19.5 (1998): 823-854. [2] Zhang, Yun. "Understanding image fusion." Photogramm. Eng. Remote Sens. 70.6 (2004): 657-661. [3] Mahanti, Prasun et al. "Enhancement of spatial resolution of the LROC Wide Angle Camera images." Archives, XXIII ISPRS Congress (2016).
Davis, Philip A.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the South Bamyan mineral district, which has areas with a spectral reflectance anomaly that require field investigation. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (© JAXA, 2006, 2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element.
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for South Bamyan) and the WGS84 datum. The final image mosaics for the South Bamyan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
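A minimal sketch of the band-reflectance adjustment between overlapping images described above, using a linear least-squares fit of one image's band values to those of the standard image over their overlap; the overlap mask and function names are assumptions, and the SPARKLE resolution enhancement and local-area stretch steps are not reproduced.

    import numpy as np

    def normalize_to_standard(band, standard_band, overlap_mask):
        """Fit band ~ gain*standard + offset over the overlap, then adjust the whole band."""
        x = band[overlap_mask].astype(float)
        y = standard_band[overlap_mask].astype(float)
        gain, offset = np.polyfit(x, y, deg=1)        # least-squares y ≈ gain*x + offset
        return gain * band.astype(float) + offset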
Higher resolution satellite remote sensing and the impact on image mapping
Watkins, Allen H.; Thormodsgard, June M.
1987-01-01
Recent advances in spatial, spectral, and temporal resolution of civil land remote sensing satellite data are presenting new opportunities for image mapping applications. The U.S. Geological Survey's experimental satellite image mapping program is evolving toward larger scale image map products with increased information content as a result of improved image processing techniques and increased resolution. Thematic mapper data are being used to produce experimental image maps at 1:100,000 scale that meet established U.S. and European map accuracy standards. Availability of high quality, cloud-free, 30-meter ground resolution multispectral data from the Landsat thematic mapper sensor, along with 10-meter ground resolution panchromatic and 20-meter ground resolution multispectral data from the recently launched French SPOT satellite, present new cartographic and image processing challenges.The need to fully exploit these higher resolution data increases the complexity of processing the images into large-scale image maps. The removal of radiometric artifacts and noise prior to geometric correction can be accomplished by using a variety of image processing filters and transforms. Sensor modeling and image restoration techniques allow maximum retention of spatial and radiometric information. An optimum combination of spectral information and spatial resolution can be obtained by merging different sensor types. These processing techniques are discussed and examples are presented.
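The sensor-merging idea raised at the end of this abstract (combining, for example, 10-meter SPOT panchromatic detail with 30-meter Thematic Mapper spectral bands) is often illustrated with a Brovey-type ratio merge. The sketch below is one generic approach under that assumption, not the specific processing discussed by the authors; the array names are hypothetical and the multispectral bands are assumed to be already resampled to the panchromatic grid.

```python
import numpy as np

def brovey_merge(ms, pan, eps=1e-6):
    """Brovey-style merge of multispectral bands with a panchromatic band.

    ms  : (n_bands, rows, cols) multispectral array resampled to the pan grid
    pan : (rows, cols) panchromatic array
    Each band is scaled by the ratio of the pan value to the mean
    multispectral intensity, injecting the pan spatial detail.
    """
    intensity = ms.mean(axis=0)
    ratio = pan / np.maximum(intensity, eps)   # guard against division by zero
    return ms * ratio                          # broadcasting applies the ratio per band
```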
Sousa, Daniel; Small, Christopher
2018-02-14
Planned hyperspectral satellite missions and the decreased revisit time of multispectral imaging offer the potential for data fusion to leverage both the spectral resolution of hyperspectral sensors and the temporal resolution of multispectral constellations. Hyperspectral imagery can also be used to better understand fundamental properties of multispectral data. In this analysis, we use five flight lines from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) archive with coincident Landsat 8 acquisitions over a spectrally diverse region of California to address the following questions: (1) How much of the spectral dimensionality of hyperspectral data is captured in multispectral data?; (2) Is the characteristic pyramidal structure of the multispectral feature space also present in the low order dimensions of the hyperspectral feature space at comparable spatial scales?; (3) How much variability in rock and soil substrate endmembers (EMs) present in hyperspectral data is captured by multispectral sensors? We find nearly identical partitions of variance, low-order feature space topologies, and EM spectra for hyperspectral and multispectral image composites. The resulting feature spaces and EMs are also very similar to those from previous global multispectral analyses, implying that the fundamental structure of the global feature space is present in our relatively small spatial subset of California. Finally, we find that the multispectral dataset well represents the substrate EM variability present in the study area - despite its inability to resolve narrow band absorptions. We observe a tentative but consistent physical relationship between the gradation of substrate reflectance in the feature space and the gradation of sand versus clay content in the soil classification system.
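The "partitions of variance" compared in this analysis are conventionally obtained from the eigenvalues of the band covariance matrix (principal component analysis). The sketch below shows that generic calculation for an assumed pixels-by-bands reflectance array; it is illustrative only and the names are hypothetical.

```python
import numpy as np

def partition_of_variance(pixels):
    """Fraction of total spectral variance carried by each principal component.

    pixels : (n_pixels, n_bands) array of reflectances (hyperspectral or
    multispectral); the returned vector sums to 1, and its rate of decay is
    one measure of spectral dimensionality.
    """
    cov = np.cov(pixels, rowvar=False)          # band-by-band covariance
    eigvals = np.linalg.eigvalsh(cov)[::-1]     # eigenvalues, largest first
    return eigvals / eigvals.sum()
```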
Davis, Philip A.; Cagney, Laura E.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Balkhab mineral district, which has copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). 
Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Balkhab) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the Balkhab area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Balkhab study area, one subarea was designated for detailed field investigations (that is, the Balkhab Prospect subarea); this subarea was extracted from the area's image mosaic and is provided as separate embedded geotiff images.
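The local-area histogram stretch applied to each band (Davis, 2007) is described only in outline in these abstracts. A minimal proxy is a moving min/max stretch, sketched below; it uses a square window of roughly the stated 315-m radius (about 126 pixels at the 2.5-m enhanced resolution, an assumption) rather than a true circular neighborhood, and it is not the Davis (2007) algorithm.

```python
import numpy as np
from scipy import ndimage

def local_area_stretch(band, radius_px=126, out_max=255.0):
    """Stretch each pixel against the min/max of its local neighborhood.

    band      : 2-D array of one image band
    radius_px : half-width of the (square) neighborhood in pixels
    """
    size = 2 * radius_px + 1
    local_min = ndimage.minimum_filter(band, size=size)
    local_max = ndimage.maximum_filter(band, size=size)
    span = np.maximum(local_max - local_min, 1e-6)   # avoid division by zero
    return out_max * (band - local_min) / span
```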
Davis, Philip A.; Cagney, Laura E.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Katawas mineral district, which has gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006).
Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Katawas) and the WGS84 datum. The final image mosaics are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Katawas study area, one subarea was designated for detailed field investigation (that is, the Gold subarea); this subarea was extracted from the area's image mosaic and is provided as a separate embedded geotiff image.
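The automated control-point coregistration mentioned in these abstracts is a USGS algorithm whose details are not given here. As a loose illustration of how a sub-pixel offset between a panchromatic image and a registered multispectral base might be estimated, the sketch below uses phase correlation from scikit-image; this is an assumed stand-in, not the USGS control-point method, and it recovers only a global translation.

```python
from skimage.registration import phase_cross_correlation

def estimate_offset(reference, moving, upsample=10):
    """Estimate the (row, col) shift of `moving` relative to `reference`.

    Both inputs are 2-D arrays covering the same footprint at the same pixel
    size; `upsample` controls the sub-pixel precision of the estimate.
    """
    shift, error, _ = phase_cross_correlation(reference, moving,
                                              upsample_factor=upsample)
    return shift   # apply the negative of this shift to align `moving`
```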
Davis, Philip A.; Cagney, Laura E.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the North Takhar mineral district, which has placer gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for North Takhar) and the WGS84 datum. The final image mosaics were subdivided into nine overlapping tiles or quadrants because of the large size of the target area. The nine image tiles (or quadrants) for the North Takhar area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
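The radiance and relative-reflectance conversions cited to Davis (2006) are not reproduced in these abstracts. The sketch below shows a generic digital-number-to-radiance-to-top-of-atmosphere-reflectance calculation as one plausible form of that step; the calibration gain, offset, and band solar irradiance are sensor-specific values assumed here, and the actual Davis (2006) method may differ.

```python
import numpy as np

def dn_to_toa_reflectance(dn, gain, offset, esun, sun_elev_deg, d_au=1.0):
    """Convert raw digital numbers to top-of-atmosphere reflectance.

    radiance    = gain * DN + offset                     (W m-2 sr-1 um-1)
    reflectance = pi * L * d^2 / (ESUN * cos(theta)), with theta the solar
    zenith angle and d the Earth-Sun distance in astronomical units.
    """
    radiance = gain * np.asarray(dn, dtype=float) + offset
    sun_zenith = np.radians(90.0 - sun_elev_deg)
    return np.pi * radiance * d_au**2 / (esun * np.cos(sun_zenith))
```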
Davis, Philip A.; Cagney, Laura E.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Baghlan mineral district, which has industrial clay and gypsum deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2006, 2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Baghlan) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the Baghlan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
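The delivered products are three-band composites written as GeoTIFFs with embedded georeferencing in UTM zone 42N on WGS84, which is why the world files are optional. A minimal sketch of writing such a file, assuming the rasterio library and an 8-bit composite already assembled in memory (the function and variable names are hypothetical):

```python
import rasterio
from rasterio.transform import from_origin

def write_composite(path, bands, ulx, uly, pixel_size=2.5):
    """Write a (3, rows, cols) uint8 composite as a georeferenced GeoTIFF.

    ulx, uly   : map coordinates of the upper-left corner (meters)
    pixel_size : ground resolution in meters (2.5 m for the enhanced mosaics)
    EPSG:32642 is WGS84 / UTM zone 42N.
    """
    transform = from_origin(ulx, uly, pixel_size, pixel_size)
    profile = dict(driver="GTiff", height=bands.shape[1], width=bands.shape[2],
                   count=3, dtype="uint8", crs="EPSG:32642",
                   transform=transform)
    with rasterio.open(path, "w", **profile) as dst:
        dst.write(bands)   # writes all three bands at once
```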
Davis, Philip A.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Uruzgan mineral district, which has tin and tungsten deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008, 2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Uruzgan) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Uruzgan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
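The SPARKLE resolution-enhancement logic itself is described in Davis (2006) and is not given in these abstracts. For orientation only, the sketch below shows a simple high-pass-filter pansharpening, a generic alternative that injects panchromatic detail into multispectral bands already resampled to the 2.5-m grid; it is explicitly not the SPARKLE method, and the names and blur parameter are assumptions.

```python
import numpy as np
from scipy import ndimage

def highpass_pansharpen(ms_up, pan, blur_sigma=2.0):
    """Add panchromatic spatial detail to resampled multispectral bands.

    ms_up : (n_bands, rows, cols) multispectral array on the pan grid
    pan   : (rows, cols) panchromatic array
    The injected detail is the pan band minus a Gaussian low-pass version of it.
    """
    detail = pan - ndimage.gaussian_filter(pan, sigma=blur_sigma)
    return ms_up + detail[np.newaxis, :, :]
```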
Davis, Philip A.; Cagney, Laura E.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the South Helmand mineral district, which has travertine deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (41 for South Helmand) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the South Helmand area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
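The two delivered composites are assembled from the four enhanced AVNIR bands: natural color from the blue, green, and red bands and color infrared from the green, red, and near-infrared bands. A minimal sketch of that stacking follows, assuming the conventional display order (red-green-blue channels for natural color, near-infrared rendered as red in the color-infrared product); the band order of the input array is an assumption.

```python
import numpy as np

def build_composites(avnir):
    """Build the two three-band composites from a (4, rows, cols) AVNIR array
    ordered (blue, green, red, nir)."""
    blue, green, red, nir = avnir
    natural_color = np.stack([red, green, blue])    # R, G, B display channels
    color_infrared = np.stack([nir, red, green])    # NIR rendered as red
    return natural_color, color_infrared
```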
Davis, Philip A.; Cagney, Laura E.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Bakhud mineral district, which has industrial fluorite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (41 for Bakhud) and the WGS84 datum. The final image mosaics were subdivided into nine overlapping tiles or quadrants because of the large size of the target area. The nine image tiles (or quadrants) for the Bakhud area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.; Cagney, Laura E.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Takhar mineral district, which has industrial evaporite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Takhar) and the WGS84 datum. The final image mosaics for the Takhar area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Parwan mineral district, which has gold and copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2006, 2007), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006).
Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Parwan) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the Parwan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.
2014-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ghazni2 mineral district, which has spectral reflectance anomalies indicative of gold, mercury, and sulfur deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA, 2008, 2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Ghazni2) and the WGS84 datum. The images for the Ghazni2 area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.
2014-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ghazni1 mineral district, which has spectral reflectance anomalies indicative of clay, aluminum, gold, silver, mercury, and sulfur deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA, 2008, 2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Ghazni1) and the WGS84 datum. The images for the Ghazni1 area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
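The local-area histogram stretch referenced above (Davis, 2007) is described here only by its 500-m radius. As a rough sketch of the idea, the snippet below rescales each pixel by the minimum and maximum within a local window; the square window, its width (roughly 400 pixels at the 2.5-m panchromatic resolution), and the min-max form of the stretch are assumptions for illustration, not the published algorithm.

```python
# Rough sketch of a local-area contrast stretch: each pixel is rescaled by the
# minimum and maximum of its neighborhood (square window assumed here as an
# approximation of the circular 500-m region).
import numpy as np
from scipy import ndimage

def local_area_stretch(band, window=401, out_max=255.0):
    band = band.astype(float)
    lo = ndimage.minimum_filter(band, size=window, mode="reflect")
    hi = ndimage.maximum_filter(band, size=window, mode="reflect")
    span = np.maximum(hi - lo, 1e-6)                 # avoid divide-by-zero in flat areas
    return np.clip((band - lo) / span, 0.0, 1.0) * out_max

# stretched = local_area_stretch(reflectance_band)   # one band of the enhanced mosaic
```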
Accuracy comparison in mapping water bodies using Landsat images and Google Earth Images
NASA Astrophysics Data System (ADS)
Zhou, Z.; Zhou, X.
2016-12-01
Much research has been done on the extraction of water bodies from satellite images. Water indices computed from multispectral images are the most commonly used methods for water body extraction. When extracting the area of water bodies from satellite images, accuracy may depend on the spatial resolution of the images and the relative size of the water bodies. To quantify the impact of spatial resolution and of water-body size (major and minor lengths) on the accuracy of water area extraction, we use Georgetown Lake, Montana, and coalbed methane (CBM) water retention ponds in the Montana Powder River Basin as test sites. Data sources include Landsat images and Google Earth images covering both a large water body and small ponds. First, we used water indices to extract water coverage from Landsat images for both the large lake and the small ponds. Second, we used a newly developed visible-index method to extract water coverage from Google Earth images covering the same sites. Third, we used an image-fusion method in which the Google Earth images are fused with multispectral Landsat images to obtain multispectral images at the same high spatial resolution as the Google Earth images. The actual areas of the lake and ponds were measured using GPS surveys. The results will be compared and the optimal method will be selected for water body extraction.
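The abstract does not name a specific water index; as one concrete example of the general approach, the sketch below computes McFeeters' NDWI from green and near-infrared Landsat bands and thresholds it at zero. The band arrays and the zero threshold are assumptions for illustration, not values from the study.

```python
# Illustrative water-index step: McFeeters' NDWI = (green - NIR) / (green + NIR),
# thresholded at zero to flag water pixels.
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    green = green.astype(float)
    nir = nir.astype(float)
    ndwi = (green - nir) / np.maximum(green + nir, 1e-6)
    return ndwi > threshold            # boolean mask of water pixels

# water = ndwi_water_mask(green_band, nir_band)   # hypothetical Landsat band arrays
```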
NASA Astrophysics Data System (ADS)
Saager, Rolf B.; Baldado, Melissa L.; Rowland, Rebecca A.; Kelly, Kristen M.; Durkin, Anthony J.
2018-04-01
With recent proliferation in compact and/or low-cost clinical multispectral imaging approaches and commercially available components, questions remain whether they adequately capture the requisite spectral content of their applications. We present a method to emulate the spectral range and resolution of a variety of multispectral imagers, based on in-vivo data acquired from spatial frequency domain spectroscopy (SFDS). This approach simulates spectral responses over 400 to 1100 nm. Comparing emulated data with full SFDS spectra of in-vivo tissue affords the opportunity to evaluate whether the sparse spectral content of these imagers can (1) account for all sources of optical contrast present (completeness) and (2) robustly separate and quantify sources of optical contrast (crosstalk). We validate the approach over a range of tissue-simulating phantoms, comparing the SFDS-based emulated spectra against measurements from an independently characterized multispectral imager. Emulated results match the imager across all phantoms (<3 % absorption, <1 % reduced scattering). In-vivo test cases (burn wounds and photoaging) illustrate how SFDS can be used to evaluate different multispectral imagers. This approach provides an in-vivo measurement method to evaluate the performance of multispectral imagers specific to their targeted clinical applications and can assist in the design and optimization of new spectral imaging devices.
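As a rough sketch of the emulation idea, the snippet below resamples a densely sampled reflectance spectrum through a set of Gaussian band-response functions to approximate what a sparser multispectral imager would record. The Gaussian band model, the center wavelengths, and the synthetic spectrum are placeholders, not the SFDS instrument's or any imager's actual responses.

```python
# Emulate a sparser multispectral imager from a dense 400-1100 nm spectrum by
# taking response-weighted means over assumed Gaussian band responses.
import numpy as np

def emulate_bands(wavelengths_nm, spectrum, centers_nm, fwhm_nm):
    sigma = fwhm_nm / 2.355                       # FWHM -> standard deviation
    emulated = []
    for c in centers_nm:
        resp = np.exp(-0.5 * ((wavelengths_nm - c) / sigma) ** 2)
        emulated.append(np.sum(resp * spectrum) / np.sum(resp))   # weighted mean reflectance
    return np.array(emulated)

wl = np.arange(400.0, 1101.0, 2.0)                # dense wavelength grid
spectrum = 0.3 + 0.1 * np.sin(wl / 120.0)         # synthetic reflectance spectrum
bands = emulate_bands(wl, spectrum, centers_nm=[470, 540, 620, 730, 850], fwhm_nm=20.0)
```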
Multispectral laser imaging for advanced food analysis
NASA Astrophysics Data System (ADS)
Senni, L.; Burrascano, P.; Ricci, M.
2016-07-01
A hardware-software apparatus for food inspection capable of realizing multispectral NIR laser imaging at four different wavelengths is herein discussed. The system was designed to operate in a through-transmission configuration to detect the presence of unwanted foreign bodies inside samples, whether packed or unpacked. A modified Lock-In technique was employed to counterbalance the significant signal intensity attenuation due to transmission across the sample and to extract the multispectral information more efficiently. The NIR laser wavelengths used to acquire the multispectral images can be varied to deal with different materials and to focus on specific aspects. In the present work the wavelengths were selected after a preliminary analysis to enhance the image contrast between foreign bodies and food in the sample, thus identifying the location and nature of the defects. Experimental results obtained from several specimens, with and without packaging, are presented and the multispectral image processing as well as the achievable spatial resolution of the system are discussed.
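The modified lock-in technique itself is not detailed in the abstract; to illustrate the underlying principle, the sketch below performs a plain digital lock-in demodulation that recovers the amplitude of a weak modulated transmission signal buried in noise. The sampling rate, modulation frequency, and synthetic signal are assumptions for the example.

```python
# Plain digital lock-in demodulation: multiply by quadrature references at the
# modulation frequency and average, recovering a weak amplitude despite noise.
import numpy as np

def lockin_amplitude(signal, fs, f_mod):
    t = np.arange(signal.size) / fs
    i = np.mean(signal * np.cos(2 * np.pi * f_mod * t))   # in-phase component
    q = np.mean(signal * np.sin(2 * np.pi * f_mod * t))   # quadrature component
    return 2.0 * np.hypot(i, q)                           # recovered amplitude

fs, f_mod = 50_000.0, 1_000.0                     # assumed sampling and modulation rates
t = np.arange(0.0, 0.5, 1.0 / fs)
signal = 1e-3 * np.sin(2 * np.pi * f_mod * t) + np.random.normal(0.0, 0.05, t.size)
print(lockin_amplitude(signal, fs, f_mod))        # close to the 1e-3 amplitude
```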
Automated road network extraction from high spatial resolution multi-spectral imagery
NASA Astrophysics Data System (ADS)
Zhang, Qiaoping
For the last three decades, the Geomatics Engineering and Computer Science communities have considered automated road network extraction from remotely-sensed imagery to be a challenging and important research topic. The main objective of this research is to investigate the theory and methodology of automated feature extraction for image-based road database creation, refinement or updating, and to develop a series of algorithms for road network extraction from high resolution multi-spectral imagery. The proposed framework for road network extraction from multi-spectral imagery begins with an image segmentation using the k-means algorithm. This step mainly concerns the exploitation of the spectral information for feature extraction. The road cluster is automatically identified using a fuzzy classifier based on a set of predefined road surface membership functions. These membership functions are established based on the general spectral signature of road pavement materials and the corresponding normalized digital numbers on each multi-spectral band. Shape descriptors of the Angular Texture Signature are defined and used to reduce the misclassifications between roads and other spectrally similar objects (e.g., crop fields, parking lots, and buildings). An iterative and localized Radon transform is developed for the extraction of road centerlines from the classified images. The purpose of the transform is to accurately and completely detect the road centerlines. It is able to find short, long, and even curvilinear lines. The input image is partitioned into a set of subset images called road component images. An iterative Radon transform is locally applied to each road component image. At each iteration, road centerline segments are detected based on an accurate estimation of the line parameters and line widths. Three localization approaches are implemented and compared using qualitative and quantitative methods. Finally, the road centerline segments are grouped into a road network. The extracted road network is evaluated against a reference dataset using a line segment matching algorithm. The entire process is unsupervised and fully automated. Based on extensive experimentation on a variety of remotely-sensed multi-spectral images, the proposed methodology achieves a moderate success in automating road network extraction from high spatial resolution multi-spectral imagery.
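As an illustration of the first stage of the described framework, the sketch below runs k-means on the multispectral bands to produce a label image from which a road cluster could subsequently be selected. The cluster count and array layout are assumptions; the fuzzy road classifier, Angular Texture Signature, and Radon-transform stages are not reproduced here.

```python
# First stage only: k-means clustering of the multispectral bands into a
# per-pixel label image.
import numpy as np
from sklearn.cluster import KMeans

def segment_multispectral(cube, n_clusters=8, seed=0):
    """cube: (rows, cols, bands) array of a multispectral image."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(float)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(pixels)
    return labels.reshape(rows, cols)             # per-pixel cluster labels

# labels = segment_multispectral(image_cube)      # image_cube is a placeholder name
```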
HPT: A High Spatial Resolution Multispectral Sensor for Microsatellite Remote Sensing
Takahashi, Yukihiro; Sakamoto, Yuji; Kuwahara, Toshinori
2018-01-01
Although nano/microsatellites have great potential as remote sensing platforms, the spatial and spectral resolutions of an optical payload instrument are limited. In this study, a high spatial resolution multispectral sensor, the High-Precision Telescope (HPT), was developed for the RISING-2 microsatellite. The HPT has four image sensors: three in the visible region of the spectrum used for the composition of true color images, and a fourth in the near-infrared region, which employs liquid crystal tunable filter (LCTF) technology for wavelength scanning. Band-to-band image registration methods have also been developed for the HPT and implemented in the image processing procedure. The processed images were compared with other satellite images, and proven to be useful in various remote sensing applications. Thus, LCTF technology can be considered an innovative tool that is suitable for future multi/hyperspectral remote sensing by nano/microsatellites. PMID:29463022
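The HPT band-to-band registration method is not spelled out in the abstract; a common generic approach is to estimate the inter-band translation by phase correlation and then resample, as sketched below. The use of phase correlation and the sub-pixel upsampling factor are assumptions for illustration, not the published procedure.

```python
# Generic band-to-band registration: estimate the translation between two bands
# by phase correlation, then resample the moving band onto the reference grid.
import numpy as np
from scipy import ndimage
from skimage.registration import phase_cross_correlation

def register_band(reference, moving, upsample_factor=10):
    shift, _, _ = phase_cross_correlation(reference, moving,
                                          upsample_factor=upsample_factor)
    aligned = ndimage.shift(moving, shift, order=1, mode="nearest")
    return aligned, shift                          # shift is (row, col) in pixels
```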
Efficient single-pixel multispectral imaging via non-mechanical spatio-spectral modulation.
Li, Ziwei; Suo, Jinli; Hu, Xuemei; Deng, Chao; Fan, Jingtao; Dai, Qionghai
2017-01-27
Combining spectral imaging with compressive sensing (CS) enables efficient data acquisition by fully utilizing the intrinsic redundancies in natural images. Current compressive multispectral imagers, which are mostly based on array sensors (e.g., CCD or CMOS), suffer from limited spectral range and relatively low photon efficiency. To address these issues, this paper reports a multispectral imaging scheme with a single-pixel detector. Inspired by the spatial-resolution redundancy of current spatial light modulators (SLMs) relative to the target reconstruction, we design an all-optical spectral splitting device to spatially split the light emitted from the object into several counterparts with different spectra. The separated spectral channels are spatially modulated simultaneously with individual codes by an SLM. This no-moving-part modulation ensures a stable and fast system, and the spatial multiplexing ensures an efficient acquisition. A proof-of-concept setup is built and validated for 8-channel multispectral imaging within the 420-720 nm wavelength range on both macro and micro objects, showing the potential of an efficient multispectral imager for macroscopic and biomedical applications.
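As a toy illustration of compressive single-pixel recovery, the sketch below simulates random binary modulation patterns, collects single-pixel measurements, and reconstructs the scene with ISTA (iterative soft-thresholding). Treating the scene itself as sparse, the pattern count, and the regularization weight are simplifications for the example; the paper's spectral-splitting optics and coding scheme are not modeled.

```python
# Toy single-pixel compressive recovery: y = A @ x with random binary patterns A,
# reconstructed by ISTA on an l1-regularized least-squares objective.
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.01, iters=300):
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(1)
n, m = 256, 96                                     # scene pixels, measurements (assumed)
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = 1.0      # sparse synthetic scene
A = rng.choice([0.0, 1.0], size=(m, n)) / np.sqrt(m)
x_rec = ista(A, A @ x_true)                        # approximate recovery of x_true
```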
High Spatial Resolution Commercial Satellite Imaging Product Characterization
NASA Technical Reports Server (NTRS)
Ryan, Robert E.; Pagnutti, Mary; Blonski, Slawomir; Ross, Kenton W.; Stanley, Thomas
2005-01-01
NASA Stennis Space Center's Remote Sensing group has been characterizing privately owned high-spatial-resolution multispectral imaging systems, such as IKONOS, QuickBird, and OrbView-3. Natural and man-made targets were used for spatial-resolution, radiometric, and geopositional characterizations. Higher spatial resolution also introduces significant adjacency effects that must be accounted for to achieve accurate, reliable radiometry.
Multi-spectral confocal microendoscope for in-vivo imaging
NASA Astrophysics Data System (ADS)
Rouse, Andrew Robert
The concept of in-vivo multi-spectral confocal microscopy is introduced. A slit-scanning multi-spectral confocal microendoscope (MCME) was built to demonstrate the technique. The MCME employs a flexible fiber-optic catheter coupled to a custom-built slit-scan confocal microscope fitted with a custom-built imaging spectrometer. The catheter consists of a fiber-optic imaging bundle linked to a miniature objective and focus assembly. The design and performance of the miniature objective and focus assembly are discussed. The 3-mm-diameter catheter may be used on its own or routed through the instrument channel of a commercial endoscope. The confocal nature of the system provides optical sectioning with 3-µm lateral resolution and 30-µm axial resolution. The prism-based multi-spectral detection assembly is typically configured to collect 30 spectral samples over the visible chromatic range. The spectral sampling rate varies from 4 nm/pixel at 490 nm to 8 nm/pixel at 660 nm, and the minimum resolvable wavelength difference varies from 7 nm to 18 nm over the same spectral range. Each of these characteristics is primarily dictated by the dispersive power of the prism. The MCME is designed to examine cellular structures during optical biopsy and to exploit the diagnostic information contained within the spectral domain. The primary applications for the system include diagnosis of disease in the gastro-intestinal tract and female reproductive system. Recent data from the grayscale imaging mode are presented. Preliminary multi-spectral results from phantoms, cell cultures, and excised human tissue are presented to demonstrate the potential of in-vivo multi-spectral imaging.
Airborne multispectral detection of regrowth cotton fields
NASA Astrophysics Data System (ADS)
Westbrook, John K.; Suh, Charles P.-C.; Yang, Chenghai; Lan, Yubin; Eyster, Ritchie S.
2015-01-01
Effective methods are needed for timely areawide detection of regrowth cotton plants because boll weevils (a quarantine pest) can feed and reproduce on these plants beyond the cotton production season. Airborne multispectral images of regrowth cotton plots were acquired on several dates after three shredding (i.e., stalk destruction) dates. Linear spectral unmixing (LSU) classification was applied to high-resolution airborne multispectral images of regrowth cotton plots to estimate the minimum detectable size and subsequent growth of plants. We found that regrowth cotton fields can be identified when the mean plant width is ~0.2 m for an image resolution of 0.1 m. LSU estimates of canopy cover of regrowth cotton plots correlated well (r² = 0.81) with the ratio of mean plant width to row spacing, a surrogate measure of plant canopy cover. The height and width of regrowth plants were both well correlated (r² = 0.94) with accumulated degree-days after shredding. The results will help boll weevil eradication program managers use airborne multispectral images to detect and monitor the regrowth of cotton plants after stalk destruction, and identify fields that may require further inspection and mitigation of boll weevil infestations.
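As a minimal sketch of the linear spectral unmixing step, the snippet below solves a per-pixel non-negative least-squares problem against a small endmember matrix. The four-band endmember spectra (bare soil and cotton canopy) and the choice of SciPy's NNLS solver are assumptions for illustration, not the study's actual endmembers or software.

```python
# Per-pixel linear spectral unmixing via non-negative least squares.
import numpy as np
from scipy.optimize import nnls

def unmix(pixel_spectrum, endmembers):
    """endmembers: (n_bands, n_endmembers); returns non-negative abundances."""
    abundances, _residual = nnls(endmembers, pixel_spectrum)
    return abundances

E = np.array([[0.10, 0.04],    # blue:  soil, canopy (synthetic reflectances)
              [0.15, 0.09],    # green
              [0.20, 0.05],    # red
              [0.30, 0.50]])   # NIR
pixel = 0.6 * E[:, 0] + 0.4 * E[:, 1]
print(unmix(pixel, E))         # approximately [0.6, 0.4]
```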
Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum.
Yasuma, Fumihito; Mitsunaga, Tomoo; Iso, Daisuke; Nayar, Shree K
2010-09-01
We propose the concept of a generalized assorted pixel (GAP) camera, which enables the user to capture a single image of a scene and, after the fact, control the tradeoff between spatial resolution, dynamic range and spectral detail. The GAP camera uses a complex array (or mosaic) of color filters. A major problem with using such an array is that the captured image is severely under-sampled for at least some of the filter types. This leads to reconstructed images with strong aliasing. We make four contributions in this paper: 1) we present a comprehensive optimization method to arrive at the spatial and spectral layout of the color filter array of a GAP camera. 2) We develop a novel algorithm for reconstructing the under-sampled channels of the image while minimizing aliasing artifacts. 3) We demonstrate how the user can capture a single image and then control the tradeoff of spatial resolution to generate a variety of images, including monochrome, high dynamic range (HDR) monochrome, RGB, HDR RGB, and multispectral images. 4) Finally, the performance of our GAP camera has been verified using extensive simulations that use multispectral images of real world scenes. A large database of these multispectral images has been made available at http://www1.cs.columbia.edu/CAVE/projects/gap_camera/ for use by the research community.
Davis, Philip A.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the North Bamyan mineral district, which has copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for North Bamyan) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the North Bamyan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
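For users of these Data Series products, the snippet below sketches one way a delivered natural-color GeoTIFF tile could be opened and its georeferencing inspected, using rasterio as one common choice of library. The file name and band order are hypothetical placeholders, not paths from the DS.

```python
# Open a delivered GeoTIFF tile, read the three composite bands, and report the
# UTM coordinate reference system and affine transform.
import numpy as np
import rasterio

with rasterio.open("natural_color_tile1.tif") as src:   # hypothetical file name
    rgb = src.read([1, 2, 3])                    # (bands, rows, cols)
    print(src.crs, src.transform)                # UTM zone and georeferencing
    composite = np.transpose(rgb, (1, 2, 0))     # (rows, cols, bands) for display
```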
Davis, Philip A.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ahankashan mineral district, which has copper and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008, 2009, 2010),but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Ahankashan) and the WGS84 datum. The final image mosaics were subdivided into five overlapping tiles or quadrants because of the large size of the target area. The five image tiles (or quadrants) for the Ahankashan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kunduz mineral district, which has celestite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Kunduz) and the WGS84 datum. The final image mosaics were subdivided into five overlapping tiles or quadrants because of the large size of the target area. The five image tiles (or quadrants) for the Kunduz area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Tourmaline mineral district, which has tin deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Tourmaline) and the WGS84 datum. The final image mosaics were subdivided into four overlapping tiles or quadrants because of the large size of the target area. The four image tiles (or quadrants) for the Tourmaline area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Dudkash mineral district, which has industrial mineral deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Dudkash) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Dudkash area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.; Cagney, Laura E.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Nalbandon mineral district, which has lead and zinc deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2007, 2008, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (41 for Nalbandon) and the WGS84 datum. The final image mosaics were subdivided into ten overlapping tiles or quadrants because of the large size of the target area. The ten image tiles (or quadrants) for the Nalbandon area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Nalbandon study area, two subareas were designated for detailed field investigations (that is, the Nalbandon District and Gharghananaw-Gawmazar subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
Davis, Philip A.; Cagney, Laura E.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Zarkashan mineral district, which has copper and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Zarkashan) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the Zarkashan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Zarkashan study area, three subareas were designated for detailed field investigations (that is, the Mine Area, Bolo Gold Prospect, and Luman-Tamaki Gold Prospect subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
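The local-area histogram stretch described above can be approximated by a moving-window contrast stretch. The sketch below is only an approximation of the Davis (2007) algorithm: it substitutes a square pixel window for the circular radius and applies a simple min/max linear stretch; the function name and default output range are assumptions.

```python
import numpy as np
from scipy import ndimage

def local_area_stretch(band, radius_px, out_min=0.0, out_max=255.0):
    """Stretch each pixel using the min/max of its surrounding window.

    radius_px : window half-width in pixels; for example, a 500-m radius on
                a 2.5-m resolution-enhanced mosaic is about 200 pixels.
    """
    size = 2 * radius_px + 1
    local_min = ndimage.minimum_filter(band, size=size)
    local_max = ndimage.maximum_filter(band, size=size)
    span = local_max - local_min
    safe_span = np.where(span > 0, span, 1.0)       # avoid division by zero
    stretched = out_min + (band - local_min) * (out_max - out_min) / safe_span
    return np.where(span > 0, stretched, out_min)   # flat neighborhoods map to out_min
```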
Davis, Philip A.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kandahar mineral district, which has bauxite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2006, 2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element.
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Kandahar) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Kandahar area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Kandahar study area, two subareas were designated for detailed field investigations (that is, the Obatu-Shela and Sekhab-Zamto Kalay subareas); these subareas were extracted from the area's image mosaic and are provided as separate embedded geotiff images.
Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Khanneshin mineral district, which has uranium, thorium, rare-earth-element, and apatite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Khanneshin) and the WGS84 datum. The final image mosaics were subdivided into nine overlapping tiles or quadrants because of the large size of the target area. The nine image tiles (or quadrants) for the Khanneshin area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Khanneshin study area, one subarea was designated for detailed field investigations (that is, the Khanneshin volcano subarea); this subarea was extracted from the area's image mosaic and is provided as separate embedded geotiff images.
Davis, Philip A.; Cagney, Laura E.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Panjsher Valley mineral district, which has emerald and silver-iron deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2009, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Panjsher Valley) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the Panjsher Valley area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Panjsher Valley study area, two subareas were designated for detailed field investigations (that is, the Emerald and Silver-Iron subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
Davis, Philip A.
2014-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Farah mineral district, which has spectral reflectance anomalies indicative of copper, zinc, lead, silver, and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2007, 2008, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element.
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Farah) and the WGS84 datum. The final image mosaics were subdivided into four overlapping tiles or quadrants because of the large size of the target area. The four image tiles (or quadrants) for the Farah area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Farah study area, five subareas were designated for detailed field investigations (that is, the FarahA through FarahE subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
Multispectral image enhancement processing for microsat-borne imager
NASA Astrophysics Data System (ADS)
Sun, Jianying; Tan, Zheng; Lv, Qunbo; Pei, Linlin
2017-10-01
With the rapid development of remote sensing imaging technology, microsatellites, a class of very small spacecraft, have emerged over the past few years, and many studies have focused on miniaturizing satellites for imaging purposes. Generally speaking, microsatellites weigh less than 100 kilograms, often less than 50 kilograms, making them roughly the size of a small refrigerator. However, the optical system design cannot be made ideal because of the limited volume and weight available on the satellite, and in most cases the unprocessed data captured by a microsatellite-borne imager cannot meet application needs. Spatial resolution is the key problem: for remote sensing applications, the higher the spatial resolution of the images, the wider the range of applications they can serve. Consequently, how super resolution (SR) and image fusion can be used to enhance image quality deserves study. Our team, the Key Laboratory of Computational Optical Imaging Technology, Academy of Opto-Electronics, is devoted to designing high-performance microsat-borne imagers and efficient image-processing algorithms. This paper presents a multispectral image enhancement framework for space-borne imagery that combines pan-sharpening and super-resolution techniques to address the limited spatial resolution of microsatellites. We test the framework on remote sensing images acquired by the CX6-02 satellite and report its SR performance. The experiments show that the proposed approach produces high-quality images.
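The super-resolution component of such a framework is not specified in detail in the abstract; as one common, generic approach, the sketch below implements single-image super resolution by iterative back-projection under an assumed Gaussian-blur-plus-decimation observation model. The scale factor, blur width, and iteration count are illustrative values, not those of the paper.

```python
import numpy as np
from scipy import ndimage

def iterative_back_projection(lr, scale=2, iterations=10, blur_sigma=1.0):
    """Single-image super resolution by iterative back-projection.

    lr : low-resolution 2-D image; the assumed observation model is a
         Gaussian blur followed by decimation by `scale`.
    """
    lr = np.asarray(lr, dtype=float)
    hr = ndimage.zoom(lr, scale, order=3)                 # initial bicubic-style guess
    for _ in range(iterations):
        # Simulate the low-resolution observation from the current estimate
        simulated = ndimage.gaussian_filter(hr, blur_sigma)[::scale, ::scale]
        error = lr - simulated
        # Back-project the residual into the high-resolution estimate
        hr += ndimage.zoom(error, scale, order=1)
    return hr
```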
An automated procedure for detection of IDP's dwellings using VHR satellite imagery
NASA Astrophysics Data System (ADS)
Jenerowicz, Malgorzata; Kemper, Thomas; Soille, Pierre
2011-11-01
This paper presents results for the estimation of dwelling structures in the Al Salam IDP camp, Southern Darfur, based on very high resolution multispectral satellite images analyzed with mathematical morphology. A series of image-processing procedures, feature-extraction methods, and textural analyses were applied to provide reliable information about dwelling structures. One issue in this context is the similarity between the spectral response of thatched dwelling roofs and that of their surroundings in IDP camps, which makes the exploitation of multispectral information crucial. The study shows the advantage of an automatic extraction approach and highlights the importance of detailed spatial and spectral analysis based on a multi-temporal dataset. Fusing the high-resolution panchromatic band with the lower resolution multispectral bands of the WorldView-2 satellite further improves the results and can therefore be useful to humanitarian aid agencies, supporting decisions and population estimates, especially when frequent revisits by spaceborne imaging systems are the only means of continued monitoring.
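The morphological workflow used in the paper is considerably more elaborate than can be shown here; as a toy illustration of the general idea, a white top-hat with a structuring element slightly larger than a dwelling footprint highlights bright, compact objects such as thatched roofs. The element size and threshold below are assumed values, not those of the study.

```python
import numpy as np
from scipy import ndimage

def white_tophat_candidates(panchromatic, roof_size_px=7, threshold=0.15):
    """Highlight bright objects smaller than the structuring element.

    roof_size_px : structuring-element width, chosen slightly larger than the
                   expected dwelling footprint in pixels (assumed value)
    threshold    : top-hat response above which a pixel is kept (assumed value)
    """
    selem = np.ones((roof_size_px, roof_size_px), dtype=bool)
    tophat = ndimage.white_tophat(np.asarray(panchromatic, dtype=float),
                                  footprint=selem)
    # Simple thresholding and connected-component labelling of candidates
    labels, n_candidates = ndimage.label(tophat > threshold)
    return tophat, labels, n_candidates
```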
NASA Astrophysics Data System (ADS)
Yang, Xue; Hu, Yajia; Li, Gang; Lin, Ling
2018-02-01
This paper proposes an optimized lighting method that applies a shaped-function signal to increase the dynamic range of a light-emitting diode (LED) multispectral imaging system. The method is based on the linear response zone of the analog-to-digital conversion (ADC) and the spectral response of the camera. Auxiliary light in a spectral region where the camera is more sensitive is introduced to increase the number of A/D quantization levels that fall within the linear response zone of the ADC and to improve the signal-to-noise ratio. The active light is modulated by the shaped-function signal to improve the gray-scale resolution of the image, while the auxiliary light is modulated by a constant-intensity signal, which makes it straightforward to recover the images acquired under the active-light irradiation. The least-squares method is employed to precisely extract the desired images. One wavelength in LED-based multispectral imaging was taken as an example. Experiments demonstrated that both the gray-scale resolution and the information accuracy of the images acquired with the proposed method were significantly improved. The method opens up avenues for hyperspectral imaging of biological tissue.
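The least-squares extraction step can be sketched as a per-pixel linear regression across the frame sequence. This is not the paper's exact formulation; the sketch assumes each frame is a linear combination of the known shaped-function modulation of the active light plus a constant term covering the auxiliary light and dark offset, and the function name is illustrative.

```python
import numpy as np

def extract_active_image(frames, modulation):
    """Recover the active-light image from a modulated frame stack.

    frames     : array of shape (n_frames, H, W), raw camera frames
    modulation : array of shape (n_frames,), known shaped-function values
                 driving the active LED (assumed normalized waveform)
    """
    n, h, w = frames.shape
    # Per-pixel model: frame = a * modulation + b (auxiliary light + offset)
    design = np.column_stack([modulation, np.ones(n)])
    observations = frames.reshape(n, -1).astype(float)
    coeffs, *_ = np.linalg.lstsq(design, observations, rcond=None)
    active = coeffs[0].reshape(h, w)    # image seen under the active light
    baseline = coeffs[1].reshape(h, w)  # constant auxiliary-plus-offset term
    return active, baseline
```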
Fusion of MultiSpectral and Panchromatic Images Based on Morphological Operators.
Restaino, Rocco; Vivone, Gemine; Dalla Mura, Mauro; Chanussot, Jocelyn
2016-04-20
Nonlinear decomposition schemes constitute an alternative to classical approaches to data fusion. In this paper we discuss the application of this methodology to a popular remote sensing task, pansharpening, which consists of fusing a low-resolution multispectral image with a high-resolution panchromatic image. We design a complete pansharpening scheme based on morphological half-gradient operators and demonstrate its suitability through comparison with state-of-the-art approaches. Four datasets acquired by the Pleiades, WorldView-2, IKONOS, and GeoEye-1 satellites are employed for the performance assessment, confirming the effectiveness of the proposed approach in producing top-quality images with a setting that is independent of the specific sensor.
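The half-gradient pyramid scheme of the paper is not reproduced here; the sketch below only illustrates the broader family of morphological detail-injection pansharpening, extracting a detail layer from the panchromatic band with a morphological low-pass (opening followed by closing) and adding it to each upsampled multispectral band. The structuring-element size, injection gain, and resolution ratio are illustrative.

```python
import numpy as np
from scipy import ndimage

def morphological_detail(pan, selem_size=5):
    """Detail layer: panchromatic minus a morphological low-pass (opening, then closing)."""
    pan = np.asarray(pan, dtype=float)
    footprint = np.ones((selem_size, selem_size), dtype=bool)
    smooth = ndimage.grey_closing(ndimage.grey_opening(pan, footprint=footprint),
                                  footprint=footprint)
    return pan - smooth

def pansharpen(ms_bands, pan, ratio=4, gain=1.0):
    """Inject panchromatic detail into each upsampled multispectral band.

    ms_bands : array of shape (n_bands, h, w), low-resolution bands
    pan      : array of shape (ratio*h, ratio*w), high-resolution panchromatic band
    """
    detail = morphological_detail(pan)
    sharpened = [ndimage.zoom(band.astype(float), ratio, order=3) + gain * detail
                 for band in ms_bands]
    return np.stack(sharpened)
```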
The Multispectral Imaging Science Working Group. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
Cox, S. C. (Editor)
1982-01-01
Results of the deliberations of the six multispectral imaging science working groups (Botany, Geography, Geology, Hydrology, Imaging Science and Information Science) are summarized. Consideration was given to documenting the current state of knowledge in terrestrial remote sensing without the constraints of preconceived concepts such as possible band widths, number of bands, and radiometric or spatial resolutions of present or future systems. The findings of each working group included a discussion of desired capabilities and critical developmental issues.
NASA Astrophysics Data System (ADS)
Matsuoka, M.
2012-07-01
A considerable number of methods for pansharpening remote-sensing images have been developed to generate higher spatial resolution multispectral images by the fusion of lower resolution multispectral images and higher resolution panchromatic images. Because pansharpening alters the spectral properties of multispectral images, method selection is one of the key factors influencing the accuracy of subsequent analyses such as land-cover classification or change detection. In this study, seven pixel-based pansharpening methods (additive wavelet intensity, additive wavelet principal component, generalized Laplacian pyramid with spectral distortion minimization, generalized intensity-hue-saturation (GIHS) transform, GIHS adaptive, Gram-Schmidt spectral sharpening, and block-based synthetic variable ratio) were compared using AVNIR-2 and PRISM onboard ALOS from the viewpoint of the preservation of spectral properties of AVNIR-2. A visual comparison was made between pansharpened images generated from spatially degraded AVNIR-2 and original images over urban, agricultural, and forest areas. The similarity of the images was evaluated in terms of the image contrast, the color distinction, and the brightness of the ground objects. In the quantitative assessment, three kinds of statistical indices, correlation coefficient, ERGAS, and Q index, were calculated by band and land-cover type. These scores were relatively superior in bands 2 and 3 compared with the other two bands, especially over urban and agricultural areas. Band 4 showed a strong dependency on the land-cover type. This was attributable to the differences in the observing spectral wavelengths of the sensors and local scene variances.
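The quantitative indices used in this comparison have standard closed forms. The sketch below computes ERGAS and the universal image quality index Q (Wang and Bovik) over full bands, assuming a reference image at the fused resolution; in practice Q is usually averaged over small sliding windows, and the correlation coefficient can be obtained directly from numpy.corrcoef. The default resolution ratio corresponds to the PRISM/AVNIR-2 pixel sizes mentioned above.

```python
import numpy as np

def ergas(reference, fused, ratio=2.5 / 10.0):
    """ERGAS: relative dimensionless global error in synthesis.

    reference, fused : arrays of shape (n_bands, H, W)
    ratio            : panchromatic pixel size / multispectral pixel size
    """
    terms = []
    for ref_b, fus_b in zip(reference, fused):
        rmse = np.sqrt(np.mean((ref_b - fus_b) ** 2))
        terms.append((rmse / np.mean(ref_b)) ** 2)
    return 100.0 * ratio * np.sqrt(np.mean(terms))

def q_index(x, y):
    """Universal image quality index for one band pair (global version)."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return 4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```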
The Panoramic Camera (PanCam) Instrument for the ESA ExoMars Rover
NASA Astrophysics Data System (ADS)
Griffiths, A.; Coates, A.; Jaumann, R.; Michaelis, H.; Paar, G.; Barnes, D.; Josset, J.
The recently approved ExoMars rover is the first element of the ESA Aurora programme and is slated to deliver the Pasteur exobiology payload to Mars by 2013. The 0.7 kg Panoramic Camera will provide multispectral stereo images with 65° field-of-view (1.1 mrad/pixel) and high resolution (85 µrad/pixel) monoscopic "zoom" images with 5° field-of-view. The stereo Wide Angle Cameras (WAC) are based on Beagle 2 Stereo Camera System heritage. The Panoramic Camera instrument is designed to fulfil the digital terrain mapping requirements of the mission as well as providing multispectral geological imaging, colour and stereo panoramic images, solar images for water vapour abundance and dust optical depth measurements and to observe retrieved subsurface samples before ingestion into the rest of the Pasteur payload. Additionally, the High Resolution Camera (HRC) can be used for high resolution imaging of interesting targets detected in the WAC panoramas and of inaccessible locations on crater or valley walls.
Davis, Philip A.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This DS consists of the locally enhanced ALOS image mosaics for each of the 24 mineral project areas (referred to herein as areas of interest), whose locality names, locations, and main mineral occurrences are shown on the index map of Afghanistan (fig. 1). ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency, but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
PRISM image orthorectification for one-half of the target areas was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using SPARKLE logic, which is described in Davis (2006). Each of the four-band images within each resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a specified radius that was usually 500 m. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (either 41 or 42) and the WGS84 datum. Most final image mosaics were subdivided into overlapping tiles or quadrants because of the large size of the target areas. The image tiles (or quadrants) for each area of interest are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Approximately one-half of the study areas have at least one subarea designated for detailed field investigations; the subareas were extracted from the area's image mosaic and are provided as separate embedded geotiff images.
Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Dusar-Shaida mineral district, which has copper and tin deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (41 for Dusar-Shaida) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Dusar-Shaida area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Dusar-Shaida study area, three subareas were designated for detailed field investigations (that is, the Dahana-Misgaran, Kaftar VMS, and Shaida subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kundalyan mineral district, which has porphyry copper and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Kundalyan) and the WGS84 datum. The final image mosaics were subdivided into five overlapping tiles or quadrants because of the large size of the target area. The five image tiles (or quadrants) for the Kundalyan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Kundalyan study area, three subareas were designated for detailed field investigations (that is, the Baghawan-Garangh, Charsu-Ghumbad, and Kunag Skarn subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Herat mineral district, which has barium and limestone deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 1,000-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Herat) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Herat area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Herat study area, one subarea was designated for detailed field investigations (that is, the Barium-Limestone subarea); this subarea was extracted from the area's image mosaic and is provided as separate embedded geotiff images.
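The local-area stretch described in these processing paragraphs rescales each pixel against the statistics of its neighborhood (a 1,000-m radius here, 500 m for other districts in this DS). A rough sketch of one way to approximate such a stretch with a moving-window mean and standard deviation follows, using scipy.ndimage.uniform_filter; the box window standing in for the circular neighborhood and the 2-sigma mapping are assumptions, not the Davis (2007) algorithm.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_stretch(band, radius_px, nsigma=2.0):
    """Stretch each pixel against the mean/std of its local window."""
    size = 2 * radius_px + 1                      # window width in pixels
    band = band.astype(np.float64)
    local_mean = uniform_filter(band, size=size)
    local_sqmean = uniform_filter(band * band, size=size)
    local_std = np.sqrt(np.maximum(local_sqmean - local_mean ** 2, 0.0))
    # Map [mean - n*std, mean + n*std] to [0, 255] pixel by pixel.
    lo = local_mean - nsigma * local_std
    hi = local_mean + nsigma * local_std
    stretched = (band - lo) / np.maximum(hi - lo, 1e-6) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

# For a 2.5-m resolution-enhanced mosaic, a 1,000-m radius is 400 pixels:
# enhanced_band = local_stretch(band, radius_px=400)
```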
Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Badakhshan mineral district, which has gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA,2007,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Badakhshan) and the WGS84 datum. The final image mosaics were subdivided into six overlapping tiles or quadrants because of the large size of the target area. The six image tiles (or quadrants) for the Badakhshan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Badakhshan study area, three subareas were designated for detailed field investigations (that is, the Bharak, Fayz-Abad, and Ragh subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kharnak-Kanjar mineral district, which has mercury deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 1,000-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Kharnak-Kanjar) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Kharnak-Kanjar area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Kharnak-Kanjar study area, three subareas were designated for detailed field investigations (that is, the Koh-e-Katif Passaband, Panjshah-Mullayan, and Sahebdad-Khanjar subareas); these subareas were extracted from the area's image mosaic and are provided as separate embedded geotiff images.
Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Haji-Gak mineral district, which has iron ore deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA,2006,2007), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then co-registered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Haji-Gak) and the WGS84 datum. The final image mosaics were subdivided into three overlapping tiles or quadrants because of the large size of the target area. The three image tiles (or quadrants) for the Haji-Gak area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Haji-Gak study area, three subareas were designated for detailed field investigations (that is, the Haji-Gak Prospect, Farenjal, and NE Haji-Gak subareas); these subareas were extracted from the area's image mosaic and are provided as separate embedded geotiff images.
Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Aynak mineral district, which has copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA,2008,2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Aynak) and the WGS84 datum. The final image mosaics were subdivided into four overlapping tiles or quadrants because of the large size of the target area. The four image tiles (or quadrants) for the Aynak area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Aynak study area, five subareas were designated for detailed field investigations (that is, the Bakhel-Charwaz, Kelaghey-Kakhay, Kharuti-Dawrankhel, Logar Valley, and Yagh-Darra/Gul-Darra subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
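Subdividing a large mosaic into overlapping GeoTIFF tiles, as described for the Aynak area and the other districts in this DS, can be done with windowed reads. Below is a hypothetical sketch using rasterio; the tile grid, overlap, and output filenames are illustrative, and the USGS workflow is not documented at this level of detail.

```python
import rasterio
from rasterio.windows import Window, transform as win_transform

def write_tiles(mosaic_path, n_cols, n_rows, overlap_px, prefix="tile"):
    """Split a georeferenced mosaic into an n_cols x n_rows grid of
    overlapping GeoTIFF tiles, preserving projection and geotransform."""
    with rasterio.open(mosaic_path) as src:
        tile_w = src.width // n_cols
        tile_h = src.height // n_rows
        for j in range(n_rows):
            for i in range(n_cols):
                col0 = max(i * tile_w - overlap_px, 0)
                row0 = max(j * tile_h - overlap_px, 0)
                w = min(tile_w + 2 * overlap_px, src.width - col0)
                h = min(tile_h + 2 * overlap_px, src.height - row0)
                window = Window(col0, row0, w, h)
                profile = src.profile.copy()
                profile.update(width=w, height=h,
                               transform=win_transform(window, src.transform))
                # Each tile keeps the mosaic's UTM projection and datum.
                with rasterio.open(f"{prefix}_{j}_{i}.tif", "w", **profile) as dst:
                    dst.write(src.read(window=window))
```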
Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ghunday-Achin mineral district, which has magnesite and talc deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Ghunday-Achin) and the WGS84 datum. The final image mosaics were subdivided into six overlapping tiles or quadrants because of the large size of the target area. The six image tiles (or quadrants) for the Ghunday-Achin area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Ghunday-Achin study area, two subareas were designated for detailed field investigations (that is, the Achin-Magnesite and Ghunday-Mamahel subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
Semiconductor Laser Multi-Spectral Sensing and Imaging
Le, Han Q.; Wang, Yang
2010-01-01
Multi-spectral laser imaging is a technique that can offer a combination of the laser capability of accurate spectral sensing with the desirable features of passive multispectral imaging. The technique can be used for detection, discrimination, and identification of objects by their spectral signature. This article describes and reviews the development and evaluation of semiconductor multi-spectral laser imaging systems. Although the method is certainly not specific to any laser technology, the use of semiconductor lasers is significant with respect to practicality and affordability. More relevantly, semiconductor lasers have their own characteristics; they offer excellent wavelength diversity but usually with modest power. Thus, system design and engineering issues are analyzed for approaches and trade-offs that can make the best use of semiconductor laser capabilities in multispectral imaging. A few systems were developed and the technique was tested and evaluated on a variety of natural and man-made objects. It was shown capable of high spectral resolution imaging which, unlike non-imaging point sensing, allows detecting and discriminating objects of interest even without a priori spectroscopic knowledge of the targets. Examples include material and chemical discrimination. It was also shown capable of dealing with the complexity of interpreting diffuse scattered spectral images and produced results that could otherwise be ambiguous with conventional imaging. Examples with glucose and spectral imaging of drug pills were discussed. Lastly, the technique was shown with conventional laser spectroscopy such as wavelength modulation spectroscopy to image a gas (CO). These results suggest the versatility and power of multi-spectral laser imaging, which can be practical with the use of semiconductor lasers. PMID:22315555
SPEKTROP DPU: optoelectronic platform for fast multispectral imaging
NASA Astrophysics Data System (ADS)
Graczyk, Rafal; Sitek, Piotr; Stolarski, Marcin
2010-09-01
In recent years there has been an increasing need for high-quality Earth imaging in airborne and space applications, because government and local authorities require up-to-date topographic data for administrative purposes. At the same time, interest in the environmental sciences, the push for an ecological approach, and efficient agriculture and forest management are also heavily supported by Earth images at various resolutions and spectral ranges. This paper, "SPEKTROP DPU: Opto-electronic platform for fast multi-spectral imaging," describes the architectural details of the data processing unit, part of a universal and modular platform that provides high-quality imaging functionality in aerospace applications.
NASA Astrophysics Data System (ADS)
Gao, M.; Li, J.
2018-04-01
Geometric correction is an important preprocessing step in applications of GF4 PMS imagery. Geometric correction based on the manual selection of ground control points is time-consuming and laborious. The more common method, automatic image registration against a reference image, involves several steps and parameters, and for the multispectral GF4 PMS sensor the best combination of steps and parameters needs to be identified. This study mainly focuses on three issues: the necessity of Rational Polynomial Coefficient (RPC) correction before automatic registration, the choice of base band for automatic registration, and the configuration of the GF4 PMS spatial resolution.
NASA Cold Land Processes Experiment (CLPX 2002/03): Spaceborne remote sensing
Robert E. Davis; Thomas H. Painter; Don Cline; Richard Armstrong; Terry Haran; Kyle McDonald; Rick Forster; Kelly Elder
2008-01-01
This paper describes satellite data collected as part of the 2002/03 Cold Land Processes Experiment (CLPX). These data include multispectral and hyperspectral optical imaging, and passive and active microwave observations of the test areas. The CLPX multispectral optical data include the Advanced Very High Resolution Radiometer (AVHRR), the Landsat Thematic Mapper/...
NASA Astrophysics Data System (ADS)
Liebel, L.; Körner, M.
2016-06-01
In optical remote sensing, the spatial resolution of images is crucial for numerous applications. Space-borne systems are most likely to be affected by a lack of spatial resolution, due to their natural disadvantage of a large distance between the sensor and the sensed object. Thus, methods for single-image super resolution are desirable to exceed the limits of the sensor. Apart from assisting visual inspection of datasets, post-processing operations such as segmentation or feature extraction can benefit from detailed and distinguishable structures. In this paper, we show that recently introduced state-of-the-art approaches for single-image super resolution of conventional photographs, making use of deep learning techniques such as convolutional neural networks (CNNs), can successfully be applied to remote sensing data. With a huge amount of training data available, end-to-end learning is reasonably easy to apply and can achieve results unattainable by conventional handcrafted algorithms. We trained our CNN on a specifically designed, domain-specific dataset in order to take into account the special characteristics of multispectral remote sensing data. This dataset consists of publicly available SENTINEL-2 images featuring 13 spectral bands, a ground resolution of up to 10 m, and a high radiometric resolution, thus satisfying our requirements in terms of quality and quantity. In experiments, we obtained results superior to competing approaches trained on generic image sets, which failed to reasonably scale satellite images with a high radiometric resolution, as well as to conventional interpolation methods.
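A minimal sketch of the kind of end-to-end network the abstract describes, assuming PyTorch and an SRCNN-style three-layer architecture operating on bicubically upsampled multispectral patches; the layer sizes and the 13-band input are assumptions, and the paper's exact network is not reproduced here.

```python
import torch
import torch.nn as nn

class SRCNNMultispectral(nn.Module):
    """Three-layer super-resolution CNN for multispectral patches."""
    def __init__(self, bands=13):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(bands, 64, kernel_size=9, padding=4),  # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, bands, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        # x: bicubically upsampled low-resolution patch, shape (N, bands, H, W)
        return self.net(x)

# Training would minimise an L2 loss between the network output and the
# high-resolution reference patches, e.g. with torch.optim.Adam.
```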
Hyperspectral Super-Resolution of Locally Low Rank Images From Complementary Multisource Data.
Veganzones, Miguel A; Simoes, Miguel; Licciardi, Giorgio; Yokoya, Naoto; Bioucas-Dias, Jose M; Chanussot, Jocelyn
2016-01-01
Remote sensing hyperspectral images (HSIs) are quite often low rank, in the sense that the data belong to a low dimensional subspace/manifold. This has been recently exploited for the fusion of low spatial resolution HSI with high spatial resolution multispectral images in order to obtain super-resolution HSI. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods generally decreases because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real world HSIs are locally low rank, that is, pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold, i.e., of dimension lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough that the problem is no longer ill-posed. We propose two alternative approaches to define the hyperspectral super-resolution through local dictionary learning using endmember induction algorithms. We also explore two alternatives to define the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semi-real data.
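A toy sketch of the patch-wise idea follows, assuming NumPy, a known spectral response matrix R mapping hyperspectral bands to multispectral bands, and MS/HS patches that are already coregistered; the plain SVD subspace and least-squares solve are stand-ins, and the paper's dictionary-learning and sliding-window/BPT machinery is not reproduced.

```python
import numpy as np

def fuse_patch(hs_patch, ms_patch, R, k):
    """Fuse one spatial patch of a low-res HSI with the corresponding
    high-res MSI patch under a locally low-rank assumption.

    hs_patch : (n_hs_bands, n_low_pixels)   low-resolution HSI pixels
    ms_patch : (n_ms_bands, n_high_pixels)  high-resolution MSI pixels
    R        : (n_ms_bands, n_hs_bands)     spectral response matrix
    k        : local subspace dimension (<= number of MS bands)
    """
    # Local spectral subspace from the HSI patch (left singular vectors).
    U, _, _ = np.linalg.svd(hs_patch, full_matrices=False)
    E = U[:, :k]                                    # (n_hs_bands, k)
    # Per-pixel subspace coefficients from the MSI patch: ms ~ R @ E @ A.
    A, *_ = np.linalg.lstsq(R @ E, ms_patch, rcond=None)
    # High-resolution hyperspectral estimate for this patch.
    return E @ A                                    # (n_hs_bands, n_high_pixels)
```

Because each patch spans only a small neighborhood, k stays at or below the number of multispectral bands and the per-patch regression remains well-posed, which is the point the abstract makes.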
Combined use of LiDAR data and multispectral earth observation imagery for wetland habitat mapping
NASA Astrophysics Data System (ADS)
Rapinel, Sébastien; Hubert-Moy, Laurence; Clément, Bernard
2015-05-01
Although wetlands play a key role in controlling flooding and nonpoint source pollution, sequestering carbon and providing an abundance of ecological services, the inventory and characterization of wetland habitats are most often limited to small areas. This explains why the understanding of their ecological functioning is still insufficient for a reliable functional assessment on areas larger than a few hectares. While LiDAR data and multispectral Earth Observation (EO) images are often used separately to map wetland habitats, their combined use is currently being assessed for different habitat types. The aim of this study is to evaluate the combination of multispectral and multiseasonal imagery and LiDAR data to precisely map the distribution of wetland habitats. The image classification was performed combining an object-based approach and decision-tree modeling. Four multispectral images with high (SPOT-5) and very high spatial resolution (Quickbird, KOMPSAT-2, aerial photographs) were classified separately. Another classification was then applied integrating summer and winter multispectral image data and three layers derived from LiDAR data: vegetation height, microtopography and intensity return. The comparison of classification results shows that some habitats are better identified on the winter image and others on the summer image (overall accuracies = 58.5 and 57.6%). They also point out that classification accuracy is highly improved (overall accuracy = 86.5%) when combining LiDAR data and multispectral images. Moreover, this study highlights the advantage of integrating vegetation height, microtopography and intensity parameters in the classification process. This article demonstrates that information provided by the synergistic use of multispectral images and LiDAR data can help in wetland functional assessment.
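A hedged sketch of the per-object classification step described above, assuming scikit-learn and a feature table in which each row is one image object carrying spectral means plus the three LiDAR-derived layers; the synthetic data and the plain DecisionTreeClassifier are illustrative, not the study's object-based rule set.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def classify_objects(features_train, labels_train, features_all, max_depth=8):
    """Train a decision tree on per-object features (summer/winter band means
    plus LiDAR vegetation height, microtopography, intensity) and predict a
    habitat class for every image object."""
    clf = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
    clf.fit(features_train, labels_train)
    return clf.predict(features_all)

# Tiny synthetic example: 200 training objects, 7 features, 4 habitat classes.
rng = np.random.default_rng(0)
X_train = rng.random((200, 7))
y_train = rng.integers(0, 4, 200)
X_all = rng.random((1000, 7))
habitat_labels = classify_objects(X_train, y_train, X_all)
```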
NASA Technical Reports Server (NTRS)
Manohar, Mareboyana; Tilton, James C.
1994-01-01
A progressive vector quantization (VQ) compression approach is discussed which decomposes image data into a number of levels using full search VQ. The final level is losslessly compressed, enabling lossless reconstruction. The computational difficulties are addressed by implementation on a massively parallel SIMD machine. We demonstrate progressive VQ on multispectral imagery obtained from the Advanced Very High Resolution Radiometer instrument and other Earth observation image data, and investigate the trade-offs in selecting the number of decomposition levels and codebook training method.
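A simplified sketch of the progressive decomposition follows, assuming NumPy and scipy.cluster.vq for codebook training: each level vector-quantizes the residual left by the previous level, and the final residual would then be losslessly coded for exact reconstruction. The block layout, codebook size, and number of levels are assumptions; the massively parallel SIMD implementation is not reproduced.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def progressive_vq(vectors, n_levels=3, codebook_size=256):
    """Multi-level VQ: quantize, subtract, and re-quantize the residual.

    vectors : (n_blocks, block_dim) array of image blocks (e.g. spectral
              vectors or flattened spatial blocks).
    Returns per-level (codebook, indices) plus the final residual, which
    would be losslessly compressed for exact reconstruction.
    """
    residual = vectors.astype(np.float64)
    levels = []
    for _ in range(n_levels):
        codebook, indices = kmeans2(residual, codebook_size, minit="points")
        levels.append((codebook, indices))
        residual = residual - codebook[indices]
    return levels, residual

def reconstruct(levels, residual=None):
    """Progressive reconstruction: add back one quantized level at a time."""
    out = sum(codebook[indices] for codebook, indices in levels)
    return out if residual is None else out + residual
```

Decoding only the first level gives a coarse image, each further level refines it, and adding the losslessly coded residual restores the original data exactly, which is the trade-off the abstract investigates.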
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Fuell, Kevin K.; LaFontaine, Frank; McGrath, Kevin; Smith, Matt
2013-01-01
Current and future satellite sensors provide remotely sensed quantities from a variety of wavelengths ranging from the visible to the passive microwave, from both geostationary and low-Earth orbits. The NASA Short-term Prediction Research and Transition (SPoRT) Center has a long history of providing multispectral imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard NASA's Terra and Aqua satellites in support of NWS forecast office activities. Products from MODIS have recently been extended to include a broader suite of multispectral imagery similar to those developed by EUMETSAT, based upon the spectral channels available from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) aboard METEOSAT-9. This broader suite includes products that discriminate between air mass types associated with synoptic-scale features, assist in the identification of dust, and improve upon paired channel difference detection of fog and low cloud events. Future instruments will continue the availability of these products and also expand upon current capabilities. The Advanced Baseline Imager (ABI) on GOES-R will improve the spectral, spatial, and temporal resolution of our current geostationary capabilities, and the recently launched Suomi National Polar-orbiting Partnership (S-NPP) carries instruments such as the Visible Infrared Imager Radiometer Suite (VIIRS), the Cross-track Infrared Sounder (CrIS), and the Advanced Technology Microwave Sounder (ATMS), which have unrivaled spectral and spatial resolution, as precursors to the JPSS era (i.e., the next generation of polar orbiting satellites). New applications from VIIRS extend multispectral composites available from MODIS and SEVIRI while adding new capabilities through incorporation of additional CrIS channels or information from the Near Constant Contrast or "Day-Night Band", which provides moonlit reflectance from clouds and detection of fires or city lights. This presentation will present a review of SPoRT, CIRA, and NRL collaborations regarding multispectral satellite imagery and recent applications within the operational forecasting environment.
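The paired channel difference mentioned above for fog and low-cloud detection is commonly formed from shortwave-infrared (~3.9 µm) and longwave-infrared (~11 µm) brightness temperatures. A hedged sketch, assuming the two brightness-temperature arrays are already calibrated and coregistered; the threshold value is an assumption, not a SPoRT product specification.

```python
import numpy as np

def fog_low_cloud_mask(bt_3p9_um, bt_11_um, threshold_k=-2.0):
    """Nighttime fog/low-stratus test: water-droplet clouds emit less
    efficiently at 3.9 um than at 11 um, so BT(3.9) - BT(11) becomes
    strongly negative over fog and low cloud."""
    diff = bt_3p9_um - bt_11_um
    return diff < threshold_k      # boolean mask of suspected fog/low cloud
```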
Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Nuristan mineral district, which has gem, lithium, and cesium deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. All available panchromatic images for this area had significant cloud and snow cover that precluded their use for resolution enhancement of the multispectral image data. Each of the four-band images within the 10-m image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Nuristan) and the WGS84 datum. The final image mosaics for the Nuristan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Sharpening advanced land imager multispectral data using a sensor model
Lemeshewsky, G.P.; ,
2005-01-01
The Advanced Land Imager (ALI) instrument on NASA's Earth Observing One (EO-1) satellite provides for nine spectral bands at 30m ground sample distance (GSD) and a 10m GSD panchromatic band. This report describes an image sharpening technique where the higher spatial resolution information of the panchromatic band is used to increase the spatial resolution of ALI multispectral (MS) data. To preserve the spectral characteristics, this technique combines reported deconvolution deblurring methods for the MS data with highpass filter-based fusion methods for the Pan data. The deblurring process uses the point spread function (PSF) model of the ALI sensor. Information includes calculation of the PSF from pre-launch calibration data. Performance was evaluated using simulated ALI MS data generated by degrading the spatial resolution of high resolution IKONOS satellite MS data. A quantitative measure of performance was the error between sharpened MS data and high resolution reference. This report also compares performance with that of a reported method that includes PSF information. Preliminary results indicate improved sharpening with the method reported here.
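A simplified sketch of the highpass-filter fusion half of this approach, assuming NumPy/SciPy: the 30-m MS band is upsampled to the 10-m panchromatic grid and the high-frequency detail of the panchromatic band is injected. The Gaussian kernel width and injection gain are assumptions, and the deconvolution step using the ALI PSF is not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def hpf_sharpen(ms_band_30m, pan_10m, scale=3, sigma=1.5, gain=1.0):
    """Highpass-filter fusion: upsampled MS band plus panchromatic detail."""
    # Upsample the 30-m MS band to the 10-m panchromatic grid.
    ms_up = zoom(ms_band_30m.astype(np.float64), scale, order=3)
    # Crop both arrays to a common shape in case of rounding differences.
    h = min(ms_up.shape[0], pan_10m.shape[0])
    w = min(ms_up.shape[1], pan_10m.shape[1])
    ms_up = ms_up[:h, :w]
    pan = pan_10m[:h, :w].astype(np.float64)
    # High-frequency component of the panchromatic band.
    pan_detail = pan - gaussian_filter(pan, sigma=sigma * scale)
    # Inject the detail; the gain could be tuned per band to limit
    # spectral distortion.
    return ms_up + gain * pan_detail
```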
NASA Astrophysics Data System (ADS)
Miecznik, Grzegorz; Shafer, Jeff; Baugh, William M.; Bader, Brett; Karspeck, Milan; Pacifici, Fabio
2017-05-01
WorldView-3 (WV-3) is a DigitalGlobe commercial, high resolution, push-broom imaging satellite with three instruments: visible and near-infrared VNIR consisting of panchromatic (0.3m nadir GSD) plus multi-spectral (1.2m), short-wave infrared SWIR (3.7m), and multi-spectral CAVIS (30m). Nine VNIR bands, which are on one instrument, are nearly perfectly registered to each other, whereas eight SWIR bands, belonging to the second instrument, are misaligned with respect to VNIR and to each other. Geometric calibration and ortho-rectification results in a VNIR/SWIR alignment which is accurate to approximately 0.75 SWIR pixel at 3.7m GSD, whereas inter-SWIR, band to band registration is 0.3 SWIR pixel. Numerous high resolution, spectral applications, such as object classification and material identification, require more accurate registration, which can be achieved by utilizing image processing algorithms, for example Mutual Information (MI). Although MI-based co-registration algorithms are highly accurate, implementation details for automated processing can be challenging. One particular challenge is how to compute bin widths of intensity histograms, which are fundamental building blocks of MI. We solve this problem by making the bin widths proportional to instrument shot noise. Next, we show how to take advantage of multiple VNIR bands, and improve registration sensitivity to image alignment. To meet this goal, we employ Canonical Correlation Analysis, which maximizes VNIR/SWIR correlation through an optimal linear combination of VNIR bands. Finally we explore how to register images corresponding to different spatial resolutions. We show that MI computed at a low-resolution grid is more sensitive to alignment parameters than MI computed at a high-resolution grid. The proposed modifications allow us to improve VNIR/SWIR registration to better than ¼ of a SWIR pixel, as long as terrain elevation is properly accounted for, and clouds and water are masked out.
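A hedged sketch of the mutual-information core described above, assuming NumPy: the joint-histogram bin width is tied to an estimated shot-noise level, and MI is evaluated over candidate integer shifts of a SWIR band against a (possibly CCA-combined) VNIR reference. The noise proportionality factor and the brute-force shift search are assumptions; DigitalGlobe's production algorithm is not reproduced.

```python
import numpy as np

def mutual_information(a, b, bin_width):
    """MI (in nats) of two equally shaped images with a fixed bin width."""
    bins_a = np.arange(a.min(), a.max() + bin_width, bin_width)
    bins_b = np.arange(b.min(), b.max() + bin_width, bin_width)
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=[bins_a, bins_b])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def best_shift(vnir_ref, swir, noise_sigma, max_shift=3):
    """Search integer (row, col) shifts maximising MI; bins follow shot noise."""
    bin_width = 2.0 * noise_sigma          # proportionality factor is a guess
    best, best_mi = (0, 0), -np.inf
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            shifted = np.roll(swir, (dr, dc), axis=(0, 1))
            mi = mutual_information(vnir_ref, shifted, bin_width)
            if mi > best_mi:
                best_mi, best = mi, (dr, dc)
    return best, best_mi
```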
Compressive hyperspectral and multispectral imaging fusion
NASA Astrophysics Data System (ADS)
Espitia, Óscar; Castillo, Sergio; Arguello, Henry
2016-05-01
Image fusion is a valuable framework which combines two or more images of the same scene from one or multiple sensors, improving the resolution of the images and increasing the interpretable content. In remote sensing, a common fusion problem consists of merging hyperspectral (HS) and multispectral (MS) images, which involve large amounts of redundant data because conventional acquisition ignores the highly correlated structure of the datacube along the spatial and spectral dimensions. Compressive HS and MS systems compress the spectral data in the acquisition step, reducing this redundancy by using different sampling patterns. This work presents a compressive HS and MS image fusion approach which uses a high-dimensional joint sparse model. The joint sparse model is formulated by combining the HS and MS compressive acquisition models. The high spectral and spatial resolution image is reconstructed by using sparse optimization algorithms. Different fusion spectral image scenarios are used to explore the performance of the proposed scheme. Several simulations with synthetic and real datacubes show promising results, as a reliable reconstruction of a high spectral and spatial resolution image can be achieved by using as little as 50% of the datacube.
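A toy sketch of the joint sparse formulation, assuming NumPy: the compressive HS and MS measurements are stacked into one linear system y = A x with A formed from the two sampling operators and a sparsifying basis, and the system is solved with plain ISTA soft-thresholding. The random matrices are stand-ins for the paper's structured sampling patterns, and the optimizer is not the one used by the authors.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.05, n_iter=500):
    """Minimise 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x

rng = np.random.default_rng(0)
n = 256                                              # sparse coefficient length
Psi = np.linalg.qr(rng.standard_normal((n, n)))[0]   # stand-in sparsifying basis
H_hs = rng.standard_normal((60, n))                  # stand-in compressive HS sampling
H_ms = rng.standard_normal((60, n))                  # stand-in compressive MS sampling
A = np.vstack([H_hs, H_ms]) @ Psi                    # joint sensing matrix

x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = rng.standard_normal(10)
y = A @ x_true                                       # stacked HS + MS measurements
x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```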
NASA Astrophysics Data System (ADS)
Zhao, Shaoshuai; Ni, Chen; Cao, Jing; Li, Zhengqiang; Chen, Xingfeng; Ma, Yan; Yang, Leiku; Hou, Weizhen; Qie, Lili; Ge, Bangyu; Liu, Li; Xing, Jin
2018-03-01
Remote sensing images are usually degraded by atmospheric components, especially aerosol particles. For quantitative remote sensing applications, radiative-transfer-model-based atmospheric correction is used to retrieve surface reflectance by decoupling the atmosphere and the surface, at the cost of long computation times. Parallel computing is one way to accelerate this processing. A parallel strategy in which multiple CPUs work simultaneously was designed to perform atmospheric correction of a multispectral remote sensing image. The flow of the parallel framework and the main parallel body of the atmospheric correction are described. A multispectral remote sensing image from the Chinese Gaofen-2 satellite is then used to test the acceleration efficiency. As the number of CPUs increases from 1 to 8, the computational speed increases as well; the largest speedup is 6.5, and with 8 CPUs the atmospheric correction of the whole image takes 4 minutes.
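The parallel pattern described above can be sketched in a few lines of Python with the standard multiprocessing module; the per-tile correction below is a dummy placeholder (a radiative-transfer or look-up-table retrieval would go there), and all names are hypothetical.

```python
import numpy as np
from multiprocessing import Pool

def correct_tile(tile):
    """Placeholder per-tile atmospheric correction; a real implementation
    would apply a radiative-transfer or LUT-based retrieval here."""
    return tile.astype(np.float32) * 1.0e-4      # dummy: scale DN to reflectance

def parallel_correction(image, n_cpu=8, n_tiles=64):
    """Split the image into row tiles and correct them on n_cpu workers."""
    tiles = np.array_split(image, n_tiles, axis=0)
    with Pool(processes=n_cpu) as pool:
        corrected = pool.map(correct_tile, tiles)
    return np.concatenate(corrected, axis=0)
```

On platforms that spawn worker processes, the call to `parallel_correction` should sit under an `if __name__ == "__main__":` guard.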
NASA Astrophysics Data System (ADS)
Corucci, Linda; Masini, Andrea; Cococcioni, Marco
2011-01-01
This paper addresses bathymetry estimation from high-resolution multispectral satellite images by proposing an accurate supervised method based on a neuro-fuzzy approach. The method is applied to two QuickBird images of the same area, acquired in different years and under different meteorological conditions, and is validated using ground-truth data. Performance is studied in different realistic situations of in situ data availability. The method achieves a mean standard deviation of 36.7 cm for estimated water depths in the range [-18, -1] m. When only data collected along a closed path are used as a training set, a mean standard deviation of 45 cm is obtained. The effect of both meteorological conditions and training-set size reduction on the overall performance is also investigated.
3D Land Cover Classification Based on Multispectral LIDAR Point Clouds
NASA Astrophysics Data System (ADS)
Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong
2016-06-01
A multispectral Lidar system can emit simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured by the receiver of the sensor, and the return signal, together with the position and orientation information of the sensor, is recorded. These recorded data are processed with GNSS/IMU data in post-processing, forming high-density multispectral 3D point clouds. As the first commercial multispectral airborne Lidar sensor, the Optech Titan system is capable of collecting point cloud data in all three channels: 532 nm visible (green), 1064 nm near infrared (NIR), and 1550 nm intermediate infrared (IR). It has become a new source of data for 3D land cover classification. The paper presents an Object Based Image Analysis (OBIA) approach that uses only multispectral Lidar point cloud datasets for 3D land cover classification. The approach consists of three steps. First, multispectral intensity images are segmented into image objects on the basis of multi-resolution segmentation integrating different scale parameters. Second, intensity objects are classified into nine categories using customized classification-index features and a combination of the multispectral reflectance with the vertical distribution of object features. Finally, accuracy assessment is conducted by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show high overall accuracy for most of the land cover types; an overall accuracy of over 90% is achieved using multispectral Lidar point clouds for 3D land cover classification.
Kim, Min-Gab; Kim, Jin-Yong
2018-05-01
In this paper, we introduce a method to overcome the limitations of thickness measurement of a micro-patterned thin film. A spectroscopic imaging reflectometer system consisting of an acousto-optic tunable filter, a charge-coupled-device camera, and a high-magnification objective lens was proposed, and a stack of multispectral images was generated. To secure improved accuracy and lateral resolution in the reconstruction of two-dimensional thin-film thickness, image restoration based on an iterative deconvolution algorithm was applied, prior to the analysis of the spectral reflectance profiles from each pixel of the multispectral images, to compensate for image degradation caused by blurring.
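The abstract does not name the iterative deconvolution algorithm; as one common choice, the sketch below applies Richardson-Lucy deconvolution from scikit-image to each wavelength slice of a multispectral stack. The stack and PSF arrays, and the choice of algorithm, are assumptions for illustration only.

```python
import numpy as np
from skimage.restoration import richardson_lucy

def deblur_stack(stack, psf, iterations=30):
    """Apply Richardson-Lucy deconvolution to every wavelength slice of a
    multispectral stack (shape: wavelengths x rows x cols)."""
    return np.stack([
        richardson_lucy(slice_, psf, num_iter=iterations)  # per-wavelength deblur
        for slice_ in stack
    ])
```

Note that recent scikit-image releases use the keyword `num_iter`; older versions used `iterations`.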
NASA Astrophysics Data System (ADS)
Dube, Timothy; Mutanga, Onisimo
2015-03-01
Aboveground biomass estimation is critical in understanding the forest contribution to regional carbon cycles. Despite the successful application of high spatial and spectral resolution sensors in aboveground biomass (AGB) estimation, there are challenges related to high acquisition costs, small area coverage, multicollinearity, and limited availability. These challenges hamper successful regional-scale AGB quantification. The aim of this study was to assess the utility of the newly launched medium-resolution multispectral Landsat 8 Operational Land Imager (OLI) dataset, with a large swath width, in quantifying AGB in a forest plantation. We applied different sets of spectral analysis (test I: spectral bands; test II: spectral vegetation indices; test III: spectral bands + spectral vegetation indices) in testing the utility of Landsat 8 OLI using two non-parametric algorithms: stochastic gradient boosting and random forest ensembles. The results of the study show that the medium-resolution multispectral Landsat 8 OLI dataset provides better AGB estimates for Eucalyptus dunii, Eucalyptus grandis and Pinus taeda, especially when the extracted spectral information is used together with the derived spectral vegetation indices. We also noted that incorporating the optimal subset of the most important selected medium-resolution multispectral Landsat 8 OLI bands improved AGB accuracies. We compared medium-resolution multispectral Landsat 8 OLI AGB estimates with Landsat 7 ETM+ estimates, and the latter yielded lower estimation accuracies. Overall, this study demonstrates the potential and strength of applying the relatively affordable and readily available, newly launched medium-resolution Landsat 8 OLI dataset, with a large swath width (185 km), in precisely estimating AGB. This strength of the Landsat OLI dataset is especially crucial in sub-Saharan Africa, where the availability of high-resolution remote sensing data remains a challenge.
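As a schematic of the kind of non-parametric regression tests described above (spectral bands, vegetation indices, or both as predictors), here is a hedged scikit-learn sketch in which entirely synthetic data stand in for plot-level OLI predictors and field-measured biomass.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((120, 10))           # hypothetical predictors: OLI bands + indices (e.g. NDVI)
y = rng.random(120) * 200.0         # hypothetical aboveground biomass values

for name, model in [("random forest", RandomForestRegressor(n_estimators=500, random_state=0)),
                    ("stochastic gradient boosting", GradientBoostingRegressor(subsample=0.7, random_state=0))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.2f}")
```

With real plot data, the three tests above would simply swap in different column subsets of X.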
NASA Technical Reports Server (NTRS)
Pagnutti, Mary
2006-01-01
This viewgraph presentation reviews the creation of a prototype algorithm for atmospheric correction using high spatial resolution earth observing imaging systems. The objective of the work was to evaluate accuracy of a prototype algorithm that uses satellite-derived atmospheric products to generate scene reflectance maps for high spatial resolution (HSR) systems. This presentation focused on preliminary results of only the satellite-based atmospheric correction algorithm.
Learning Low-Rank Decomposition for Pan-Sharpening With Spatial-Spectral Offsets.
Yang, Shuyuan; Zhang, Kai; Wang, Min
2017-08-25
Finding accurate injection components is the key issue in pan-sharpening methods. In this paper, a low-rank pan-sharpening (LRP) model is developed from a new perspective of offset learning. Two offsets are defined to represent the spatial and spectral differences between low-resolution multispectral and high-resolution multispectral (HRMS) images, respectively. In order to reduce spatial and spectral distortions, spatial equalization and spectral proportion constraints are designed and imposed on the offsets to develop a spatially and spectrally constrained stable low-rank decomposition algorithm via the augmented Lagrange multiplier method. By fine modeling and heuristic learning, our method can simultaneously reduce spatial and spectral distortions in the fused HRMS images. Moreover, by exploiting the low-rank and sparse characteristics of the data, our method can efficiently deal with noise and outliers in the source images. Extensive experiments are conducted on several image datasets, and the results demonstrate the efficiency of the proposed LRP.
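The decomposition above is solved with an augmented Lagrange multiplier (ALM) scheme; as a generic reference point, the sketch below shows a plain inexact-ALM low-rank plus sparse decomposition (robust PCA). The paper's spatial-equalization and spectral-proportion constraints are not reproduced, and the names and parameter choices are illustrative.

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding operator."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def lowrank_sparse_ialm(D, lam=None, n_iter=100):
    """Inexact-ALM decomposition D ~ L + S with L low-rank and S sparse."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = 1.25 / np.linalg.norm(D, 2)
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)                       # low-rank update
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)  # soft-threshold sparse part
        Y = Y + mu * (D - L - S)                                # dual update
    return L, S
```

In an LRP-style method, D would be built from the stacked multispectral bands and the learned offsets would be recovered from the decomposition under the additional constraints.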
Image sharpening for mixed spatial and spectral resolution satellite systems
NASA Technical Reports Server (NTRS)
Hallada, W. A.; Cox, S.
1983-01-01
Two methods of image sharpening (reconstruction) are compared. The first, a spatial filtering technique, extrapolates edge information from a high spatial resolution panchromatic band at 10 meters and adds it to the low spatial resolution narrow spectral bands. The second method, a color normalizing technique, is based on the ability to separate image hue and brightness components in spectral data. Using both techniques, multispectral images are sharpened from 30, 50, 70, and 90 meter resolutions. Error rates are calculated for the two methods and all sharpened resolutions. The results indicate that the color normalizing method is superior to the spatial filtering technique.
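The color-normalizing method referred to above is in the same family as the Brovey transform; the following minimal sketch illustrates that idea, assuming the MS bands have already been resampled to the panchromatic grid. Array names are hypothetical.

```python
import numpy as np

def brovey_sharpen(ms, pan, eps=1e-6):
    """Color-normalizing (Brovey-style) sharpening: scale each MS band by
    the ratio of the Pan band to the overall MS brightness, so hue is
    preserved while brightness detail comes from Pan.
    ms: (bands, rows, cols), already resampled to the Pan grid."""
    intensity = ms.mean(axis=0) + eps
    return ms * (pan / intensity)[None, :, :]
```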
Mapping lipid and collagen by multispectral photoacoustic imaging of chemical bond vibration
NASA Astrophysics Data System (ADS)
Wang, Pu; Wang, Ping; Wang, Han-Wei; Cheng, Ji-Xin
2012-09-01
Photoacoustic microscopy using vibrational overtone absorption as a contrast mechanism allows bond-selective imaging of deep tissues. Due to the spectral similarity of molecules in the region of overtone vibration, it is difficult to interrogate chemical components using photoacoustic signal at single excitation wavelength. Here we demonstrate that lipids and collagen, two critical markers for many kinds of diseases, can be distinguished by multispectral photoacoustic imaging of the first overtone of C-H bond. A phantom consisting of rat-tail tendon and fat was constructed to demonstrate this technique. Wavelengths between 1650 and 1850 nm were scanned to excite both the first overtone and combination bands of C-H bonds. B-scan multispectral photoacoustic images, in which each pixel contains a spectrum, were analyzed by a multivariate curve resolution-alternating least squares algorithm to recover the spatial distribution of collagen and lipids in the phantom.
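For readers unfamiliar with multivariate curve resolution-alternating least squares (MCR-ALS), the toy sketch below shows the basic alternating update for a pixels-by-wavelengths data matrix. Real MCR-ALS implementations add normalization, convergence checks, and constraints beyond the simple non-negativity clipping used here; all names are illustrative.

```python
import numpy as np

def mcr_als(D, n_components=2, n_iter=200, seed=0):
    """Toy MCR-ALS: factor D (pixels x wavelengths) into non-negative
    concentration maps C and component spectra S so that D ~ C @ S."""
    rng = np.random.default_rng(seed)
    S = rng.random((n_components, D.shape[1]))           # initial spectra guess
    for _ in range(n_iter):
        C = np.clip(D @ np.linalg.pinv(S), 0.0, None)    # update concentrations
        S = np.clip(np.linalg.pinv(C) @ D, 0.0, None)    # update spectra
    return C, S
```

For a phantom like the one described above, the two recovered components would correspond to the lipid and collagen first-overtone C-H signatures, and reshaping the columns of C to the image grid gives their spatial maps.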
NASA Technical Reports Server (NTRS)
Settle, M.; Adams, J.
1982-01-01
Improved orbital imaging capabilities were evaluated from the standpoint of different scientific disciplines, such as geology, botany, hydrology, and geography. A discussion of how geologists might exploit the anticipated measurement capabilities of future orbital imaging systems to discriminate and characterize different types of geologic materials exposed at the Earth's surface is presented. The principal objectives are to summarize past accomplishments in the use of multispectral imaging techniques for lithologic mapping; to identify critical gaps in earlier research efforts that currently limit the ability to extract useful information about the physical and chemical characteristics of geological materials from orbital multispectral surveys; and to define major thresholds of resolution and sensitivity within the visible and infrared portions of the electromagnetic spectrum which, if achieved, would result in significant improvement in our ability to discriminate and characterize different geological materials exposed at the Earth's surface.
Polarimetric Multispectral Imaging Technology
NASA Technical Reports Server (NTRS)
Cheng, L.-J.; Chao, T.-H.; Dowdy, M.; Mahoney, C.; Reyes, G.
1993-01-01
The Jet Propulsion Laboratory is developing a remote sensing technology on which a new generation of compact, lightweight, high-resolution, low-power, reliable, versatile, programmable scientific polarimetric multispectral imaging instruments can be built to meet the challenge of future planetary exploration missions. The instrument is based on the fast programmable acousto-optic tunable filter (AOTF) of tellurium dioxide (TeO2) that operates in the wavelength range of 0.4-5 microns. Basically, the AOTF multispectral imaging instrument measures incoming light intensity as a function of spatial coordinates, wavelength, and polarization. Its operation can be in either sequential, random access, or multiwavelength mode as required. This provides observation flexibility, allowing real-time alternation among desired observations, collecting needed data only, minimizing data transmission, and permitting implementation of new experiments. These will result in optimization of the mission performance with minimal resources. Recently we completed a polarimetric multispectral imaging prototype instrument and performed outdoor field experiments for evaluating application potentials of the technology. We also investigated potential improvements on AOTF performance to strengthen technology readiness for applications. This paper will give a status report on the technology and a prospect toward future planetary exploration.
NASA Astrophysics Data System (ADS)
Yeom, J. M.
2017-12-01
The recently developed Korea Multi-Purpose Satellite-3A (KOMPSAT-3A), a continuation of the KOMPSAT-1, 2, and 3 earth observation satellite (EOS) programs of the Korea Aerospace Research Institute (KARI), was launched on March 25, 2015 on a Dnepr-1 launch vehicle from the Jasny Dombarovsky site in Russia. After launch, KARI performed in-orbit tests (IOT), including radiometric calibration, from 14 April to 4 September 2015. KOMPSAT-3A is equipped with two distinctive sensors: a high-resolution multispectral optical sensor, the Advanced Earth Image Sensor System-A (AEISS-A), and the Scanner Infrared Imaging System (SIIS). In this study, we focus on the radiometric calibration of AEISS-A. The multispectral bands of AEISS-A cover three visible regions, blue (450-520 nm), green (520-600 nm), and red (630-690 nm), plus one near-infrared band (760-900 nm), at a 2.0 m spatial resolution at nadir, whereas the panchromatic imagery (450-900 nm) has a 0.5 m resolution. The spectral response functions are the same as those of the KOMPSAT-3 multispectral and panchromatic bands, but the spatial resolutions are improved. The main mission of KOMPSAT-3A is to provide imagery for Geographical Information System (GIS) applications in environmental, agricultural, and oceanographic sciences, as well as natural hazard monitoring.
Retinex Preprocessing for Improved Multi-Spectral Image Classification
NASA Technical Reports Server (NTRS)
Thompson, B.; Rahman, Z.; Park, S.
2000-01-01
The goal of multi-image classification is to identify and label "similar regions" within a scene. The ability to correctly classify a remotely sensed multi-image of a scene is affected by the ability of the classification process to adequately compensate for the effects of atmospheric variations and sensor anomalies. Better classification may be obtained if the multi-image is preprocessed before classification, so as to reduce the adverse effects of image formation. In this paper, we discuss the overall impact on multi-spectral image classification when the retinex image enhancement algorithm is used to preprocess multi-spectral images. The retinex is a multi-purpose image enhancement algorithm that performs dynamic range compression, reduces the dependence on lighting conditions, and generally enhances apparent spatial resolution. The retinex has been successfully applied to the enhancement of many different types of grayscale and color images. We show in this paper that retinex preprocessing improves the spatial structure of multi-spectral images and thus provides better within-class variations than would otherwise be obtained without the preprocessing. For a series of multi-spectral images obtained with diffuse and direct lighting, we show that without retinex preprocessing the class spectral signatures vary substantially with the lighting conditions. Whereas multi-dimensional clustering without preprocessing produced one-class homogeneous regions, the classification on the preprocessed images produced multi-class non-homogeneous regions. This lack of homogeneity is explained by the interaction between the different agronomic treatments applied to the regions: the preprocessed images are closer to ground truth. The principal advantage that the retinex offers is that, for different lighting conditions, classifications derived from the retinex-preprocessed images look remarkably "similar", and thus more consistent, whereas classifications derived from the original images, without preprocessing, are much less similar.
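As a rough illustration of the retinex idea (not NASA's full multiscale retinex with color restoration), the sketch below computes a simplified multiscale retinex for a single band as the log ratio of the image to Gaussian-blurred surrounds at several scales. Scale choices and names are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(band, sigmas=(15, 80, 250), eps=1e-6):
    """Simplified multiscale retinex for one band: average of
    log(image) - log(surround) over several Gaussian surround scales."""
    band = band.astype(np.float64) + eps
    out = np.zeros_like(band)
    for sigma in sigmas:
        out += np.log(band) - np.log(gaussian_filter(band, sigma) + eps)
    return out / len(sigmas)
```

Each spectral band would be preprocessed this way before clustering or classification.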
NASA Astrophysics Data System (ADS)
Sahoo, Sujit Kumar; Tang, Dongliang; Dang, Cuong
2018-02-01
Large field-of-view multispectral imaging through a scattering medium is a fundamental quest in the optics community. It has gained special attention from researchers in recent years for its wide range of potential applications. However, the main bottlenecks of current imaging systems are the requirements on specific illumination, poor image quality, and a limited field of view. In this work, we demonstrate single-shot, high-resolution colour imaging through scattering media using a monochromatic camera. This novel imaging technique is enabled by the spatial and spectral decorrelation properties and the optical memory effect of the scattering media. Moreover, the use of deconvolution image processing removes the drawbacks that arise from iterative refocusing, scanning, or phase-retrieval procedures.
Utilization of LANDSAT images in cartography
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Alburquerque, P. C. G.
1981-01-01
The use of multispectral imagery obtained from LANDSAT for mapping purposes is discussed with emphasis on geometric rectification, image resolution, and systematic topographic mapping. A method is given for constructing 1:250,000 scale maps. The limitations for satellite cartography are examined.
Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.
2010-01-01
Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475
Coastal modification of a scene employing multispectral images and vector operators.
Lira, Jorge
2017-05-01
Changes in sea level, wind patterns, sea current patterns, and tide patterns have produced morphologic transformations in the coastline area of Tamaulipas State in northeast Mexico. Such changes generated a modification of the coastline and variations of the texture-relief and texture of the continental area of Tamaulipas. Two high-resolution multispectral Satellite Pour l'Observation de la Terre (SPOT) images were employed to quantify the morphologic change of this continental area. The images cover a time span of close to 10 years. A variant of principal component analysis was used to delineate the modification of the land-water line. To quantify changes in texture-relief and texture, principal component analysis was applied to the multispectral images. The first principal components of each image were modeled as a discrete bidimensional vector field. The divergence and Laplacian vector operators were applied to the discrete vector field. The divergence provided the change of texture, while the Laplacian produced the change of texture-relief in the area of study.
Whole-body and multispectral photoacoustic imaging of adult zebrafish
NASA Astrophysics Data System (ADS)
Huang, Na; Xi, Lei
2016-10-01
Zebrafish is a top vertebrate model for studying developmental biology and genetics, and it is becoming increasingly popular for studying human diseases due to its high genome similarity to humans and its optical transparency in embryonic stages. However, it is difficult for pure optical imaging techniques to volumetrically visualize the internal organs and structures of wild-type zebrafish in juvenile and adult stages with excellent resolution and penetration depth. Even with the establishment of mutant lines that remain transparent over the life cycle, it is still a challenge for pure optical imaging modalities to image the whole body of adult zebrafish with micro-scale resolution. Photoacoustic imaging, which combines the advantages of optical and ultrasonic imaging, provides a new way to image the whole body of the zebrafish. In this work, we developed a non-invasive photoacoustic imaging system with optimized near-infrared illumination and cylindrical scanning to image the zebrafish. The lateral and axial resolutions are 80 μm and 600 μm, respectively. A multispectral strategy with wavelengths from 690 nm to 930 nm was employed to image various organs inside the zebrafish. From the reconstructed images, most major organs and structures inside the body can be precisely imaged. Quantitative and statistical analysis of organ absorption under illumination at different wavelengths was carried out.
Zhao, Yong-guang; Ma, Ling-ling; Li, Chuan-rong; Zhu, Xiao-hua; Tang, Ling-li
2015-07-01
Due to the lack of a sufficient number of spectral bands in multi-spectral sensors, it is difficult to reconstruct a surface reflectance spectrum from the finite spectral information acquired by a multi-spectral instrument. Here, taking full account of pixel heterogeneity in remote sensing images, a method is proposed to simulate hyperspectral data from multispectral data based on a canopy radiative transfer model. The method first assumes that mixed pixels contain two types of land cover, i.e., vegetation and soil. The sensitive parameters of the Soil-Leaf-Canopy (SLC) model and a soil ratio factor were retrieved from multi-spectral data using Look-Up Table (LUT) technology. Then, combined with the soil ratio factor, all the parameters were input into the SLC model to simulate the surface reflectance spectrum from 400 to 2400 nm. Taking a Landsat Enhanced Thematic Mapper Plus (ETM+) image as the reference image, the surface reflectance spectrum was simulated. The simulated reflectance spectra revealed different feature information for different surface types. To test the performance of the method, the simulated reflectance spectra were convolved with the Landsat ETM+ spectral response curves and the Moderate Resolution Imaging Spectroradiometer (MODIS) spectral response curves to obtain simulated Landsat ETM+ and MODIS images. Finally, the simulated Landsat ETM+ and MODIS images were compared with the observed Landsat ETM+ and MODIS images. The results generally showed high correlation coefficients (Landsat: 0.90-0.99, MODIS: 0.74-0.85) between most simulated and observed bands, indicating that the reflectance spectrum was well simulated and reliable.
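The convolution of a simulated reflectance spectrum with a sensor's spectral response curve, used above to produce simulated ETM+ and MODIS bands, reduces to a response-weighted average; a minimal sketch with hypothetical arrays follows.

```python
import numpy as np

def simulate_band(wavelengths, reflectance, srf):
    """Band-average a reflectance spectrum with a sensor's relative
    spectral response function (SRF); all arrays share `wavelengths`."""
    w = np.gradient(wavelengths) * srf        # SRF-weighted integration weights
    return np.sum(reflectance * w) / np.sum(w)

# Example: wavelengths in nm, a flat 0.3 reflectance, and a boxcar SRF
wl = np.arange(400, 2401, 1.0)
rho = np.full_like(wl, 0.3)
srf = ((wl >= 630) & (wl <= 690)).astype(float)   # crude stand-in for a red band
print(simulate_band(wl, rho, srf))                 # -> 0.3
```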
LANDSAT-4 Scientific Characterization: Early Results Symposium
NASA Technical Reports Server (NTRS)
1983-01-01
Radiometric calibration, geometric accuracy, spatial and spectral resolution, and image quality are examined for the thematic mapper and the multispectral band scanner on LANDSAT 4. Sensor performance is evaluated.
Multi-Image Registration for an Enhanced Vision System
NASA Technical Reports Server (NTRS)
Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn
2002-01-01
An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
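The second registration method above (geometric correction from user-selected control points plus regression analysis) can be illustrated by a least-squares affine fit; the function names below are illustrative and not taken from the EVS software.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping src control points to dst
    control points; both are (N, 2) arrays of pixel coordinates."""
    n = src_pts.shape[0]
    M = np.hstack([src_pts, np.ones((n, 1))])        # design matrix [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(M, dst_pts, rcond=None)
    return coeffs.T                                  # 2 x 3 affine matrix

def apply_affine(A, pts):
    """Apply the fitted 2x3 affine matrix to (N, 2) points."""
    return pts @ A[:, :2].T + A[:, 2]
```

The fitted transform would then drive a resampling of one sensor's image onto the other's grid before fusion.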
Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu
2018-03-02
Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.
Techniques for automatic large scale change analysis of temporal multispectral imagery
NASA Astrophysics Data System (ADS)
Mercovich, Ryan A.
Change detection in remotely sensed imagery is a multi-faceted problem with a wide variety of desired solutions. Automatic change detection and analysis to assist in the coverage of large areas at high resolution is a popular area of research in the remote sensing community. Beyond basic change detection, the analysis of change is essential to provide results that positively impact an image analyst's job when examining potentially changed areas. Present change detection algorithms are geared toward low resolution imagery, and require analyst input to provide anything more than a simple pixel level map of the magnitude of change that has occurred. One major problem with this approach is that change occurs in such large volume at small spatial scales that a simple change map is no longer useful. This research strives to create an algorithm based on a set of metrics that performs a large area search for change in high resolution multispectral image sequences and utilizes a variety of methods to identify different types of change. Rather than simply mapping the magnitude of any change in the scene, the goal of this research is to create a useful display of the different types of change in the image. The techniques presented in this dissertation are used to interpret large area images and provide useful information to an analyst about small regions that have undergone specific types of change while retaining image context to make further manual interpretation easier. This analyst cueing to reduce information overload in a large area search environment will have an impact in the areas of disaster recovery, search and rescue situations, and land use surveys among others. By utilizing a feature based approach founded on applying existing statistical methods and new and existing topological methods to high resolution temporal multispectral imagery, a novel change detection methodology is produced that can automatically provide useful information about the change occurring in large area and high resolution image sequences. The change detection and analysis algorithm developed could be adapted to many potential image change scenarios to perform automatic large scale analysis of change.
Wide field-of-view dual-band multispectral muzzle flash detection
NASA Astrophysics Data System (ADS)
Montoya, J.; Melchor, J.; Spiliotis, P.; Taplin, L.
2013-06-01
Sensor technologies are undergoing revolutionary advances, as seen in the rapid growth of multispectral methodologies. Increases in spatial, spectral, and temporal resolution, and in breadth of spectral coverage, render feasible sensors that function with unprecedented performance. A system was developed that addresses many of the key hardware requirements for a practical dual-band multispectral acquisition system, including wide field of view and spectral/temporal shift between dual bands. The system was designed using a novel dichroic beam splitter and dual band-pass filter configuration that creates two side-by-side images of a scene on a single sensor. A high-speed CMOS sensor was used to simultaneously capture data from the entire scene in both spectral bands using a short focal-length lens that provided a wide field-of-view. The beam-splitter components were arranged such that the two images were maintained in optical alignment and real-time intra-band processing could be carried out using only simple arithmetic on the image halves. An experiment related to limitations of the system to address multispectral detection requirements was performed. This characterized the system's low spectral variation across its wide field of view. This paper provides lessons learned on the general limitation of key hardware components required for multispectral muzzle flash detection, using the system as a hardware example combined with simulated multispectral muzzle flash and background signatures.
Design and development of an airborne multispectral imaging system
NASA Astrophysics Data System (ADS)
Kulkarni, Rahul R.; Bachnak, Rafic; Lyle, Stacey; Steidley, Carl W.
2002-08-01
Advances in imaging technology and sensors have made airborne remote sensing systems viable for many applications that require reasonably good resolution at low cost. Digital cameras are making their mark on the market by providing high resolution at very high rates. This paper describes an aircraft-mounted imaging system (AMIS) that is being designed and developed at Texas A&M University-Corpus Christi (A&M-CC) with the support of a grant from NASA. The approach is to first develop and test a one-camera system that will be upgraded into a five-camera system that offers multi-spectral capabilities. AMIS will be low cost, rugged, portable and has its own battery power source. Its immediate use will be to acquire images of the Coastal area in the Gulf of Mexico for a variety of studies covering vast spectra from near ultraviolet region to near infrared region. This paper describes AMIS and its characteristics, discusses the process for selecting the major components, and presents the progress.
Comparing Individual Tree Segmentation Based on High Resolution Multispectral Image and Lidar Data
NASA Astrophysics Data System (ADS)
Xiao, P.; Kelly, M.; Guo, Q.
2014-12-01
This study compares the use of high-resolution multispectral WorldView images and high density Lidar data for individual tree segmentation. The application focuses on coniferous and deciduous forests in the Sierra Nevada Mountains. The tree objects are obtained in two ways: a hybrid region-merging segmentation method with multispectral images, and a top-down and bottom-up region-growing method with Lidar data. The hybrid region-merging method is used to segment individual tree from multispectral images. It integrates the advantages of global-oriented and local-oriented region-merging strategies into a unified framework. The globally most-similar pair of regions is used to determine the starting point of a growing region. The merging iterations are constrained within the local vicinity, thus the segmentation is accelerated and can reflect the local context. The top-down region-growing method is adopted in coniferous forest to delineate individual tree from Lidar data. It exploits the spacing between the tops of trees to identify and group points into a single tree based on simple rules of proximity and likely tree shape. The bottom-up region-growing method based on the intensity and 3D structure of Lidar data is applied in deciduous forest. It segments tree trunks based on the intensity and topological relationships of the points, and then allocate other points to exact tree crowns according to distance. The accuracies for each method are evaluated with field survey data in several test sites, covering dense and sparse canopy. Three types of segmentation results are produced: true positive represents a correctly segmented individual tree, false negative represents a tree that is not detected and assigned to a nearby tree, and false positive represents that a point or pixel cluster is segmented as a tree that does not in fact exist. They respectively represent correct-, under-, and over-segmentation. Three types of index are compared for segmenting individual tree from multispectral image and Lidar data: recall, precision and F-score. This work explores the tradeoff between the expensive Lidar data and inexpensive multispectral image. The conclusion will guide the optimal data selection in different density canopy areas for individual tree segmentation, and contribute to the field of forest remote sensing.
Area-to-point regression kriging for pan-sharpening
NASA Astrophysics Data System (ADS)
Wang, Qunming; Shi, Wenzhong; Atkinson, Peter M.
2016-04-01
Pan-sharpening is a technique to combine the fine spatial resolution panchromatic (PAN) band with the coarse spatial resolution multispectral bands of the same satellite to create a fine spatial resolution multispectral image. In this paper, area-to-point regression kriging (ATPRK) is proposed for pan-sharpening. ATPRK considers the PAN band as the covariate. Moreover, ATPRK is extended with a local approach, called adaptive ATPRK (AATPRK), which fits a regression model using a local, non-stationary scheme such that the regression coefficients change across the image. The two geostatistical approaches, ATPRK and AATPRK, were compared to the 13 state-of-the-art pan-sharpening approaches summarized in Vivone et al. (2015) in experiments on three separate datasets. ATPRK and AATPRK produced more accurate pan-sharpened images than the 13 benchmark algorithms in all three experiments. Unlike the benchmark algorithms, the two geostatistical solutions precisely preserved the spectral properties of the original coarse data. Furthermore, ATPRK can be enhanced by the local scheme in AATPRK in cases where the residuals from a global regression model have a spatial character that varies locally.
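ATPRK's first stage is a regression of the coarse MS band against a coarsened version of the PAN covariate; the sketch below shows that stage only, returning the coarse residuals that the full method would subsequently downscale by area-to-point kriging (omitted here). The names and the degradation filter are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def atprk_regression_stage(ms_band, pan, ratio):
    """Regression stage of ATPRK: fit the coarse MS band to the coarsened
    PAN band, predict at the fine (PAN) grid, and return the coarse-scale
    residuals that ATPRK would then krige to the fine grid."""
    pan_coarse = uniform_filter(pan, ratio)[::ratio, ::ratio]   # degrade PAN to the MS grid
    a, b = np.polyfit(pan_coarse.ravel(), ms_band.ravel(), 1)   # ordinary least squares fit
    fine_prediction = a * pan + b                               # regression part at PAN resolution
    coarse_residual = ms_band - (a * pan_coarse + b)            # left for area-to-point kriging
    return fine_prediction, coarse_residual
```

In AATPRK the same fit would be repeated in local windows so the coefficients a and b vary across the image.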
CMOS Time-Resolved, Contact, and Multispectral Fluorescence Imaging for DNA Molecular Diagnostics
Guo, Nan; Cheung, Ka Wai; Wong, Hiu Tung; Ho, Derek
2014-01-01
Instrumental limitations such as bulkiness and high cost prevent the fluorescence technique from becoming ubiquitous for point-of-care deoxyribonucleic acid (DNA) detection and other in-field molecular diagnostics applications. The complementary metal-oxide-semiconductor (CMOS) technology, benefiting from process scaling, provides several advanced capabilities such as high integration density, high-resolution signal processing, and low power consumption, enabling sensitive, integrated, and low-cost fluorescence analytical platforms. In this paper, CMOS time-resolved, contact, and multispectral imaging are reviewed. Recently reported CMOS fluorescence analysis microsystem prototypes are surveyed to highlight the present state of the art. PMID:25365460
NASA Astrophysics Data System (ADS)
Shinde, Anant; Perinchery, Sandeep Menon; Murukeshan, Vadakke Matham
2017-04-01
An optical imaging probe with targeted multispectral and spatiotemporal illumination features has applications in many diagnostic biomedical studies. However, these systems are mostly adapted in conventional microscopes, limiting their use for in vitro applications. We present a variable resolution imaging probe using a digital micromirror device (DMD) with an achievable maximum lateral resolution of 2.7 μm and an axial resolution of 5.5 μm, along with precise shape selective targeted illumination ability. We have demonstrated switching of different wavelengths to image multiple regions in the field of view. Moreover, the targeted illumination feature allows enhanced image contrast by time averaged imaging of selected regions with different optical exposure. The region specific multidirectional scanning feature of this probe has facilitated high speed targeted confocal imaging.
NASA Technical Reports Server (NTRS)
Allen, Carlton; Jakes, Petr; Jaumann, Ralf; Marshall, John; Moses, Stewart; Ryder, Graham; Saunders, Stephen; Singer, Robert
1996-01-01
The field geology/process group examined the basic operations of a terrestrial field geologist and the manner in which these operations could be transferred to a planetary lander. Four basic requirements for robotic field geology were determined: geologic content; surface vision; mobility; and manipulation. Geologic content requires a combination of orbital and descent imaging. Surface vision requirements include range, resolution, stereo, and multispectral imaging. The minimum mobility for useful field geology depends on the scale of orbital imagery. Manipulation requirements include exposing unweathered surfaces, screening samples, and bringing samples in contact with analytical instruments. To support these requirements, several advanced capabilities for future development are recommended. Capabilities include near-infrared reflectance spectroscopy, hyper-spectral imaging, multispectral microscopy, artificial intelligence in support of imaging, x ray diffraction, x ray fluorescence, and rock chipping.
The development of a specialized processor for a space-based multispectral earth imager
NASA Astrophysics Data System (ADS)
Khedr, Mostafa E.
2008-10-01
This work was done in the Department of Computer Engineering, Lvov Polytechnic National University, Lvov, Ukraine, as a thesis entitled "Space Imager Computer System for Raw Video Data Processing" [1]. This work describes the synthesis and practical implementation of a specialized computer system for raw data control and processing onboard a satellite multispectral earth imager. The computer system is intended for satellites with resolution in the range of one meter and 12-bit precision. The design is based mostly on general off-the-shelf components such as field-programmable gate arrays (FPGAs), plus custom-designed software for interfacing with a PC and test equipment. The designed system was successfully manufactured and is now fully functioning in orbit.
NASA Technical Reports Server (NTRS)
1982-01-01
Evaluation of the combined utility of narrowband and multispectral imaging in both the infrared and visible for the lithologic identification of geologic materials, and of the combined utility of multispectral imaging in the visible and infrared for lithologic mapping on a global basis, are near-term recommendations for future imaging capabilities. Long-term recommendations include laboratory research into methods of field sampling and theoretical models of microscale mixing. The utility of improved spatial and spectral resolutions and radiometric sensitivity is also suggested for the long term. Geobotanical remote sensing research should be conducted to (1) separate geological and botanical spectral signatures in individual picture elements; (2) study geobotanical correlations that more fully simulate natural conditions; and (3) use test sites designed to test specific geobotanical hypotheses.
Monitoring algal blooms in drinking water reservoirs using the Landsat-8 Operational Land Imager
In this study, we demonstrated that the Landsat-8 Operational Land Imager (OLI) sensor is a powerful tool that can provide periodic and system-wide information on the condition of drinking water reservoirs. The OLI is a multispectral radiometer (30 m spatial resolution) that allo...
2015-10-15
This high-resolution image captured by NASA's New Horizons spacecraft combines blue, red and infrared images taken by the Ralph/Multispectral Visual Imaging Camera (MVIC). The bright expanse is the western lobe of the "heart," informally called Sputnik Planum, which has been found to be rich in nitrogen, carbon monoxide and methane ices. http://photojournal.jpl.nasa.gov/catalog/PIA20007
Tamouridou, Afroditi A; Alexandridis, Thomas K; Pantazi, Xanthoula E; Lagopodi, Anastasia L; Kashefi, Javid; Kasampalis, Dimitris; Kontouris, Georgios; Moshou, Dimitrios
2017-10-11
Remote sensing techniques are routinely used in plant species discrimination and weed mapping. In the presented work, successful Silybum marianum detection and mapping using multilayer neural networks is demonstrated. A multispectral camera (green-red-near infrared) attached to a fixed-wing unmanned aerial vehicle (UAV) was utilized for the acquisition of high-resolution images (0.1 m resolution). The Multilayer Perceptron with Automatic Relevance Determination (MLP-ARD) was used to identify S. marianum among other vegetation, mostly Avena sterilis L. The three spectral bands of Red, Green, and Near Infrared (NIR) and the texture layer resulting from local variance were used as input. The S. marianum identification rate using MLP-ARD reached an accuracy of 99.54%. The study had a one-year duration, meaning that the results are specific to that year, although the accuracy shows the interesting potential of S. marianum mapping with MLP-ARD on multispectral UAV imagery.
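Scikit-learn does not provide ARD priors for its multilayer perceptron, so the sketch below uses a plain MLPClassifier as a stand-in for MLP-ARD, with synthetic per-pixel features (three bands plus a local-variance texture layer) mirroring the inputs described above. All data and names are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((5000, 4))                 # green, red, NIR reflectance + local-variance texture
y = rng.integers(0, 2, 5000)              # 1 = S. marianum, 0 = other vegetation

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0)  # plain MLP, no ARD prior
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```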
Multispectral imaging of the ocular fundus using light emitting diode illumination
NASA Astrophysics Data System (ADS)
Everdell, N. L.; Styles, I. B.; Calcagni, A.; Gibson, J.; Hebden, J.; Claridge, E.
2010-09-01
We present an imaging system based on light emitting diode (LED) illumination that produces multispectral optical images of the human ocular fundus. It uses a conventional fundus camera equipped with a high power LED light source and a highly sensitive electron-multiplying charge coupled device camera. It is able to take pictures at a series of wavelengths in rapid succession at short exposure times, thereby eliminating the image shift introduced by natural eye movements (saccades). In contrast with snapshot systems the images retain full spatial resolution. The system is not suitable for applications where the full spectral resolution is required as it uses discrete wavebands for illumination. This is not a problem in retinal imaging where the use of selected wavelengths is common. The modular nature of the light source allows new wavelengths to be introduced easily and at low cost. The use of wavelength-specific LEDs as a source is preferable to white light illumination and subsequent filtering of the remitted light as it minimizes the total light exposure of the subject. The system is controlled via a graphical user interface that enables flexible control of intensity, duration, and sequencing of sources in synchrony with the camera. Our initial experiments indicate that the system can acquire multispectral image sequences of the human retina at exposure times of 0.05 s in the range of 500-620 nm with mean signal to noise ratio of 17 dB (min 11, std 4.5), making it suitable for quantitative analysis with application to the diagnosis and screening of eye diseases such as diabetic retinopathy and age-related macular degeneration.
Alexandridis, Thomas K; Tamouridou, Afroditi Alexandra; Pantazi, Xanthoula Eirini; Lagopodi, Anastasia L; Kashefi, Javid; Ovakoglou, Georgios; Polychronos, Vassilios; Moshou, Dimitrios
2017-09-01
In the present study, the detection and mapping of Silybum marianum (L.) Gaertn. weed using novelty detection classifiers is reported. A multispectral camera (green-red-NIR) on board a fixed-wing unmanned aerial vehicle (UAV) was employed for obtaining high-resolution images. Four novelty detection classifiers were used to identify S. marianum among other vegetation in a field: One-Class Support Vector Machine (OC-SVM), One-Class Self-Organizing Maps (OC-SOM), Autoencoders, and One-Class Principal Component Analysis (OC-PCA). The three spectral bands and texture were used as input features to the novelty detection classifiers. S. marianum identification using OC-SVM reached an overall accuracy of 96%. The results show the feasibility of effective S. marianum mapping by means of novelty detection classifiers acting on multispectral UAV imagery.
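As a concrete example of one-class novelty detection on per-pixel features of this kind, the sketch below trains scikit-learn's OneClassSVM on pixels assumed to belong to the target species and then screens the rest of the scene. Data, feature layout, and parameter values are hypothetical.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_target = rng.random((2000, 4))          # training pixels labelled S. marianum (3 bands + texture)
X_scene = rng.random((10000, 4))          # all scene pixels to be screened

oc_svm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_target)
is_target = oc_svm.predict(X_scene) == 1  # +1 means "looks like the training class"
print("pixels flagged as S. marianum:", int(is_target.sum()))
```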
Lu, Dengsheng; Batistella, Mateus; de Miranda, Evaristo E; Moran, Emilio
2008-01-01
Complex forest structure and abundant tree species in the moist tropical regions often cause difficulties in classifying vegetation classes with remotely sensed data. This paper explores improvement in vegetation classification accuracies through a comparative study of different image combinations based on the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data, as well as the combination of spectral signatures and textures. A maximum likelihood classifier was used to classify the different image combinations into thematic maps. This research indicated that data fusion based on HRG multispectral and panchromatic data slightly improved vegetation classification accuracies: a 3.1 to 4.6 percent increase in the kappa coefficient compared with the classification results based on original HRG or TM multispectral images. A combination of HRG spectral signatures and two textural images improved the kappa coefficient by 6.3 percent compared with pure HRG multispectral images. The textural images based on entropy or second-moment texture measures with a window size of 9 pixels × 9 pixels played an important role in improving vegetation classification accuracy. Overall, optical remote-sensing data are still insufficient for accurate vegetation classifications in the Amazon basin.
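The entropy and second-moment textures mentioned above are standard grey-level co-occurrence matrix (GLCM) measures; a slow but straightforward sketch of a sliding-window angular-second-moment (ASM) texture image using scikit-image is shown below. The window size, quantization levels, and names are illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def asm_texture(band, window=9, levels=32):
    """Second-moment (ASM) texture image computed in a sliding window,
    similar in spirit to the textures combined with HRG bands above."""
    q = np.digitize(band, np.linspace(band.min(), band.max(), levels)) - 1
    half = window // 2
    out = np.zeros(band.shape, dtype=float)
    for i in range(half, band.shape[0] - half):
        for j in range(half, band.shape[1] - half):
            win = q[i - half:i + half + 1, j - half:j + half + 1].astype(np.uint8)
            glcm = graycomatrix(win, [1], [0], levels=levels, symmetric=True, normed=True)
            out[i, j] = graycoprops(glcm, "ASM")[0, 0]
    return out
```

The resulting texture layer would be stacked with the multispectral bands before the maximum likelihood classification.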
Black, Robert W.; Haggland, Alan; Crosby, Greg
2003-01-01
Instream hydraulic and riparian habitat conditions and stream temperatures were characterized for selected stream segments in the Upper White River Basin, Washington. An aerial multispectral imaging system used digital cameras to photograph the stream segments across multiple wavelengths to characterize fish habitat and temperature conditions. All imageries were georeferenced. Fish habitat features were photographed at a resolution of 0.5 meter and temperature imageries were photographed at a 1.0-meter resolution. The digital multispectral imageries were classified using commercially available software. Aerial photographs were taken on September 21, 1999. Field habitat data were collected from August 23 to October 12, 1999, to evaluate the measurement accuracy and effectiveness of the multispectral imaging in determining the extent of the instream habitat variables. Fish habitat types assessed by this method were the abundance of instream hydraulic features such as pool and riffle habitats, turbulent and non-turbulent habitats, riparian composition, the abundance of large woody debris in the stream and riparian zone, and stream temperatures. Factors such as the abundance of instream woody debris, the location and frequency of pools, and stream temperatures generally are known to have a significant impact on salmon. Instream woody debris creates the habitat complexity necessary to maintain a diverse and healthy salmon population. The abundance of pools is indicative of a stream's ability to support fish and other aquatic organisms. Changes in water temperature can affect aquatic organisms by altering metabolic rates and oxygen requirements, altering their sensitivity to toxic materials and affecting their ability to avoid predators. The specific objectives of this project were to evaluate the use of an aerial multispectral imaging system to accurately identify instream hydraulic features and surface-water temperatures in the Upper White River Basin, to use the multispectral system to help establish baseline instream/riparian habitat conditions in the study area, and to qualitatively assess the imaging system for possible use in other Puget Sound rivers. For the most part, all multispectral imagery-based estimates of total instream riffle and pool area were less than field measurements. The imagery-based estimates for riffle habitat area ranged from 35.5 to 83.3 percent less than field measurements. Pool habitat estimates ranged from 139.3 percent greater than field measurements to 94.0 percent less than field measurements. Multispectral imagery-based estimates of turbulent habitat conditions ranged from 9.3 percent greater than field measurements to 81.6 percent less than field measurements. Multispectral imagery-based estimates of non-turbulent habitat conditions ranged from 27.7 to 74.1 percent less than field measurements. The absolute average percentage of difference between field and imagery-based habitat type areas was less for the turbulent and non-turbulent habitat type categories than for pools and riffles. The estimate of woody debris by multispectral imaging was substantially different than field measurements; percentage of differences ranged from +373.1 to -100 percent. 
Although the total area of riffles, pools, and turbulent and non-turbulent habitat types measured in the field were all substantially higher than those estimated from the multispectral imagery, the percentage of composition of each habitat type was not substantially different between the imagery-based estimates and field measurements.
Selected configuration tradeoffs of CONTOUR optical instruments
NASA Astrophysics Data System (ADS)
Warren, J.; Strohbehn, K.; Murchie, S.; Fort, D.; Reynolds, E.; Heyler, G.; Peacock, K.; Boldt, J.; Darlington, E.; Hayes, J.; Henshaw, R.; Izenberg, N.; Kardian, C.; Lees, J.; Lohr, D.; Mehoke, D.; Schaefer, E.; Sholar, T.; Spisz, T.; Willey, C.; Veverka, J.; Bell, J.; Cochran, A.
2003-01-01
The Comet Nucleus Tour (CONTOUR) is a low-cost NASA Discovery mission designed to conduct three close flybys of comet nuclei. Selected configuration tradeoffs conducted to balance science requirements with low mission cost are reviewed. The tradeoffs discussed focus on the optical instruments and related spacecraft considerations. Two instruments are under development. The CONTOUR Forward Imager (CFI) is designed to perform optical navigation, moderate resolution nucleus/jet imaging, and imaging of faint molecular emission bands in the coma. The CONTOUR Remote Imager and Spectrometer (CRISP) is designed to obtain high-resolution multispectral images of the nucleus, conduct spectral mapping of the nucleus surface, and provide a backup optical navigation capability. Tradeoffs discussed are: (1) the impact on the optical instruments of not using reaction wheels on the spacecraft, (2) the improved performance and simplification gained by implementing a dedicated star tracker instead of including this function in CFI, (3) the improved flexibility and robustness of switching to a low frame rate tracker for CRISP, (4) the improved performance and simplification of replacing a visible imaging spectrometer by enhanced multispectral imaging in CRISP, and (5) the impact on spacecraft resources of these and other tradeoffs.
Cogongrass inventory and management.
DOT National Transportation Integrated Search
2007-08-01
A field study was conducted from 2005-2006 to test broad scale classification of cogongrass (Imperata cylindrica (L.) Beauv.) on Mississippi highway rights of ways with aerial imagery. Four mosaics of high resolution multispectral images of median an...
Multipurpose Hyperspectral Imaging System
NASA Technical Reports Server (NTRS)
Mao, Chengye; Smith, David; Lanoue, Mark A.; Poole, Gavin H.; Heitschmidt, Jerry; Martinez, Luis; Windham, William A.; Lawrence, Kurt C.; Park, Bosoon
2005-01-01
A hyperspectral imaging system of high spectral and spatial resolution that incorporates several innovative features, including a focal plane scanner (U.S. Patent 6,166,373), has been developed. This scanner enables the system to be used for both airborne/spaceborne and laboratory hyperspectral imaging with or without relative movement of the imaging system, and it can be used to scan a target of any size as long as the target can be imaged at the focal plane; for example, automated inspection of food items and identification of single-celled organisms. The spectral resolution of this system is greater than that of prior terrestrial multispectral imaging systems. Moreover, unlike prior high-spectral-resolution airborne and spaceborne hyperspectral imaging systems, this system does not rely on relative movement of the target and the imaging system to sweep an imaging line across a scene. This compact system consists of a front objective mounted at a translation stage with a motorized actuator, and a line-slit imaging spectrograph mounted within a rotary assembly with a rear adaptor to a charge-coupled-device (CCD) camera. Push-broom scanning is carried out by the motorized actuator, which can be controlled either manually by an operator or automatically by a computer to drive the line-slit across an image at a focal plane of the front objective. To reduce cost, the system has been designed to integrate as many off-the-shelf components as possible, including the CCD camera and spectrograph. The system has achieved high spectral and spatial resolutions by using a high-quality CCD camera, spectrograph, and front objective lens. Fixtures for attachment of the system to a microscope (U.S. Patent 6,495,818 B1) make it possible to acquire multispectral images of single cells and other microscopic objects.
Linear mixing model applied to coarse spatial resolution data from multispectral satellite sensors
NASA Technical Reports Server (NTRS)
Holben, Brent N.; Shimabukuro, Yosio E.
1993-01-01
A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55-3.95 micron channel was used with the two reflective channels 0.58-0.68 micron and 0.725-1.1 micron to run a constrained least squares model to generate fraction images for an area in the west central region of Brazil. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of the unmixing techniques when using coarse spatial resolution data for global studies.
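For readers unfamiliar with the constrained least squares step, the following minimal sketch shows one common way to solve the linear mixing model per pixel, with non-negative fractions that sum to one. The endmember matrix and pixel values are purely illustrative placeholders, and the soft sum-to-one constraint is a generic implementation choice, not the authors' exact formulation.

```python
# Hedged sketch of a constrained least-squares linear mixing model: each
# pixel reflectance vector r is modelled as M @ f, with fractions f >= 0
# and sum(f) = 1.  All numeric values below are placeholders.
import numpy as np
from scipy.optimize import lsq_linear

def unmix(r, M, weight=100.0):
    """Fractions for one pixel: r is (n_bands,), M is (n_bands, n_endmembers)."""
    # Enforce sum-to-one softly by appending a heavily weighted equation.
    A = np.vstack([M, weight * np.ones((1, M.shape[1]))])
    b = np.concatenate([r, [weight]])
    res = lsq_linear(A, b, bounds=(0.0, 1.0))
    return res.x

# Example: three reflective channels, three endmembers
# (vegetation, soil, shade) -- values purely illustrative.
M = np.array([[0.05, 0.25, 0.02],
              [0.45, 0.30, 0.03],
              [0.10, 0.20, 0.02]])
r = np.array([0.20, 0.35, 0.12])
print(unmix(r, M))  # fraction values for this pixel
```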
A fast and automatic mosaic method for high-resolution satellite images
NASA Astrophysics Data System (ADS)
Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing
2015-12-01
We propose a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlapped rectangle is computed according to the geographical locations of the reference and mosaic images, and feature points on both the reference and mosaic images are extracted by a scale-invariant feature transform (SIFT) algorithm only from the overlapped region. Then, the RANSAC method is used to match feature points of both images. Finally, the two images are fused into a seamless panoramic image by a simple linear weighted fusion method or another method. The proposed method is implemented in the C++ language based on OpenCV and GDAL, and tested on Worldview-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method can detect feature points efficiently and mosaic images automatically.
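A minimal Python/OpenCV sketch of the same pipeline (SIFT features restricted to the overlap, RANSAC-filtered matching, homography warp, simple weighted blend) is given below for orientation; the authors' implementation is in C++ with OpenCV and GDAL, and the GDAL-based reading of the georeferenced overlap is omitted here. Function and parameter choices are assumptions.

```python
# Hedged sketch of a SIFT + RANSAC mosaic step, not the authors' code.
import cv2
import numpy as np

def mosaic_pair(ref_gray, mov_gray):
    """ref_gray, mov_gray: 8-bit single-band crops of the overlap region."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(ref_gray, None)
    k2, d2 = sift.detectAndCompute(mov_gray, None)
    # Ratio-test matching followed by RANSAC to reject outliers.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(d2, d1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = ref_gray.shape
    warped = cv2.warpPerspective(mov_gray, H, (w, h))
    # Simple linear-weighted fusion in the overlap (placeholder blend).
    mask = (warped > 0).astype(np.float32)
    return (ref_gray * (1 - 0.5 * mask) + warped * 0.5 * mask).astype(np.uint8)
```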
Giant Raman scattering from J-aggregated dyes inside carbon nanotubes for multispectral imaging
NASA Astrophysics Data System (ADS)
Gaufrès, E.; Tang, N. Y.-Wa; Lapointe, F.; Cabana, J.; Nadon, M.-A.; Cottenye, N.; Raymond, F.; Szkopek, T.; Martel, R.
2014-01-01
Raman spectroscopy uses visible light to acquire vibrational fingerprints of molecules, thus making it a powerful tool for chemical analysis in a wide range of media. However, its potential for optical imaging at high resolution is severely limited by the fact that the Raman effect is weak. Here, we report the discovery of a giant Raman scattering effect from encapsulated and aggregated dye molecules inside single-walled carbon nanotubes. Measurements performed on rod-like dyes such as α-sexithiophene and β-carotene, assembled inside single-walled carbon nanotubes as highly polarizable J-aggregates, indicate a resonant Raman cross-section of (3 ± 2) × 10⁻²¹ cm² sr⁻¹, which is well above the cross-section required for detecting individual aggregates at the highest optical resolution. Free from fluorescence background and photobleaching, this giant Raman effect allows the realization of a library of functionalized nanoprobe labels for Raman imaging with robust detection using multispectral analysis.
NASA Technical Reports Server (NTRS)
Jedlovec, G. J.; Menzel, W. P.; Atkinson, R.; Wilson, G. S.; Arvesen, J.
1986-01-01
A new instrument, the Multispectral Atmospheric Mapping Sensor (MAMS), has been developed to produce high resolution imagery in eight visible and three infrared spectral bands from an aircraft platform. An analysis of the data and calibration procedures has shown that useful data can be obtained at up to 50 m resolution with a 2.5 milliradian aperture. Single sample standard errors for the measurements are 0.5, 0.2, and 0.9 K for the 6.5, 11.1, and 12.3 micron spectral bands, respectively. These errors are halved when a 5.0 milliradian aperture is used to obtain 100 m resolution data. Intercomparisons with VAS and AVHRR measurements show good relative calibration. MAMS development is part of a larger program to develop multispectral Earth imaging capabilities from space platforms during the 1990s.
Linear mixing model applied to coarse resolution satellite data
NASA Technical Reports Server (NTRS)
Holben, Brent N.; Shimabukuro, Yosio E.
1992-01-01
A linear mixing model typically applied to high resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System is applied to the NOAA Advanced Very High Resolution Radiometer coarse resolution satellite data. The reflective portion extracted from the middle IR channel 3 (3.55 - 3.93 microns) is used with channels 1 (0.58 - 0.68 microns) and 2 (0.725 - 1.1 microns) to run the Constrained Least Squares model to generate fraction images for an area in the west central region of Brazil. The derived fraction images are compared with an unsupervised classification and the fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well-known NDVI images is presented. The results show the great potential of applying unmixing techniques to coarse resolution data for global studies.
Hyperspectral and multispectral bioluminescence optical tomography for small animal imaging.
Chaudhari, Abhijit J; Darvas, Felix; Bading, James R; Moats, Rex A; Conti, Peter S; Smith, Desmond J; Cherry, Simon R; Leahy, Richard M
2005-12-07
For bioluminescence imaging studies in small animals, it is important to be able to accurately localize the three-dimensional (3D) distribution of the underlying bioluminescent source. The spectrum of light produced by the source that escapes the subject varies with the depth of the emission source because of the wavelength-dependence of the optical properties of tissue. Consequently, multispectral or hyperspectral data acquisition should help in the 3D localization of deep sources. In this paper, we describe a framework for fully 3D bioluminescence tomographic image acquisition and reconstruction that exploits spectral information. We describe regularized tomographic reconstruction techniques that use semi-infinite slab or FEM-based diffusion approximations of photon transport through turbid media. Singular value decomposition analysis was used for data dimensionality reduction and to illustrate the advantage of using hyperspectral rather than achromatic data. Simulation studies in an atlas-mouse geometry indicated that sub-millimeter resolution may be attainable given accurate knowledge of the optical properties of the animal. A fixed arrangement of mirrors and a single CCD camera were used for simultaneous acquisition of multispectral imaging data over most of the surface of the animal. Phantom studies conducted using this system demonstrated our ability to accurately localize deep point-like sources and show that a resolution of 1.5 to 2.2 mm for depths up to 6 mm can be achieved. We also include an in vivo study of a mouse with a brain tumour expressing firefly luciferase. Co-registration of the reconstructed 3D bioluminescent image with magnetic resonance images indicated good anatomical localization of the tumour.
NASA Technical Reports Server (NTRS)
Holub, R.; Shenk, W. E.
1973-01-01
Four registered channels (0.2 to 4, 6.5 to 7, 10 to 11, and 20 to 23 microns) of the Nimbus 3 Medium Resolution Infrared Radiometer (MRIR) were used to study 24-hr changes in the structure of an extratropical cyclone during a 6-day period in May 1969. Use of a stereographic-horizon map projection ensured that the storm was mapped with a single perspective throughout the series and allowed the convenient preparation of 24-hr difference maps of the infrared radiation fields. Single-channel and multispectral analysis techniques were employed to establish the positions and vertical slopes of jet streams, large cloud systems, and major features of middle and upper tropospheric circulation. Use of these techniques plus the difference maps and continuity of observation allowed the early detection of secondary cyclones developing within the circulation of the primary cyclone. An automated, multispectral cloud-type identification technique was developed, and comparisons with conventional ship reports and with high-resolution visual data from the image dissector camera system showed good agreement.
Quadrilinear CCD sensors for the multispectral channel of spaceborne imagers
NASA Astrophysics Data System (ADS)
Materne, Alex; Gili, Bruno; Laubier, David; Gimenez, Thierry
2001-12-01
The PLEIADES-HR Earth Observation satellites will combine a high resolution panchromatic channel -- 0.7 m at nadir -- and a multispectral channel allowing a 2.8 m resolution. This paper presents the main specifications, design and performance of a 52-micron-pitch quadrilinear CCD sensor developed by ATMEL under CNES contract, for the multispectral channel of the PLEIADES-HR instrument. The monolithic CCD device includes four lines of 1500 pixels, each line dedicated to a narrow spectral band within the blue to near-infrared spectrum. The design of the photodiodes and CCD registers, with a larger size than those developed up to now for CNES spaceborne imagers, needed some specific structures to break the large equipotential areas where charge does not flow properly. Results are presented on the options that were tested to improve sensitivity, maintain transfer efficiency and reduce power dissipation. The four spectral bands are achieved by four stripe filters made by SAGEM-REOSC PRODUCTS on a glass substrate, to be assembled on the sensor window. Line-to-line spacing on the silicon die takes into account the results of straylight analysis. A mineral layer with high optical absorption is deposited between photosensitive lines to further reduce straylight.
NASA Technical Reports Server (NTRS)
Chirico, Peter G.
2007-01-01
This viewgraph presentation provides USGS/USAID natural resource assessments in Afghanistan through the mapping of coal, oil and natural gas, minerals, hydrologic resources and earthquake and flood hazards.
Assessing carotid atherosclerosis by fiber-optic multispectral photoacoustic tomography
NASA Astrophysics Data System (ADS)
Hui, Jie; Li, Rui; Wang, Pu; Phillips, Evan; Bruning, Rebecca; Liao, Chien-Sheng; Sturek, Michael; Goergen, Craig J.; Cheng, Ji-Xin
2015-03-01
Atherosclerotic plaque at the carotid bifurcation is the underlying cause of the majority of ischemic strokes. Noninvasive imaging and quantification of the compositional changes preceding gross anatomic changes within the arterial wall are essential for diagnosis of disease. Current imaging modalities such as duplex ultrasound, computed tomography, and positron emission tomography are limited by the lack of compositional contrast and the detection of flow-limiting lesions. Although high-resolution magnetic resonance imaging has been developed to characterize atherosclerotic plaque composition, its accessibility for wide clinical use is limited. Here, we demonstrate a fiber-based multispectral photoacoustic tomography system for excitation of lipids and external acoustic detection of the generated ultrasound. Using sequential ultrasound imaging of ex vivo preparations we achieved ~2 cm imaging depth and chemical selectivity for assessment of human arterial plaques. A multivariate curve resolution alternating least squares analysis method was applied to resolve the major chemical components, including intravascular lipid, intramuscular fat, and blood. These results show the promise of detecting carotid plaque in vivo through esophageal fiber-optic excitation of lipids and external acoustic detection of the generated ultrasound. This imaging system has great potential for serving as a point-of-care device for early diagnosis of carotid artery disease in the clinic.
Singha, Mrinal; Wu, Bingfang; Zhang, Miao
2016-01-01
Accurate and timely mapping of paddy rice is vital for food security and environmental sustainability. This study evaluates the utility of temporal features extracted from coarse resolution data for object-based paddy rice classification of fine resolution data. The coarse resolution vegetation index data is first fused with the fine resolution data to generate the time series fine resolution data. Temporal features are extracted from the fused data and added with the multi-spectral data to improve the classification accuracy. Temporal features provided the crop growth information, while multi-spectral data provided the pattern variation of paddy rice. The achieved overall classification accuracy and kappa coefficient were 84.37% and 0.68, respectively. The results indicate that the use of temporal features improved the overall classification accuracy of a single-date multi-spectral image by 18.75% from 65.62% to 84.37%. The minimum sensitivity (MS) of the paddy rice classification has also been improved. The comparison showed that the mapped paddy area was analogous to the agricultural statistics at the district level. This work also highlighted the importance of feature selection to achieve higher classification accuracies. These results demonstrate the potential of the combined use of temporal and spectral features for accurate paddy rice classification. PMID:28025525
Imaging During MESSENGER's Second Flyby of Mercury
NASA Astrophysics Data System (ADS)
Chabot, N. L.; Prockter, L. M.; Murchie, S. L.; Robinson, M. S.; Laslo, N. R.; Kang, H. K.; Hawkins, S. E.; Vaughan, R. M.; Head, J. W.; Solomon, S. C.; MESSENGER Team
2008-12-01
During MESSENGER's second flyby of Mercury on October 6, 2008, the Mercury Dual Imaging System (MDIS) will acquire 1287 images. The images will include coverage of about 30% of Mercury's surface not previously seen by spacecraft. A portion of the newly imaged terrain will be viewed during the inbound portion of the flyby. On the outbound leg, MDIS will image additional previously unseen terrain as well as regions imaged under different illumination geometry by Mariner 10. These new images, when combined with images from Mariner 10 and from MESSENGER's first Mercury flyby, will enable the first regional-resolution global view of Mercury, constituting a combined total coverage of about 96% of the planet's surface. MDIS consists of both a Wide Angle Camera (WAC) and a Narrow Angle Camera (NAC). During MESSENGER's second Mercury flyby, the following imaging activities are planned: about 86 minutes before the spacecraft's closest pass by the planet, the WAC will acquire images through 11 different narrow-band color filters of the approaching crescent planet at a resolution of about 5 km/pixel. At slightly less than 1 hour to closest approach, the NAC will acquire a 4-column x 11-row mosaic with an approximate resolution of 450 m/pixel. At 8 minutes after closest approach, the WAC will obtain the highest-resolution multispectral images to date of Mercury's surface, imaging a portion of the surface through 11 color filters at resolutions of about 250-600 m/pixel. A strip of high-resolution NAC images, with a resolution of approximately 100 m/pixel, will follow these WAC observations. The NAC will next acquire a 15-column x 13-row high-resolution mosaic of the northern hemisphere of the departing planet, beginning approximately 21 minutes after closest approach, with resolutions of 140-300 m/pixel; this mosaic will fill a large gore in the Mariner 10 data. At about 42 minutes following closest approach, the WAC will acquire a 3x3, 11-filter, full-planet mosaic with an average resolution of 2.5 km/pixel. Two NAC mosaics of the entire departing planet will be acquired beginning about 66 minutes after closest approach, with resolutions of 500-700 m/pixel. About 89 minutes following closest approach, the WAC will acquire a multispectral image set with a resolution of about 5 km/pixel. Following this WAC image set, MDIS will continue to acquire occasional images with both the WAC and NAC until 20 hours after closest approach, at which time the flyby data will begin being transmitted to Earth.
Core segment 15008 - Regolith stratigraphy at Apennine Front Station 2 using multispectral imaging
NASA Technical Reports Server (NTRS)
Pieters, C. M.; Meloy, A.; Hawke, B. R.; Nagle, J. S.
1982-01-01
High precision multispectral images for Apennine Front core segment 15008 are presented. These data have a spatial resolution less than approximately 0.5 mm and are analyzed for their compositional information using image analysis techniques. The stratigraphy of the regolith sampled by 15008 is documented here as three distinct zones, the most prominent of which is a feldspathic fragment-rich zone with a chaotic fabric that occurs between 10 and 18 cm depth. It is suggested that this material is the primary rim crest deposit of the local 10 m crater. Above this zone the stratigraphy is more horizontal in nature. Below this zone the soil is observed to be relatively homogeneous with no distinctive structure to 23 cm depth.
NASA Technical Reports Server (NTRS)
1972-01-01
This document is Volume 2 of the three-volume Final Report for the four-band Multispectral Scanner System (MSS). It contains the results of an analysis of pictures of actual outdoor scenes imaged by the engineering model MSS for spectral response, resolution, noise, and video correction. Also included are the results of engineering tests on the MSS for reflectance and saturation from clouds. Finally, two panoramic pictures of Yosemite National Park are provided.
The Multi-Spectral Imaging Diagnostic on Alcator C-MOD and TCV
NASA Astrophysics Data System (ADS)
Linehan, B. L.; Mumgaard, R. T.; Duval, B. P.; Theiler, C. G.; TCV Team
2017-10-01
The Multi-Spectral Imaging (MSI) diagnostic is a new instrument that captures simultaneous spectrally filtered images from a common sight view while maintaining a large étendue and high spatial resolution. The system uses a polychromator layout where each image is sequentially filtered. This procedure yields a high transmission for each spectral channel with minimal vignetting and aberrations. A four-wavelength system was installed on Alcator C-Mod and then moved to TCV. The system uses industrial cameras to simultaneously image the divertor region at 95 frames per second at f/# 2.8 via a coherent fiber bundle (C-Mod) or a lens-based relay optic (TCV). The images are absolutely calibrated and spatially registered enabling accurate measurement of atomic line ratios and absolute line intensities. The images will be used to study divertor detachment by imaging impurities and Balmer series emissions. Furthermore, the large field of view and an ability to support many types of detectors opens the door for other novel approaches to optically measuring plasma with high temporal, spatial, and spectral resolution. Such measurements will allow for the study of Stark broadening and divertor turbulence. Here, we present the first measurements taken with this cavity imaging system. USDoE awards DE-FC02-99ER54512 and award DE-AC05-06OR23100, ORISE, administered by ORAU.
Using a trichromatic CCD camera for spectral skylight estimation.
López-Alvarez, Miguel A; Hernández-Andrés, Javier; Romero, Javier; Olmo, F J; Cazorla, A; Alados-Arboledas, L
2008-12-01
In a previous work [J. Opt. Soc. Am. A 24, 942-956 (2007)] we showed how to design an optimum multispectral system aimed at spectral recovery of skylight. Since high-resolution multispectral images of skylight could be interesting for many scientific disciplines, here we also propose a nonoptimum but much cheaper and faster approach to achieve this goal by using a trichromatic RGB charge-coupled device (CCD) digital camera. The camera is attached to a fish-eye lens, hence permitting us to obtain a spectrum of every point of the skydome corresponding to each pixel of the image. In this work we show how to apply multispectral techniques to the sensors' responses of a common trichromatic camera in order to obtain skylight spectra from them. This spectral information is accurate enough to estimate experimental values of some climate parameters or to be used in algorithms for automatic cloud detection, among many other possible scientific applications.
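The core idea, learning a linear mapping from the three camera responses to a sampled spectrum using training data, can be sketched as follows. This is ordinary least squares on assumed array shapes, not the estimator actually used in the paper.

```python
# Hedged sketch: a linear spectral-recovery matrix W learned from paired
# RGB responses and measured spectra, then applied pixel-wise.
import numpy as np

def train_recovery_matrix(R_train, S_train):
    """R_train: (n_samples, 3) RGB responses; S_train: (n_samples, n_wl) spectra."""
    # W minimizes ||R_train @ W - S_train||^2, so W has shape (3, n_wl).
    W, *_ = np.linalg.lstsq(R_train, S_train, rcond=None)
    return W

def recover_spectra(rgb_image, W):
    """rgb_image: (H, W, 3) camera responses -> estimated spectra (H, W, n_wl)."""
    h, w, _ = rgb_image.shape
    return (rgb_image.reshape(-1, 3) @ W).reshape(h, w, -1)
```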
Application of LC and LCoS in Multispectral Polarized Scene Projector (MPSP)
NASA Astrophysics Data System (ADS)
Yu, Haiping; Guo, Lei; Wang, Shenggang; Lippert, Jack; Li, Le
2017-02-01
A Multispectral Polarized Scene Projector (MPSP) has been developed in the short-wave infrared (SWIR) regime for the test & evaluation (T&E) of spectro-polarimetric imaging sensors. This MPSP generates multispectral and hyperspectral video images (up to 200 Hz) at 512×512 spatial resolution, with active spatial, spectral, and polarization modulation and controlled bandwidth. It projects input SWIR radiant intensity scenes from stored memory with user-selectable wavelength and bandwidth, as well as polarization states (six different states) controllable on a pixel level. The spectral contents are implemented by a tunable filter with variable bandpass based on liquid crystal (LC) material, together with one passive visible and one passive SWIR cholesteric liquid crystal (CLC) notch filter, and one switchable CLC notch filter. The core of the MPSP hardware consists of liquid-crystal-on-silicon (LCoS) spatial light modulators (SLMs) for intensity control and polarization modulation.
Image quality prediction: an aid to the Viking Lander imaging investigation on Mars.
Huck, F O; Wall, S D
1976-07-01
Two Viking spacecraft scheduled to land on Mars in the summer of 1976 will return multispectral panoramas of the Martian surface with resolutions 4 orders of magnitude higher than have been previously obtained and stereo views with resolutions approaching that of the human eye. Mission constraints and uncertainties require a carefully planned imaging investigation that is supported by a computer model of camera response and surface features to aid in diagnosing camera performance, in establishing a preflight imaging strategy, and in rapidly revising this strategy if pictures returned from Mars reveal unfavorable or unanticipated conditions.
Pancam Imaging of the Mars Exploration Rover Landing Sites in Gusev Crater and Meridiani Planum
NASA Technical Reports Server (NTRS)
Bell, J. F., III; Squyres, S. W.; Arvidson, R. E.; Arneson, H. M.; Bass, D.; Cabrol, N.; Calvin, W.; Farmer, J.; Farrand, W. H.
2004-01-01
The Mars Exploration Rovers carry four Panoramic Camera (Pancam) instruments (two per rover) that have obtained high resolution multispectral and stereoscopic images for studies of the geology, mineralogy, and surface and atmospheric physical properties at both rover landing sites. The Pancams are also providing significant mission support measurements for the rovers, including Sun-finding for rover navigation, hazard identification and digital terrain modeling to help guide long-term rover traverse decisions, high resolution imaging to help guide the selection of in situ sampling targets, and acquisition of education and public outreach imaging products.
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Fuell, Kevin K.; Knaff, John; Lee, Thomas
2012-01-01
Current and future satellite sensors provide remotely sensed quantities from a variety of wavelengths ranging from the visible to the passive microwave, from both geostationary and low-Earth orbits. The NASA Short-term Prediction Research and Transition (SPoRT) Center has a long history of providing multispectral imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard NASA's Terra and Aqua satellites in support of NWS forecast office activities. Products from MODIS have recently been extended to include a broader suite of multispectral imagery similar to those developed by EUMETSAT, based upon the spectral channels available from the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) aboard METEOSAT-9. This broader suite includes products that discriminate between air mass types associated with synoptic-scale features, assists in the identification of dust, and improves upon paired channel difference detection of fog and low cloud events. Similarly, researchers at NOAA/NESDIS and CIRA have developed air mass discrimination capabilities using channels available from the current GOES Sounders. Other applications of multispectral composites include combinations of high and low frequency, horizontally and vertically polarized passive microwave brightness temperatures to discriminate tropical cyclone structures and other synoptic-scale features. Many of these capabilities have been transitioned for evaluation and operational use at NWS Weather Forecast Offices and National Centers through collaborations with SPoRT and CIRA. Future instruments will continue the availability of these products and also expand upon current capabilities. The Advanced Baseline Imager (ABI) on GOES-R will improve the spectral, spatial, and temporal resolution of our current geostationary capabilities, and the recently launched Suomi National Polar-Orbiting Partnership (S-NPP) carries instruments such as the Visible Infrared Imager Radiometer Suite (VIIRS), the Cross-track Infrared Sounder (CrIS), and the Advanced Technology Microwave Sounder (ATMS), which have unrivaled spectral and spatial resolution, as precursors to the JPSS era (i.e., the next generation of polar orbiting satellites). At the same time, new image manipulation and display capabilities are available within AWIPS II, the next generation of the NWS forecaster decision support system. This presentation will review SPoRT, CIRA, and NRL collaborations regarding multispectral satellite imagery and articulate an integrated and collaborative path forward with Raytheon AWIPS II development staff for integrating current and future capabilities that support new satellite instrumentation and the AWIPS II decision support system.
Random-Forest Classification of High-Resolution Remote Sensing Images and nDSM Over Urban Areas
NASA Astrophysics Data System (ADS)
Sun, X. F.; Lin, X. G.
2017-09-01
As an intermediate step between raw remote sensing data and digital urban maps, remote sensing data classification has been a challenging and long-standing research problem in the community of remote sensing. In this work, an effective classification method is proposed for classifying high-resolution remote sensing data over urban areas. Starting from high resolution multi-spectral images and 3D geometry data, our method proceeds in three main stages: feature extraction, classification, and classified result refinement. First, we extract color, vegetation index and texture features from the multi-spectral image and compute the height, elevation texture and differential morphological profile (DMP) features from the 3D geometry data. Then in the classification stage, multiple random forest (RF) classifiers are trained separately, then combined to form a RF ensemble to estimate each sample's category probabilities. Finally the probabilities along with the feature importance indicator outputted by RF ensemble are used to construct a fully connected conditional random field (FCCRF) graph model, by which the classification results are refined through mean-field based statistical inference. Experiments on the ISPRS Semantic Labeling Contest dataset show that our proposed 3-stage method achieves 86.9% overall accuracy on the test data.
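A minimal sketch of the classification stage only, assuming the features have already been grouped, might look like the following; the feature extraction and the fully connected CRF refinement described above are not reproduced, and the hyperparameters are illustrative.

```python
# Hedged sketch: several random forests trained on separate feature groups,
# then averaged into per-class probabilities for each sample.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_rf_ensemble(feature_groups, y):
    """feature_groups: list of (n_samples, n_features_i) arrays; y: labels."""
    return [RandomForestClassifier(n_estimators=200, n_jobs=-1).fit(X, y)
            for X in feature_groups]

def ensemble_probabilities(forests, feature_groups):
    """Average the class probability estimates of the individual forests."""
    probs = [rf.predict_proba(X) for rf, X in zip(forests, feature_groups)]
    return np.mean(probs, axis=0)  # (n_samples, n_classes)
```

The averaged probabilities would then feed the graph-based refinement step described in the abstract.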
Jones, Phill B.; Shin, Hwa Kyoung; Boas, David A.; Hyman, Bradley T.; Moskowitz, Michael A.; Ayata, Cenk; Dunn, Andrew K.
2009-01-01
Real-time investigation of cerebral blood flow (CBF), and oxy- and deoxyhemoglobin concentration (HbO, HbR) dynamics has been difficult until recently due to limited spatial and temporal resolution of techniques like laser Doppler flowmetry and magnetic resonance imaging (MRI). The combination of laser speckle flowmetry (LSF) and multispectral reflectance imaging (MSRI) yields high-resolution spatiotemporal maps of hemodynamic and metabolic changes in response to functional cortical activation. During acute focal cerebral ischemia, changes in HbO and HbR are much larger than in functional activation, resulting in the failure of the Beer-Lambert approximation to yield accurate results. We describe the use of simultaneous LSF and MSRI, using a nonlinear Monte Carlo fitting technique, to record rapid changes in CBF, HbO, HbR, and cerebral metabolic rate of oxygen (CMRO2) during acute focal cerebral ischemia induced by distal middle cerebral artery occlusion (dMCAO) and reperfusion. This technique captures CBF and CMRO2 changes during hemodynamic and metabolic events with high temporal and spatial resolution through the intact skull and demonstrates the utility of simultaneous LSF and MSRI in mouse models of cerebrovascular disease. PMID:19021335
Quality evaluation of different fusion techniques applied on Worldview-2 data
NASA Astrophysics Data System (ADS)
Vaiopoulos, Aristides; Nikolakopoulos, Konstantinos G.
2015-10-01
In the current study a Worldview-2 image was used for fusion quality assessment. The bundle image was collected in July 2014 over the Araxos area in the western Peloponnese. Worldview-2 is the first satellite that collects at the same time a panchromatic (Pan) image and an 8-band multispectral (MS) image. The Pan data have a spatial resolution of 0.46m while the MS data have a spatial resolution of 1.84m. In contrast to the respective Pan bands of Ikonos and Quickbird, which range between 0.45 and 0.90 micrometers, the Worldview-2 Pan band is narrower and ranges between 0.45 and 0.8 micrometers. The MS bands include four conventional visible and near-infrared bands common to multispectral satellites like Ikonos, Quickbird, GeoEye, Landsat-7, etc., and four new bands. Thus, it is of interest to assess commonly used fusion algorithms with Worldview-2 data. Twelve fusion techniques, namely Ehlers, Gram-Schmidt, Color Normalized, High Pass Filter, Hyperspherical Color Space, Local Mean Matching (LMM), Local Mean and Variance Matching (LMVM), Modified IHS (ModIHS), Pansharp, Pansharp2, PCA and Wavelet, were used for the fusion of Worldview-2 panchromatic and multispectral data. The optical result, the statistical parameters and different quality indexes such as ERGAS, Q and entropy difference were examined and the results are presented. The quality control was evaluated in both the spectral and spatial domains.
NASA Astrophysics Data System (ADS)
Navarro-Cerrillo, Rafael Mª; Trujillo, Jesus; de la Orden, Manuel Sánchez; Hernández-Clemente, Rocío
2014-02-01
A new generation of narrow-band hyperspectral remote sensing data offers an alternative to broad-band multispectral data for the estimation of vegetation chlorophyll content. This paper examines the potential of some of these sensors, comparing red-edge and simple ratio indices to develop a rapid and cost-effective system for monitoring Mediterranean pine plantations in Spain. Chlorophyll content retrieval was analyzed with the red-edge R750/R710 index and the simple ratio R800/R560 index using the PROSPECT-5 leaf model and the Discrete Anisotropic Radiative Transfer (DART) model and an experimental approach. Five sensors were used: AHS, CHRIS/Proba, Hyperion, Landsat and QuickBird. The model simulation results obtained with synthetic spectra demonstrated the feasibility of estimating Ca + b content in conifers using the simple ratio R800/R560 index formulated with different full widths at half maximum (FWHM) at the leaf level. This index yielded an r2 = 0.69 for a FWHM of 30 nm and an r2 = 0.55 for a FWHM of 70 nm. Experimental results compared the regression coefficients obtained with various multispectral and hyperspectral images with different spatial resolutions at the stand level. The strongest relationships were obtained using high-resolution hyperspectral images acquired with the AHS sensor (r2 = 0.65), while coarser spatial and spectral resolution images yielded a lower root mean square error (QuickBird r2 = 0.42; Landsat r2 = 0.48; Hyperion r2 = 0.56; CHRIS/Proba r2 = 0.57). This study shows the need to estimate chlorophyll content in forest plantations at the stand level with high spatial and spectral resolution sensors. Nevertheless, these results also show the accuracy obtained with medium-resolution sensors when monitoring physiological processes. Generating biochemical maps at the stand level could play a critical role in the early detection of forest decline processes, enabling their use in precision forestry.
An Overview of the CBERS-2 Satellite and Comparison of the CBERS-2 CCD Data with the L5 TM Data
NASA Technical Reports Server (NTRS)
Chandler, Gyanesh
2007-01-01
The CBERS satellite carries on board a multi-sensor payload with different spatial resolutions and collection frequencies: the HRCCD (High Resolution CCD Camera), the IRMSS (Infrared Multispectral Scanner), and the WFI (Wide-Field Imager). The CCD and WFI cameras operate in the VNIR region, while the IRMSS operates in the SWIR and thermal regions. In addition to the imaging payload, the satellite carries a Data Collection System (DCS) and a Space Environment Monitor (SEM).
Early Results from the Odyssey THEMIS Investigation
NASA Technical Reports Server (NTRS)
Christensen, Philip R.; Bandfield, Joshua L.; Bell, James F., III; Hamilton, Victoria E.; Ivanov, Anton; Jakosky, Bruce M.; Kieffer, Hugh H.; Lane, Melissa D.; Malin, Michael C.; McConnochie, Timothy
2003-01-01
The Thermal Emission Imaging System (THEMIS) began studying the surface and atmosphere of Mars in February 2002 using thermal infrared (IR) multi-spectral imaging between 6.5 and 15 μm, and visible/near-IR images from 450 to 850 nm. The infrared observations continue a long series of spacecraft observations of Mars, including the Mariner 6/7 Infrared Spectrometer, the Mariner 9 Infrared Interferometer Spectrometer (IRIS), the Viking Infrared Thermal Mapper (IRTM) investigations, the Phobos Termoscan, and the Mars Global Surveyor Thermal Emission Spectrometer (MGS TES). The THEMIS investigation's specific objectives are to: (1) determine the mineralogy of localized deposits associated with hydrothermal or sub-aqueous environments, and to identify future landing sites likely to represent these environments; (2) search for thermal anomalies associated with active sub-surface hydrothermal systems; (3) study small-scale geologic processes and landing site characteristics using morphologic and thermophysical properties; (4) investigate polar cap processes at all seasons; and (5) provide a high spatial resolution link to the global hyperspectral mineral mapping from the TES investigation. THEMIS provides substantially higher spatial resolution IR multi-spectral images to complement TES hyperspectral (143-band) global mapping, and regional visible imaging at scales intermediate between the Viking and MGS cameras.
Satellite image fusion based on principal component analysis and high-pass filtering.
Metwalli, Mohamed R; Nasr, Ayman H; Allah, Osama S Farag; El-Rabaie, S; Abd El-Samie, Fathi E
2010-06-01
This paper presents an integrated method for the fusion of satellite images. Several commercial earth observation satellites carry dual-resolution sensors, which provide high spatial resolution or simply high-resolution (HR) panchromatic (pan) images and low-resolution (LR) multi-spectral (MS) images. Image fusion methods are therefore required to integrate a high-spectral-resolution MS image with a high-spatial-resolution pan image to produce a pan-sharpened image with high spectral and spatial resolutions. Some image fusion methods such as the intensity, hue, and saturation (IHS) method, the principal component analysis (PCA) method, and the Brovey transform (BT) method provide HR MS images, but with low spectral quality. Another family of image fusion methods, such as the high-pass-filtering (HPF) method, operates on the basis of the injection of high frequency components from the HR pan image into the MS image. This family of methods provides less spectral distortion. In this paper, we propose the integration of the PCA method and the HPF method to provide a pan-sharpened MS image with superior spatial resolution and less spectral distortion. The experimental results show that the proposed fusion method retains the spectral characteristics of the MS image and, at the same time, improves the spatial resolution of the pan-sharpened image.
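One generic way to integrate the two families, injecting the high-pass detail of the pan image into the first principal component of the resampled MS image, is sketched below. It is an illustration of the idea under an assumed gain-matching rule, not the authors' exact formulation.

```python
# Hedged sketch of a combined PCA + high-pass-filter fusion step.
import numpy as np
from scipy.ndimage import uniform_filter

def pca_hpf_fuse(ms_up, pan, box=5):
    """ms_up: (H, W, B) MS image resampled to the pan grid; pan: (H, W)."""
    h, w, b = ms_up.shape
    X = ms_up.reshape(-1, b).astype(np.float64)
    mean = X.mean(axis=0)
    Xc = X - mean
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    vecs = vecs[:, ::-1]                         # descending-variance order
    pcs = Xc @ vecs
    # High-pass detail of the pan image (box filter as a simple low-pass).
    detail = pan.astype(np.float64) - uniform_filter(pan.astype(np.float64), size=box)
    pc1 = pcs[:, 0].reshape(h, w)
    gain = pc1.std() / (detail.std() + 1e-12)    # crude amplitude matching
    pcs[:, 0] = (pc1 + gain * detail).ravel()
    fused = pcs @ vecs.T + mean                  # back to spectral space
    return fused.reshape(h, w, b)
```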
NASA Astrophysics Data System (ADS)
Kingfield, D.; de Beurs, K.
2014-12-01
It has been demonstrated through various case studies that multispectral satellite imagery can be utilized in the identification of damage caused by a tornado through the change detection process. This process involves the difference in returned surface reflectance between two images and is often summarized through a variety of ratio-based vegetation indices (VIs). Land cover type plays a large contributing role in the change detection process as the reflectance properties of vegetation can vary based on several factors (e.g. species, greenness, density). Consequently, this provides the possibility for a variable magnitude of loss, making certain land cover regimes less reliable in the damage identification process. Furthermore, the tradeoff between sensor resolution and orbital return period may also play a role in the ability to detect catastrophic loss. Moderate resolution imagery (e.g. Moderate Resolution Imaging Spectroradiometer (MODIS)) provides relatively coarse surface detail with a higher update rate, which could hinder the identification of small regions that underwent a dynamic change. Alternatively, imagery with higher spatial resolution (e.g. Landsat) has a longer temporal return period between successive images, which could result in natural recovery leading to an underestimate of the absolute magnitude of damage incurred. This study evaluates the role of land cover type and sensor resolution on four high-end (EF3+) tornado events occurring in four different land cover groups (agriculture, forest, grassland, urban) in the spring season. The closest successive clear images from both Landsat 5 and MODIS are quality controlled for each case. Transects of surface reflectance across a homogeneous land cover type both inside and outside the damage swath are extracted. These transects are synthesized through the calculation of six different VIs to rank the resulting change metrics by land cover type, sensor resolution and VI.
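The change metric implied here can be illustrated with a short NDVI-differencing sketch; band ordering, the land cover mask, and the sign convention are assumptions for illustration.

```python
# Hedged sketch: NDVI computed before and after an event and differenced,
# restricted to one land cover class via a boolean mask.
import numpy as np

def ndvi(red, nir):
    return (nir - red) / (nir + red + 1e-12)

def ndvi_change(red_pre, nir_pre, red_post, nir_post, landcover_mask):
    """Positive values indicate vegetation loss within the masked class."""
    d = ndvi(red_pre, nir_pre) - ndvi(red_post, nir_post)
    return np.where(landcover_mask, d, np.nan)
```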
Landsat multispectral sharpening using a sensor system model and panchromatic image
Lemeshewsky, G.P.; ,
2003-01-01
The thematic mapper (TM) sensor aboard Landsats 4 and 5 and the enhanced TM plus (ETM+) on Landsat 7 collect imagery at 30-m sample distance in six spectral bands. New with ETM+ is a 15-m panchromatic (P) band. With image sharpening techniques, this higher resolution P data, or as an alternative, the 10-m (or 5-m) P data of the SPOT satellite, can increase the spatial resolution of the multispectral (MS) data. Sharpening requires that the lower resolution MS image be coregistered and resampled to the P data before high spatial frequency information is transferred to the MS data. For visual interpretation and machine classification tasks, it is important that the sharpened data preserve the spectral characteristics of the original low resolution data. A technique was developed for sharpening (in this case, 3:1 spatial resolution enhancement) visible spectral band data, based on a model of the sensor system point spread function (PSF) in order to maintain spectral fidelity. It combines high-pass (HP) filter sharpening methods with iterative image restoration to reduce degradations caused by sensor-system-induced blurring and resampling. There is also a spectral fidelity requirement: the sharpened MS data, when filtered by the modeled degradations, should reproduce the low resolution source MS data. Quantitative evaluation of sharpening performance was made by using simulated low resolution data generated from digital color-IR aerial photography. In comparison to the HP-filter-based sharpening method, results for the technique in this paper with simulated data show improved spectral fidelity. Preliminary results with TM 30-m visible band data sharpened with simulated 10-m panchromatic data are promising but require further study.
The High Resolution Stereo Camera (HRSC): 10 Years of Imaging Mars
NASA Astrophysics Data System (ADS)
Jaumann, R.; Neukum, G.; Tirsch, D.; Hoffmann, H.
2014-04-01
The HRSC Experiment: Imagery is the major source for our current understanding of the geologic evolution of Mars in qualitative and quantitative terms. Imaging is required to enhance our knowledge of Mars with respect to geological processes occurring on local, regional and global scales and is an essential prerequisite for detailed surface exploration. The High Resolution Stereo Camera (HRSC) of ESA's Mars Express Mission (MEx) is designed to simultaneously map the morphology, topography, structure and geologic context of the surface of Mars as well as atmospheric phenomena [1]. The HRSC directly addresses two of the main scientific goals of the Mars Express mission: (1) High-resolution three-dimensional photogeologic surface exploration and (2) the investigation of surface-atmosphere interactions over time; and significantly supports: (3) the study of atmospheric phenomena by multi-angle coverage and limb sounding as well as (4) multispectral mapping by providing high-resolution three-dimensional color context information. In addition, the stereoscopic imagery will especially characterize landing sites and their geologic context [1]. The HRSC surface resolution and the digital terrain models bridge the gap in scales between highest ground resolution images (e.g., HiRISE) and global coverage observations (e.g., Viking). This is also the case with respect to DTMs (e.g., MOLA and local high-resolution DTMs). HRSC is also used as a cartographic basis to correlate between panchromatic and multispectral stereo data. The unique multi-angle imaging technique of the HRSC supports its stereo capability by providing not only a stereo triplet but also a stereo quintuplet, making the photogrammetric processing very robust [1, 3]. The capabilities for three-dimensional orbital reconnaissance of the Martian surface are ideally met by HRSC, making this camera unique in the international Mars exploration effort.
Demosaicking for full motion video 9-band SWIR sensor
NASA Astrophysics Data System (ADS)
Kanaev, Andrey V.; Rawhouser, Marjorie; Kutteruf, Mary R.; Yetzbacher, Michael K.; DePrenger, Michael J.; Novak, Kyle M.; Miller, Corey A.; Miller, Christopher W.
2014-05-01
Short wave infrared (SWIR) spectral imaging systems are vital for Intelligence, Surveillance, and Reconnaissance (ISR) applications because of their abilities to autonomously detect targets and classify materials. Typically the spectral imagers are incapable of providing Full Motion Video (FMV) because of their reliance on line scanning. We enable FMV capability for a SWIR multi-spectral camera by creating a repeating pattern of 3x3 spectral filters on a staring focal plane array (FPA). In this paper we present the imagery from an FMV SWIR camera with nine discrete bands and discuss the image processing algorithms necessary for its operation. The main task of image processing in this case is demosaicking of the spectral bands, i.e., reconstructing full spectral images with the original FPA resolution from the spatially subsampled and incomplete spectral data acquired with the chosen filter array pattern. To the best of the authors' knowledge, demosaicking algorithms for nine or more equally sampled bands have not been reported before. Moreover, all existing algorithms developed for demosaicking visible color filter arrays with fewer than nine colors assume either certain relationships between the visible colors, which are not valid for SWIR imaging, or the presence of one color band with a higher sampling rate compared to the rest of the bands, which does not conform to our spectral filter pattern. We will discuss and present results for two novel approaches to demosaicking: interpolation using multi-band edge information and application of multi-frame super-resolution to single-frame resolution enhancement of multi-spectral spatially multiplexed images.
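As a point of reference for the demosaicking problem described above, the baseline below simply interpolates each of the nine sparsely sampled bands back to full FPA resolution; the paper's edge-guided and super-resolution approaches are more sophisticated than this. The 3×3 tile layout assumed here is illustrative.

```python
# Hedged sketch: naive per-band interpolation demosaicking for a repeating
# 3x3 spectral filter array on a staring FPA.
import numpy as np
from scipy.interpolate import griddata

def demosaic_3x3(raw):
    """raw: (H, W) mosaicked frame; returns an (H, W, 9) interpolated cube."""
    H, W = raw.shape
    yy, xx = np.mgrid[0:H, 0:W]
    cube = np.zeros((H, W, 9), dtype=np.float64)
    for b in range(9):
        r, c = divmod(b, 3)                          # band position in the 3x3 tile
        mask = (yy % 3 == r) & (xx % 3 == c)
        pts = np.column_stack([yy[mask], xx[mask]])
        vals = raw[mask].astype(np.float64)
        # Linear interpolation of the sparse samples; fill_value handles the
        # pixels outside the convex hull of the sample locations.
        cube[:, :, b] = griddata(pts, vals, (yy, xx), method="linear",
                                 fill_value=vals.mean())
    return cube
```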
Multispectral image sharpening using wavelet transform techniques and spatial correlation of edges
Lemeshewsky, George P.; Schowengerdt, Robert A.
2000-01-01
Several reported image fusion or sharpening techniques are based on the discrete wavelet transform (DWT). The technique described here uses a pixel-based maximum selection rule to combine respective transform coefficients of lower spatial resolution near-infrared (NIR) and higher spatial resolution panchromatic (pan) imagery to produce a sharpened NIR image. Sharpening assumes a radiometric correlation between the spectral band images. However, there can be poor correlation, including edge contrast reversals (e.g., at soil-vegetation boundaries), between the fused images and, consequently, degraded performance. To improve sharpening, a local area-based correlation technique originally reported for edge comparison with image pyramid fusion is modified for application with the DWT process. Further improvements are obtained by using redundant, shift-invariant implementation of the DWT. Example images demonstrate the improvements in NIR image sharpening with higher resolution pan imagery.
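A bare-bones version of the pixel-based maximum selection rule on DWT detail coefficients is sketched below with PyWavelets; the shift-invariant transform and the local correlation-based refinement that the abstract describes are not included, and the wavelet choice is an assumption.

```python
# Hedged sketch: single-level DWT fusion of a resampled NIR band and a
# higher-resolution pan band using a maximum-magnitude selection rule on
# the detail coefficients.
import numpy as np
import pywt

def dwt_max_fuse(nir_up, pan, wavelet="db4"):
    """nir_up and pan must share the same (pan-resolution) grid."""
    cA_n, (cH_n, cV_n, cD_n) = pywt.dwt2(nir_up, wavelet)
    cA_p, (cH_p, cV_p, cD_p) = pywt.dwt2(pan, wavelet)
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    # Keep the NIR approximation, take the larger-magnitude detail coefficients.
    fused = (cA_n, (pick(cH_n, cH_p), pick(cV_n, cV_p), pick(cD_n, cD_p)))
    return pywt.idwt2(fused, wavelet)
```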
High-quality infrared imaging with graphene photodetectors at room temperature.
Guo, Nan; Hu, Weida; Jiang, Tao; Gong, Fan; Luo, Wenjin; Qiu, Weicheng; Wang, Peng; Liu, Lu; Wu, Shiwei; Liao, Lei; Chen, Xiaoshuang; Lu, Wei
2016-09-21
Graphene, a two-dimensional material, is expected to enable broad-spectrum and high-speed photodetection because of its gapless band structure, ultrafast carrier dynamics and high mobility. We demonstrate a multispectral active infrared imaging by using a graphene photodetector based on hybrid response mechanisms at room temperature. The high-quality images with optical resolutions of 418 nm, 657 nm and 877 nm and close-to-theoretical-limit Michelson contrasts of 0.997, 0.994, and 0.996 have been acquired for 565 nm, 1550 nm, and 1815 nm light imaging measurements by using an unbiased graphene photodetector, respectively. Importantly, by carefully analyzing the results of Raman mapping and numerical simulations for the response process, the formation of hybrid photocurrents in graphene detectors is attributed to the synergistic action of photovoltaic and photo-thermoelectric effects. The initial application to infrared imaging will help promote the development of high performance graphene-based infrared multispectral detectors.
Wavelet compression techniques for hyperspectral data
NASA Technical Reports Server (NTRS)
Evans, Bruce; Ringer, Brian; Yeates, Mathew
1994-01-01
Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. In the first two cases, a wavelet transform coder was used for the two-dimensional compression. The third case used a three dimensional extension of this same algorithm.
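The second approach, spectral decorrelation followed by independent 2-D wavelet compression of each component image, can be sketched as follows. The PCA decorrelation and simple coefficient thresholding stand in for the study's actual transform and coder, and quantization/entropy coding are omitted.

```python
# Hedged sketch: decorrelate the spectral dimension of a hyperspectral cube,
# then wavelet-compress each component image independently by keeping only
# the largest coefficients.
import numpy as np
import pywt

def spectral_decorrelate(cube):
    """cube: (H, W, B) -> (components (H, W, B), basis Vt, band means)."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return ((X - mean) @ Vt.T).reshape(h, w, b), Vt, mean

def wavelet_threshold(img, wavelet="db4", level=3, keep=0.05):
    """Keep roughly the largest `keep` fraction of wavelet coefficients."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1 - keep)
    arr = np.where(np.abs(arr) >= thresh, arr, 0.0)
    return pywt.waverec2(
        pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), wavelet)
```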
Object-oriented recognition of high-resolution remote sensing image
NASA Astrophysics Data System (ADS)
Wang, Yongyan; Li, Haitao; Chen, Hong; Xu, Yuannan
2016-01-01
With the development of remote sensing imaging technology and the improvement in the resolution of multi-source visible, multispectral and hyperspectral satellite images, high resolution remote sensing imagery has been widely used in various fields, for example the military, surveying and mapping, geophysical prospecting, and the environment. In remote sensing images, the segmentation of ground targets, feature extraction and automatic recognition are hotspots and difficulties in modern information technology research. This paper presents an object-oriented remote sensing image scene classification method. The method consists of the generation of typical object (vehicle) classes, nonparametric density estimation, mean shift segmentation, a multi-scale corner detection algorithm, and template-based local shape matching. A remote sensing vehicle image classification software system was designed and implemented to meet these requirements.
An integrated approach for updating cadastral maps in Pakistan using satellite remote sensing data
NASA Astrophysics Data System (ADS)
Ali, Zahir; Tuladhar, Arbind; Zevenbergen, Jaap
2012-08-01
Updating cadastral information is crucial for recording land ownership and property division changes in a timely manner. In most cases, the existing cadastral maps do not provide up-to-date information on land parcel boundaries. Such a situation demands that all the cadastral data and parcel boundary information in these maps be updated in a timely fashion. The existing techniques for acquiring cadastral information are discipline-oriented, based on different disciplines such as geodesy, surveying, and photogrammetry. All these techniques require substantial manpower, time, and cost when they are carried out separately. There is a need to integrate these techniques for acquiring cadastral information to update the existing cadastral data and (re)produce cadastral maps in an efficient manner. To reduce the time and cost involved in cadastral data acquisition, this study develops an integrated approach combining global positioning system (GPS) data, remote sensing (RS) imagery, and existing cadastral maps. For this purpose, the panchromatic image with 0.6 m spatial resolution and the corresponding multi-spectral image with 2.4 m spatial resolution and 3 spectral bands from the QuickBird satellite were used. A digital elevation model (DEM) was extracted from SPOT-5 stereopairs, and some ground control points (GCPs) were also used for ortho-rectifying the QuickBird images. After ortho-rectifying these images and registering the multi-spectral image to the panchromatic image, fusion between them was performed to obtain good quality multi-spectral images of the two study areas with 0.6 m spatial resolution. Cadastral parcel boundaries were then identified on the QuickBird images of the two study areas via visual interpretation using a participatory-GIS (PGIS) technique. The regions of study are the urban and rural areas of the Peshawar and Swabi districts in the Khyber Pakhtunkhwa province of Pakistan. The result is a set of updated cadastral maps containing rich cadastral information, which can be used to update the existing cadastral data with less time and cost.
Geo-oculus: high resolution multi-spectral earth imaging mission from geostationary orbit
NASA Astrophysics Data System (ADS)
Vaillon, L.; Schull, U.; Knigge, T.; Bevillon, C.
2017-11-01
Geo-Oculus is a GEO-based Earth observation mission studied by Astrium for ESA in 2008-2009 to complement the Sentinel missions, the space component of GMES (Global Monitoring for Environment & Security). Earth imaging from geostationary orbit offers functionalities not covered by existing LEO observation missions, such as real-time monitoring and fast revisit of any location within the large area visible to the satellite. This high revisit capability is exploited by the Meteosat meteorological satellites, but with a spatial resolution (500 m at nadir for the third generation) far coarser than most GMES needs (10 to 100 m). To reach such ground resolution from GEO with adequate image quality, large-aperture instruments (> 1 m) and high pointing stability (<< 1 μrad) are required, which are the major challenges of such missions. To address the requirements of the GMES user community, the Geo-Oculus mission combines routine observations (daily systematic coverage of European coastal waters) with "on-demand" observations for event monitoring (e.g., disasters, fires and oil slicks). The instrument is a large-aperture imaging telescope (1.5 m diameter) offering a nadir spatial sampling of 10.5 m (21 m worst case over Europe, below 52.5°N) in a PAN visible channel used for disaster monitoring. The 22 multi-spectral channels have resolutions over Europe ranging from 40 m in the UV/VNIR (0.3 to 1 μm) to 750 m in the TIR (10-12 μm).
Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.
2002-01-01
Enhanced false color images from mid-IR, near-IR (NIR), and visible bands of the Landsat Thematic Mapper (TM) are commonly used for visually interpreting land cover type. Described here is a technique for sharpening, or fusion, of NIR with higher resolution panchromatic (Pan) imagery that uses a shift-invariant implementation of the discrete wavelet transform (SIDWT) and a previously reported pixel-based selection rule to combine coefficients. Contrast reversals can occur (e.g., at soil-vegetation boundaries between NIR and visible band images), with consequently degraded sharpening and edge artifacts. To improve performance for these conditions, I used a local area-based correlation technique, originally reported for comparing image-pyramid-derived edges, for the adaptive processing of wavelet-derived edge data. Using the redundant data of the SIDWT also improves edge data generation, and there is additional improvement because sharpened subband imagery is used with the edge-correlation process. A previously reported technique for sharpening three-band spectral imagery used forward and inverse intensity, hue, and saturation transforms and wavelet-based sharpening of intensity. That technique had limitations with opposite-contrast data, and in this study sharpening was applied to single-band multispectral-Pan image pairs. Sharpening used simulated 30-m NIR imagery produced by degrading the spatial resolution of a higher resolution reference. Performance, evaluated by comparison between the sharpened and reference images, improved when sharpened subband data were used with the edge correlation.
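As a rough illustration of this style of fusion (not the author's exact algorithm), the sketch below applies a shift-invariant (stationary) wavelet transform to a co-registered NIR/Pan pair and combines detail coefficients with a simple pixel-wise maximum-absolute selection rule; the wavelet, decomposition depth, and selection rule are assumptions.

```python
# Sketch: SIDWT-style fusion of a low-resolution NIR band with a high-resolution
# Pan band, using a pixel-wise max-absolute-coefficient selection rule.
# Assumes `nir` and `pan` are co-registered 2-D numpy arrays of equal shape
# whose side lengths are divisible by 2**LEVELS (a requirement of pywt.swt2).
import numpy as np
import pywt

LEVELS = 2
WAVELET = "db2"

def sidwt_fuse(nir, pan):
    c_nir = pywt.swt2(nir.astype(float), WAVELET, level=LEVELS)
    c_pan = pywt.swt2(pan.astype(float), WAVELET, level=LEVELS)
    fused = []
    for (a_n, (h_n, v_n, d_n)), (a_p, (h_p, v_p, d_p)) in zip(c_nir, c_pan):
        # Keep the NIR approximation (spectral content); select the detail
        # coefficient with the larger magnitude (spatial content).
        details = tuple(np.where(np.abs(dn) >= np.abs(dp), dn, dp)
                        for dn, dp in ((h_n, h_p), (v_n, v_p), (d_n, d_p)))
        fused.append((a_n, details))
    return pywt.iswt2(fused, WAVELET)
```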
Land use change detection based on multi-date imagery from different satellite sensor systems
NASA Technical Reports Server (NTRS)
Stow, Douglas A.; Collins, Doretta; Mckinsey, David
1990-01-01
An empirical study is conducted to assess the accuracy of land use change detection using satellite image data acquired ten years apart by sensors with differing spatial resolutions. The primary goals of the investigation were to (1) compare standard change detection methods applied to image data of varying spatial resolution, (2) assess whether to transform the raster grid of the higher resolution image data to that of the lower resolution raster grid or vice versa in the registration process, and (3) determine whether Landsat/Thematic Mapper or SPOT/High Resolution Visible multispectral data provide more accurate detection of land use changes when registered to historical Landsat/MSS data. It is concluded that image ratioing of multisensor, multidate satellite data produced higher change detection accuracies than did principal components analysis, and that it is useful as a land use change enhancement method.
Multispectral high-resolution hologram generation using orthographic projection images
NASA Astrophysics Data System (ADS)
Muniraj, I.; Guo, C.; Sheridan, J. T.
2016-08-01
We present a new method of synthesizing a digital hologram of three-dimensional (3D) real-world objects from multiple orthographic projection images (OPIs). High-resolution multiple perspectives of the 3D objects (i.e., a two-dimensional elemental image array) are captured under incoherent white light using the synthetic aperture integral imaging (SAII) technique, and their OPIs are obtained. The reference beam is then multiplied with the corresponding OPI and integrated to form a Fourier hologram. Finally, a modified phase retrieval algorithm (GS/HIO) is applied to reconstruct the hologram. The principle is validated experimentally, and the results support the feasibility of the proposed method.
Estimating atmospheric parameters and reducing noise for multispectral imaging
Conger, James Lynn
2014-02-25
A method and system for estimating atmospheric radiance and transmittance. An atmospheric estimation system is divided into a first phase and a second phase. The first phase inputs an observed multispectral image and an initial estimate of the atmospheric radiance and transmittance for each spectral band and calculates the atmospheric radiance and transmittance for each spectral band, which can be used to generate a "corrected" multispectral image that is an estimate of the surface multispectral image. The second phase inputs the observed multispectral image and the surface multispectral image generated by the first phase and removes noise from the surface multispectral image by smoothing out changes in the average deviations of temperatures.
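In the spirit of the first phase described above, a minimal sketch under the usual linear radiative-transfer assumption (not the patented procedure itself) recovers an estimated surface image by inverting the observed radiance with the per-band path radiance and transmittance:

```python
# Sketch: recover an estimated surface image from an observed multispectral
# image given per-band atmospheric (path) radiance and transmittance.
# Assumes the simple linear model  L_obs = t * L_surf + L_path  for each band.
import numpy as np

def correct_image(observed, path_radiance, transmittance, eps=1e-6):
    """observed: (bands, rows, cols); path_radiance, transmittance: (bands,)."""
    t = np.maximum(np.asarray(transmittance, dtype=float), eps)[:, None, None]
    lp = np.asarray(path_radiance, dtype=float)[:, None, None]
    return (observed - lp) / t
```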
A new hyperspectral image compression paradigm based on fusion
NASA Astrophysics Data System (ADS)
Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto
2016-10-01
The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on the satellite that carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware, with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the ground, where they must be fused, using a fusion algorithm for hyperspectral and multispectral images, in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple, while the fusion process used to reconstruct the image is the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image. The results obtained corroborate the benefits of the proposed methodology.
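A minimal sketch of the two on-board degradation steps (the block-averaging operators, downsampling factor, and band grouping used here are illustrative assumptions, not the authors' exact operators):

```python
# Sketch: produce the two compressed products described above from a
# hyperspectral cube `cube` of shape (bands, rows, cols):
#   1) a spatially degraded, full-spectral-resolution image, and
#   2) a spectrally degraded, full-spatial-resolution multispectral image.
import numpy as np

def spatial_degrade(cube, factor=4):
    bands, rows, cols = cube.shape
    r, c = rows // factor * factor, cols // factor * factor
    blocks = cube[:, :r, :c].reshape(bands, r // factor, factor, c // factor, factor)
    return blocks.mean(axis=(2, 4))          # low-resolution hyperspectral image

def spectral_degrade(cube, n_ms_bands=8):
    groups = np.array_split(np.arange(cube.shape[0]), n_ms_bands)
    return np.stack([cube[g].mean(axis=0) for g in groups])  # high-res multispectral

def compression_ratio(cube, factor=4, n_ms_bands=8):
    original = cube.size
    compressed = spatial_degrade(cube, factor).size + spectral_degrade(cube, n_ms_bands).size
    return original / compressed             # fixed in advance by factor and n_ms_bands
```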
Increasing the UAV data value by an OBIA methodology
NASA Astrophysics Data System (ADS)
García-Pedrero, Angel; Lillo-Saavedra, Mario; Rodriguez-Esparragon, Dionisio; Rodriguez-Gonzalez, Alejandro; Gonzalo-Martin, Consuelo
2017-10-01
Recently, there has been a noteworthy increase in the use of images registered by unmanned aerial vehicles (UAVs) in different remote sensing applications. Sensors carried on UAVs have lower operational costs and complexity than other remote sensing platforms, quicker turnaround times, as well as higher spatial resolution. Concerning this last aspect, particular attention has to be paid to the limitations of classical pixel-based algorithms when they are applied to high resolution images. The objective of this study is to investigate the capability of an OBIA methodology developed for the automatic generation of a digital terrain model of an agricultural area from a Digital Elevation Model (DEM) and multispectral images registered by a Parrot Sequoia multispectral sensor on board an eBee SQ agricultural drone. The proposed methodology uses a superpixel approach to obtain context and elevation information, which is used for merging superpixels and, at the same time, eliminating objects such as trees in order to generate a Digital Terrain Model (DTM) of the analyzed area. The results obtained show the potential of the approach, in terms of accuracy, when compared with a DTM generated by manually eliminating objects.
NASA Technical Reports Server (NTRS)
Krohn, M. Dennis
1986-01-01
The U.S. Geological Survey (USGS) acquired airborne Thermal Infrared Multispectral Scanner (TIMS) images over several disseminated gold deposits in northern Nevada in 1983. The aerial surveys were flown to determine whether TIMS data could depict jasperoids (siliceous replacement bodies) associated with the gold deposits. The TIMS data were collected over the Pinson and Getchell Mines in the Osgood Mountains, the Carlin, Maggie Creek, Bootstrap, and other mines in the Tuscarora Mountains, and the Jerritt Canyon Mine in the Independence Mountains. The TIMS data seem to be a useful supplement to conventional geochemical exploration for disseminated gold deposits in the western United States. Siliceous outcrops are readily separable in the TIMS image from other types of host rocks. Different forms of silicification are not yet readily separable, due to limitations of spatial resolution and spectral dynamic range. Features associated with the disseminated gold deposits, such as the large intrusive bodies and fault structures, are also resolvable on TIMS data. Inclusion of high-resolution thermal inertia data would be a useful supplement to the TIMS data.
An Improved Pansharpening Method for Misaligned Panchromatic and Multispectral Data.
Li, Hui; Jing, Linhai; Tang, Yunwei; Ding, Haifeng
2018-02-11
Numerous pansharpening methods have been proposed in recent decades for fusing low-spatial-resolution multispectral (MS) images with high-spatial-resolution (HSR) panchromatic (PAN) bands to produce fused HSR MS images, which are widely used in various remote sensing tasks. The effect of misregistration between MS and PAN bands on the quality of fused products has gained much attention in recent years. An improved method for misaligned MS and PAN imagery is proposed, based on two improvements made to a previously published method named RMI (reduce misalignment impact). The performance of the proposed method was assessed by comparison with some outstanding fusion methods, such as adaptive Gram-Schmidt and the generalized Laplacian pyramid. Experimental results show that the improved version can reduce spectral distortions of fused dark pixels and sharpen boundaries between different image objects, while obtaining quality indexes similar to those of the original RMI method. In addition, the proposed method was evaluated with respect to its sensitivity to misalignments between MS and PAN bands, and it was confirmed to be more robust to such misalignments than the other methods.
An integrated compact airborne multispectral imaging system using embedded computer
NASA Astrophysics Data System (ADS)
Zhang, Yuedong; Wang, Li; Zhang, Xuguo
2015-08-01
An integrated compact airborne multispectral imaging system using an embedded-computer-based control system was developed for small-aircraft multispectral imaging applications. The multispectral imaging system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system) and an embedded computer. The embedded computer has excellent universality and expansibility, and has advantages in volume and weight for an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls the camera parameter settings, the operation of the filter wheel and stabilized platform, and the acquisition and storage of image and POS data. The airborne multispectral imaging system can connect peripheral devices through the ports of the embedded computer, so system operation and management of the stored image data are easy. This airborne multispectral imaging system has the advantages of small volume, multiple functions, and good expansibility. The imaging experiment results show that this system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.
Zhang, Dongyan; Zhou, Xingen; Zhang, Jian; Lan, Yubin; Xu, Chao; Liang, Dong
2018-01-01
Detection and monitoring are the first essential steps for effective management of sheath blight (ShB), a major disease of rice worldwide. Unmanned aerial systems have high potential for improving this detection process, since they can reduce the time needed for scouting for the disease at a field scale and are affordable and user-friendly in operation. In this study, a commercial quadrotor unmanned aerial vehicle (UAV), equipped with digital and multispectral cameras, was used to capture imagery data of research plots with 67 rice cultivars and elite lines. Collected imagery data were then processed and analyzed to characterize the development of ShB and quantify different levels of the disease in the field. Through color feature extraction and color space transformation of the images, it was found that the color transformation could qualitatively detect the infected areas of ShB in the field plots, but it was less effective at detecting different levels of the disease. Five vegetation indices were then calculated from the multispectral images, and ground truths of disease severity and GreenSeeker-measured NDVI (Normalized Difference Vegetation Index) were collected. The results of relationship analyses indicate that there was a strong correlation between ground-measured NDVIs and image-extracted NDVIs, with an R2 of 0.907 and a root mean square error (RMSE) of 0.0854, and a good correlation between image-extracted NDVIs and disease severity, with an R2 of 0.627 and an RMSE of 0.0852. Use of image-based NDVIs extracted from multispectral images could quantify different levels of ShB in the field plots with an accuracy of 63%. These results demonstrate that a consumer-grade UAV integrated with digital and multispectral cameras can be an effective tool for detecting the ShB disease at a field scale.
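For reference, the image-based NDVI used in analyses like this is computed per pixel from the red and near-infrared bands; a minimal sketch (band scaling and helper names are assumptions):

```python
# Sketch: compute a per-pixel NDVI map from red and near-infrared reflectance
# arrays and compare plot-averaged values against ground measurements.
import numpy as np

def ndvi(red, nir, eps=1e-9):
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / (nir + red + eps)

def r2_rmse(predicted, observed):
    predicted = np.asarray(predicted, float)
    observed = np.asarray(observed, float)
    resid = observed - predicted
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot, np.sqrt(np.mean(resid ** 2))
```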
Fusion and quality analysis for remote sensing images using contourlet transform
NASA Astrophysics Data System (ADS)
Choi, Yoonsuk; Sharifahmadian, Ershad; Latifi, Shahram
2013-05-01
Recent developments in remote sensing technologies have provided various images with high spatial and spectral resolutions. However, multispectral images have low spatial resolution and panchromatic images have low spectral resolution. Therefore, image fusion techniques are necessary to improve the spatial resolution of spectral images by injecting spatial details of high-resolution panchromatic images. The objective of image fusion is to provide useful information by improving the spatial resolution and the spectral information of the original images. The fusion results can be utilized in various applications, such as military, medical imaging, and remote sensing. This paper addresses two issues in image fusion: i) image fusion method and ii) quality analysis of fusion results. First, a new contourlet-based image fusion method is presented, which is an improvement over the wavelet-based fusion. This fusion method is then applied to a case study to demonstrate its fusion performance. Fusion framework and scheme used in the study are discussed in detail. Second, quality analysis for the fusion results is discussed. We employed various quality metrics in order to analyze the fusion results both spatially and spectrally. Our results indicate that the proposed contourlet-based fusion method performs better than the conventional wavelet-based fusion methods.
Multispectral imaging determination of pigment concentration profiles in meat
NASA Astrophysics Data System (ADS)
Sáenz Gamasa, Carlos; Hernández Salueña, Begoña; Alberdi Odriozola, Coro; Alfonso Ábrego, Santiago; Berrogui Arizu, Miguel; Diñeiro Rubial, José Manuel
2006-01-01
The possibility of using multispectral techniques to determine the concentration profiles of myoglobin derivatives as a function of the distance to the meat surface during meat oxygenation is demonstrated. Reduced myoglobin (Mb), oxygenated oxymyoglobin (MbO2), and oxidized metmyoglobin (MMb) concentration profiles are determined with a spatial resolution better than 0.01235 mm/pixel. Pigment concentrations are calculated using (K/S) ratios at isosbestic points (474, 525, 572 and 610 nm) of the three forms of myoglobin pigments. This technique greatly improves on previous methods, based on visual determination of pigment layers by their color, which allowed only estimates of pigment layer position and width. The multispectral technique avoids observer- and illumination-related bias in the pigment layer determination.
NASA Astrophysics Data System (ADS)
Zhang, Yuhuan; Li, Zhengqiang; Zhang, Ying; Hou, Weizhen; Xu, Hua; Chen, Cheng; Ma, Yan
2014-01-01
The Geostationary Ocean Color Imager (GOCI) provides multispectral imagery of the East Asia region hourly from 9:00 to 16:00 local time (GMT+9) and collects multispectral imagery at eight spectral channels (412, 443, 490, 555, 660, 680, 745, and 865 nm) with a spatial resolution of 500 m. Thus, this technology brings significant advantages to high temporal resolution environmental monitoring. We present the retrieval of aerosol optical depth (AOD) in northern China based on GOCI data. Cross-calibration was performed against Moderate Resolution Imaging Spectroradiometer (MODIS) data in order to correct the land calibration bias of the GOCI sensor. AOD retrievals were then accomplished using a look-up table (LUT) strategy with assumptions of a quickly varying aerosol and a slowly varying surface with time. The AOD retrieval algorithm calculates AOD by minimizing the surface reflectance variations of a series of observations in a short period of time, such as several days. The monitoring of hourly AOD variations was implemented, and the retrieved AOD agreed well with AErosol RObotic NETwork (AERONET) ground-based measurements with a good R2 of approximately 0.74 at validation sites at the cities of Beijing and Xianghe, although intercept bias may be high in specific cases. The comparisons with MODIS products also show a good agreement in AOD spatial distribution. This work suggests that GOCI imagery can provide high temporal resolution monitoring of atmospheric aerosols over land, which is of great interest in climate change studies and environmental monitoring.
Cassini Imaging Science: First Results at Saturn
NASA Astrophysics Data System (ADS)
Porco, C. C.
The Cassini Imaging Science experiment at Saturn will commence in early February, 2004 -- five months before Cassini's arrival at Saturn. Approach observations consist of repeated multi-spectral `movie' sequences of Saturn and its rings, image sequences designed to search for previously unseen satellites between the outer edge of the ring system and the orbit of Hyperion, images of known satellites for orbit refinement, observations of Phoebe during Cassini's closest approach to the satellite, and repeated multi-spectral `movie' sequences of Titan to detect and track clouds (for wind determination) and to sense the surface. During Saturn Orbit Insertion, the highest resolution images (~ 100 m) obtained during the whole orbital tour will be collected of the dark side of the rings. Finally, imaging sequences are planned for Cassini's first Titan flyby, on July 2, from a distance of ~ 350,000 km, yielding an image scale of ~ 2.1 km on the South polar region. The highlights of these observation sequences will be presented.
Globally scalable generation of high-resolution land cover from multispectral imagery
NASA Astrophysics Data System (ADS)
Stutts, S. Craig; Raskob, Benjamin L.; Wenger, Eric J.
2017-05-01
We present an automated method of generating high resolution (~2 meter) land cover using a pattern recognition neural network trained on spatial and spectral features obtained from over 9000 WorldView multispectral images (MSI) in six distinct world regions. At this resolution, the network can classify small-scale objects such as individual buildings, roads, and irrigation ponds. This paper focuses on three key areas. First, we describe our land cover generation process, which involves the co-registration and aggregation of multiple spatially overlapping MSI, post-aggregation processing, and the registration of land cover to OpenStreetMap (OSM) road vectors using feature correspondence. Second, we discuss the generation of land cover derivative products and their impact in the areas of region reduction and object detection. Finally, we discuss the process of globally scaling land cover generation using cloud computing via Amazon Web Services (AWS).
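A toy sketch of the pixel-level classification step, using scikit-learn's MLPClassifier on synthetic feature vectors (the feature design, class list, and network size are illustrative assumptions, not the production pipeline):

```python
# Sketch: train a small pattern-recognition neural network on per-pixel
# feature vectors (spectral bands plus simple spatial statistics) and predict
# land cover classes such as building, road, water, vegetation.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n_train, n_features = 5000, 12                  # e.g. 8 MSI bands + 4 texture features
X_train = rng.normal(size=(n_train, n_features))
y_train = rng.integers(0, 4, size=n_train)      # 4 hypothetical land cover classes

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300)
clf.fit(X_train, y_train)

X_scene = rng.normal(size=(100 * 100, n_features))   # one 100x100-pixel tile
land_cover = clf.predict(X_scene).reshape(100, 100)
```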
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ban, H. Y.; Kavuri, V. C., E-mail: venk@physics.up
Purpose: The authors introduce a state-of-the-art all-optical clinical diffuse optical tomography (DOT) imaging instrument which collects spatially dense, multispectral, frequency-domain breast data in the parallel-plate geometry. Methods: The instrument utilizes a CCD-based heterodyne detection scheme that permits massively parallel detection of diffuse photon density wave amplitude and phase for a large number of source-detector pairs (10^6). The stand-alone clinical DOT instrument thus offers high spatial resolution with reduced crosstalk between absorption and scattering. Other novel features include a fringe profilometry system for breast boundary segmentation, real-time data normalization, and a patient bed design which permits both axial and sagittal breast measurements. Results: The authors validated the instrument using tissue-simulating phantoms with two different chromophore-containing targets and one scattering target. The authors also demonstrated the instrument in a case study of a breast cancer patient; the reconstructed 3D image of endogenous chromophores and scattering gave tumor localization in agreement with MRI. Conclusions: Imaging with a novel parallel-plate DOT breast imager that employs highly parallel, high-resolution CCD detection in the frequency domain was demonstrated.
NASA Astrophysics Data System (ADS)
Bittel, Amy M.; Saldivar, Isaac S.; Nan, Xiaolin; Gibbs, Summer L.
2016-02-01
Single-molecule localization microscopy (SMLM) utilizes photoswitchable fluorophores to detect biological entities with 10-20 nm resolution. Multispectral superresolution microscopy (MSSRM) extends SMLM functionality by improving its spectral resolution up to five-fold, facilitating imaging of multicomponent cellular structures or signaling pathways. Current commercial fluorophores are not ideal for MSSRM as they are not designed to photoswitch and do not adequately cover the visible and far-red spectral regions required for MSSRM imaging. To obtain optimal MSSRM spatial and spectral resolution, fluorophores with narrow emission spectra and controllable photoswitching properties are necessary. Herein, a library of BODIPY-based fluorophores was synthesized and characterized to create optimal photoswitchable fluorophores for MSSRM. BODIPY was chosen as the core structure as it is photostable, has high quantum yield, and exhibits controllable photoswitching. The BODIPY core was modified through the addition of various aromatic moieties, resulting in a spectrally diverse library. Photoswitching properties were characterized using a novel polyvinyl alcohol (PVA) based film methodology to isolate single molecules. The PVA film methodology enabled photoswitching assessment without the need for protein conjugation, greatly improving the screening efficiency of the BODIPY library. Additionally, image buffer conditions were optimized for the BODIPY-based fluorophores through systematic testing of oxygen scavenger systems, redox components, and additives. By screening the photoswitching properties of BODIPY-based compounds in PVA films with the optimized imaging buffer, we identified novel fluorophores well suited for SMLM and MSSRM.
NASA Astrophysics Data System (ADS)
Bell, J. F.; Fraeman, A. A.; Grossman, L.; Herkenhoff, K. E.; Sullivan, R. J.; Mer/Athena Science Team
2010-12-01
The Mars Exploration Rovers Spirit and Opportunity have enabled more than six and a half years of detailed, in situ field study of two specific landing sites and traverse paths within Gusev crater and Meridiani Planum, respectively. Much of the study has relied on high-resolution, multispectral imaging of fine-grained regolith components--the dust, sand, cobbles, clasts, and other components collectively referred to as "soil"--at both sites using the rovers' Panoramic Camera (Pancam) and Microscopic Imager (MI) imaging systems. As of early September 2010, the Pancam systems have acquired more than 1300 and 1000 "13 filter" multispectral imaging sequences of surfaces in Gusev and Meridiani, respectively, with each sequence consisting of co-located images at 11 unique narrowband wavelengths between 430 nm and 1009 nm and having a maximum spatial resolution of about 500 microns per pixel. The MI systems have acquired more than 5900 and 6500 monochromatic images, respectively, at about 31 microns per pixel scale. Pancam multispectral image cubes are calibrated to radiance factor (I/F, where I is the measured radiance and π*F is the incident solar irradiance) using observations of the onboard calibration targets, and then corrected to relative reflectance (assuming Lambertian photometric behavior) for comparison with laboratory rock and mineral measurements. Specifically, Pancam spectra can be used to detect the possible presence of some iron-bearing minerals (e.g., some ferric oxides/oxyhydroxides and pyroxenes) as well as structural water or OH in some hydrated alteration products, providing important inputs on the choice of targets for more quantitative compositional and mineralogic follow-up using the rover's other in situ and remote sensing analysis tools. Pancam 11-band spectra are being analyzed using a variety of standard as well as specifically-tailored analysis methods, including color ratio and band depth parameterizations, spectral similarity and principal components clustering, and simple visual inspection based on correlations with false color unit boundaries and textural variations seen in both Pancam and MI imaging. Approximately 20 distinct spectral classes of fine-grained surface components were identified at each site based on these methods. In this presentation we describe these spectral classes, their geologic and textural context and distribution based on supporting high-res MI and other Pancam imaging, and their potential compositional/mineralogic interpretations based on a variety of rover data sets.
Development of a multispectral autoradiography using a coded aperture
NASA Astrophysics Data System (ADS)
Noto, Daisuke; Takeda, Tohoru; Wu, Jin; Lwin, Thet T.; Yu, Quanwen; Zeniya, Tsutomu; Yuasa, Tetsuya; Hiranaka, Yukio; Itai, Yuji; Akatsuka, Takao
2000-11-01
Autoradiography is a useful imaging technique for understanding biological functions using tracers that include radioisotopes (RIs). However, it is not easy to describe the distribution of different kinds of tracers simultaneously with conventional autoradiography using X-ray film or imaging plates. Each tracer describes a corresponding biological function; therefore, if we can simultaneously estimate the distribution of different kinds of tracer materials, multispectral autoradiography would be a quite powerful tool for better understanding the physiological mechanisms of organs. We are therefore developing a system using a solid state detector (SSD) with high energy resolution. Here, we introduce an imaging technique with a coded aperture to obtain spatial and spectral information more efficiently. In this paper, the imaging principle is described, and its validity and fundamental properties are discussed through both simulation and phantom experiments with RIs such as 201Tl, 99mTc, 67Ga, and 123I.
NASA Astrophysics Data System (ADS)
Nguyen, Hoang Hai; Tran, Hien; Sunwoo, Wooyeon; Yi, Jong-hyuk; Kim, Dongkyun; Choi, Minha
2017-04-01
A series of multispectral high-resolution Korean Multi-Purpose Satellite (KOMPSAT) images was used to detect geographical changes in four different tidal flats between the Yellow Sea and the west coast of South Korea. Unsupervised classification was used to generate a series of land use/land cover (LULC) maps from the satellite images, which were then used as input for temporal trajectory analysis to detect the temporal change of coastal wetlands and its association with natural and anthropogenic activities. The accurately classified LULC maps of the KOMPSAT images, with overall accuracy ranging from 83.34% to 95.43%, indicate that these multispectral high-resolution satellite data are highly applicable to the generation of high-quality thematic maps for extracting wetlands. The result of the trajectory analysis showed that, while the variation of the tidal flats in the Gyeonggi and Jeollabuk provinces was well correlated with the regular tidal regimes, the reductive trajectory of the wetland areas belonging to the Saemangeum area was caused by a high degree of human-induced activities, including large-scale reclamation and urbanization. The conservation of the Jeungdo Wetland Protected Area in the Jeollanam province revealed that effective social and environmental policies could help protect coastal wetlands from degradation.
NASA Technical Reports Server (NTRS)
Holekamp, Kara; Aaron, David; Thome, Kurtis
2006-01-01
Radiometric calibration of commercial imaging satellite products is required to ensure that science and application communities can better understand their properties. Inaccurate radiometric calibrations can lead to erroneous decisions and invalid conclusions and can limit intercomparisons with other systems. To address this calibration need, satellite at-sensor radiance values were compared to those estimated by each independent team member to determine the sensor's radiometric accuracy. The combined results of this evaluation provide the user community with an independent assessment of these commercially available high spatial resolution sensors' absolute calibration values.
Ground truth spectrometry and imagery of eruption clouds to maximize utility of satellite imagery
NASA Technical Reports Server (NTRS)
Rose, William I.
1993-01-01
Field experiments with thermal imaging infrared radiometers were performed and a laboratory system was designed for controlled study of simulated ash clouds. Using AVHRR (Advanced Very High Resolution Radiometer) thermal infrared bands 4 and 5, a radiative transfer method was developed to retrieve particle sizes, optical depth and particle mass in volcanic clouds. A model was developed for measuring the same parameters using TIMS (Thermal Infrared Multispectral Scanner), MODIS (Moderate Resolution Imaging Spectroradiometer), and ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer). Related publications are attached.
NASA Astrophysics Data System (ADS)
King, Michael D.; Tsay, Si-Chee; Ackerman, Steven A.; Larsen, North F.
1998-12-01
A multispectral scanning spectrometer was used to obtain measurements of the reflection function and brightness temperature of smoke, clouds, and terrestrial surfaces at 50 discrete wavelengths between 0.55 and 14.2 μm. These observations were obtained from the NASA ER-2 aircraft as part of the Smoke, Clouds, and Radiation-Brazil (SCAR-B) campaign, conducted over a 1500×1500 km region of cerrado and rain forest throughout Brazil between August 16 and September 11, 1995. Multispectral images of the reflection function and brightness temperature in 10 distinct bands of the MODIS airborne simulator (MAS) were used to derive a confidence in clear sky (or alternatively the probability of cloud), shadow, fire, and heavy aerosol. In addition to multispectral imagery, monostatic lidar data were obtained along the nadir ground track of the aircraft and used to assess the accuracy of the cloud mask results. This analysis shows that the cloud and aerosol mask being developed for operational use on the moderate-resolution imaging spectroradiometer (MODIS), and tested using MAS data in Brazil, is quite capable of separating cloud, aerosol, shadow, and fires during daytime conditions over land.
NASA Astrophysics Data System (ADS)
Dechesne, Clément; Mallet, Clément; Le Bris, Arnaud; Gouet-Brunet, Valérie
2017-04-01
Forest stands are the basic units for forest inventory and mapping. Stands are defined as large forested areas (e.g., ⩾ 2 ha) of homogeneous tree species composition and age. Their accurate delineation is usually performed by human operators through visual analysis of very high resolution (VHR) infra-red images. This task is tedious and highly time consuming, and should be automated for scalability and efficient updating purposes. In this paper, a method based on the fusion of airborne lidar data and VHR multispectral images is proposed for the automatic delineation of forest stands containing one dominant species (purity greater than 75%). This is the key preliminary task for forest land-cover database updating. The multispectral images give information about the tree species, whereas the 3D lidar point clouds provide geometric information on the trees and allow their individual extraction. Multi-modal features are computed, both at pixel and object levels; the objects are individual trees extracted from the lidar data. A supervised classification is then performed at the object level in order to coarsely discriminate the existing tree species in each area of interest. The classification results are further processed to obtain homogeneous areas with smooth borders by employing an energy minimization framework, in which additional constraints are joined to form the energy function. The experimental results show that the proposed method provides very satisfactory results both in terms of stand labeling and delineation (overall accuracy ranges between 84% and 99%).
NASA Astrophysics Data System (ADS)
Ozendi, Mustafa; Topan, Hüseyin; Cam, Ali; Bayık, Çağlar
2016-10-01
Recently two optical remote sensing satellites, RASAT and GÖKTÜRK-2, were launched successfully by the Republic of Turkey. RASAT has 7.5 m panchromatic and 15 m visible bands, whereas GÖKTÜRK-2 has 2.5 m panchromatic and 5 m VNIR (Visible and Near Infrared) bands. These bands of different resolutions can be fused by pan-sharpening methods, an important application area of optical remote sensing imagery, so that the high geometric resolution of the panchromatic band and the high spectral resolution of the VNIR bands can be merged. Many pan-sharpening methods exist in the literature; however, there is no standard framework for quality investigation of pan-sharpened imagery. The aim of this study is to investigate the pan-sharpening performance of RASAT and GÖKTÜRK-2 images. For this purpose, pan-sharpened images were first generated using the most popular pan-sharpening methods, IHS, Brovey and PCA. This procedure was followed by quantitative evaluation of the pan-sharpened images using the Correlation Coefficient (CC), Root Mean Square Error (RMSE), Relative Average Spectral Error (RASE), Spectral Angle Mapper (SAM) and Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) metrics. For the generation of pan-sharpened images and the computation of metrics, the SharpQ tool, developed in the MATLAB computing language, was used. According to the metrics, the PCA-derived pan-sharpened image is the most similar to the multispectral image for RASAT, and the Brovey-derived pan-sharpened image is the most similar for GÖKTÜRK-2. Finally, the pan-sharpened images were evaluated qualitatively in terms of object availability and completeness for various land covers (such as urban, forest and flat areas) by a group of operators experienced in remote sensing imagery.
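As an illustration of two of the metrics listed above, the following Python sketch follows the commonly used definitions of ERGAS and SAM (the resolution ratio and array layout are assumptions):

```python
# Sketch: ERGAS and SAM quality metrics for a pan-sharpened image `fused`
# against a reference multispectral image `ref`, both shaped (bands, rows, cols).
import numpy as np

def ergas(fused, ref, ratio=0.5):
    # ratio = (pan pixel size) / (multispectral pixel size), e.g. 2.5 m / 5 m.
    diff = fused.astype(float) - ref.astype(float)
    rmse = np.sqrt(np.mean(diff ** 2, axis=(1, 2)))
    means = np.mean(ref.astype(float), axis=(1, 2))
    return 100.0 * ratio * np.sqrt(np.mean((rmse / means) ** 2))

def sam_degrees(fused, ref, eps=1e-12):
    f = fused.reshape(fused.shape[0], -1).astype(float)
    r = ref.reshape(ref.shape[0], -1).astype(float)
    cos = np.sum(f * r, axis=0) / (np.linalg.norm(f, axis=0) * np.linalg.norm(r, axis=0) + eps)
    return np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))
```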
Photogrammetric Processing Using ZY-3 Satellite Imagery
NASA Astrophysics Data System (ADS)
Kornus, W.; Magariños, A.; Pla, M.; Soler, E.; Perez, F.
2015-03-01
This paper evaluates the stereoscopic capacities of the Chinese sensor ZiYuan-3 (ZY-3) for the generation of photogrammetric products. The satellite was launched on January 9, 2012 and carries three high-resolution panchromatic cameras viewing in the forward (22º), nadir (0º) and backward (-22º) directions and an infrared multi-spectral scanner (IRMSS), which is slightly forward-looking (6º). The ground sampling distance (GSD) is 2.1 m for the nadir image, 3.5 m for the two oblique stereo images and 5.8 m for the multispectral image. The evaluated ZY-3 imagery consists of a full threefold-stereo set and a multi-spectral image covering an area of ca. 50 km x 50 km north-west of Barcelona, Spain. The complete photogrammetric processing chain was executed, including image orientation, the generation of a digital surface model (DSM), radiometric image correction, pansharpening, orthoimage generation and digital stereo plotting. All 4 images are oriented by estimating affine transformation parameters between observed and nominal RPC (rational polynomial coefficients) image positions of 17 ground control points (GCPs) and a subsequent calculation of refined RPCs. From 10 independent check points, RMS errors of 2.2 m, 2.0 m and 2.7 m in X, Y and H are obtained. Subsequently, a DSM with 5 m grid spacing is generated fully automatically. A comparison with lidar data results in an overall DSM accuracy of approximately 3 m. In moderate and flat terrain, higher accuracies on the order of 2.5 m and better are achieved. In a next step, orthoimages from the high-resolution nadir image and the multispectral image are generated using the refined RPC geometry and the DSM. After radiometric corrections, a fused high-resolution colour orthoimage with 2.1 m pixel size is created using an adaptive HSL method. The pansharpening is performed after the individual geocorrection because of the different viewing angles of the two images. In a detailed analysis of the colour orthoimage, artifacts are detected covering an area of 4691 ha, corresponding to less than 2% of the imaged area. Most of the artifacts are caused by clouds (4614 ha). A minor part (77 ha) is affected by colour patches, striping or blooming effects. For the final qualitative analysis of the usability of the ZY-3 imagery for stereo plotting purposes, stereo combinations of the nadir and an oblique image are discarded, mainly due to the different pixel size, which produces difficulties in stereoscopic vision and poor accuracy in positioning and measuring. With the two oblique images, a level of detail equivalent to 1:25.000 scale is achieved for the transport network, hydrography, vegetation and terrain-modelling elements such as break lines. For settlements, including buildings and other constructions, a lower level of detail is achieved, equivalent to 1:50.000 scale.
NASA Astrophysics Data System (ADS)
Cooper, Robert J.; Magee, Elliott; Everdell, Nick; Magazov, Salavat; Varela, Marta; Airantzis, Dimitrios; Gibson, Adam P.; Hebden, Jeremy C.
2014-05-01
We detail the design, construction and performance of the second generation UCL time-resolved optical tomography system, known as MONSTIR II. Intended primarily for the study of the newborn brain, the system employs 32 source fibres that sequentially transmit picosecond pulses of light at any four wavelengths between 650 and 900 nm. The 32 detector channels each contain an independent photo-multiplier tube and temporally correlated photon-counting electronics that allow the photon transit time between each source and each detector position to be measured with high temporal resolution. The system's response time, temporal stability, cross-talk, and spectral characteristics are reported. The efficacy of MONSTIR II is demonstrated by performing multi-spectral imaging of a simple phantom.
NASA Technical Reports Server (NTRS)
Kettig, R. L.
1975-01-01
A method of classification of digitized multispectral images is developed and experimentally evaluated on actual earth resources data collected by aircraft and satellite. The method is designed to exploit the characteristic dependence between adjacent states of nature that is neglected by the more conventional simple-symmetric decision rule; thus contextual information is incorporated into the classification scheme. The principal reason for doing this is to improve the accuracy of the classification. For general types of dependence this would generally require more computation per resolution element than the simple-symmetric classifier. But when the dependence occurs in the form of redundancy, the elements can be classified collectively, in groups, thereby reducing the number of classifications required.
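The core idea of classifying statistically homogeneous groups of pixels collectively rather than one at a time can be sketched as follows (a simple Gaussian maximum-likelihood block classifier under assumed class statistics, not the algorithm developed in the report):

```python
# Sketch: classify non-overlapping blocks of a multispectral image collectively
# by summing per-pixel Gaussian log-likelihoods within each block.
import numpy as np
from scipy.stats import multivariate_normal

def classify_blocks(image, means, covs, block=4):
    """image: (rows, cols, bands); means[k]: (bands,); covs[k]: (bands, bands)."""
    rows, cols, bands = image.shape
    pixels = image.reshape(-1, bands)
    # Per-pixel log-likelihood for every class.
    loglik = np.stack([multivariate_normal(m, c).logpdf(pixels)
                       for m, c in zip(means, covs)], axis=1)
    loglik = loglik.reshape(rows, cols, -1)
    labels = np.empty((rows // block, cols // block), dtype=int)
    for i in range(labels.shape[0]):
        for j in range(labels.shape[1]):
            window = loglik[i*block:(i+1)*block, j*block:(j+1)*block]
            labels[i, j] = window.sum(axis=(0, 1)).argmax()  # one decision per block
    return labels
```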
NASA Technical Reports Server (NTRS)
Farrar, Michael R.; Smith, Eric A.
1992-01-01
A method for enhancing the 19, 22, and 37 GHz measurements of the SSM/I (Special Sensor Microwave/Imager) to the spatial resolution and sampling density of the high resolution 85-GHz channel is presented. An objective technique for specifying the tuning parameter, which balances the tradeoff between resolution and noise, is developed in terms of maximizing cross-channel correlations. Various validation procedures are performed to demonstrate the effectiveness of the method, which hopefully will provide researchers with a valuable tool in multispectral applications of satellite radiometer data.
Nanohole-array-based device for 2D snapshot multispectral imaging
Najiminaini, Mohamadreza; Vasefi, Fartash; Kaminska, Bozena; Carson, Jeffrey J. L.
2013-01-01
We present a two-dimensional (2D) snapshot multispectral imager that utilizes the optical transmission characteristics of nanohole arrays (NHAs) in a gold film to resolve a mixture of input colors into multiple spectral bands. The multispectral device consists of blocks of NHAs, wherein each NHA has a unique periodicity that results in transmission resonances and minima in the visible and near-infrared regions. The multispectral device was illuminated over a wide spectral range, and the transmission was spectrally unmixed using a least-squares estimation algorithm. A NHA-based multispectral imaging system was built and tested in both reflection and transmission modes. The NHA-based multispectral imager was capable of extracting 2D multispectral images representative of four independent bands within the spectral range of 662 nm to 832 nm for a variety of targets. The multispectral device can potentially be integrated into a variety of imaging sensor systems. PMID:24005065
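A minimal sketch of the kind of least-squares unmixing described above (the block count, band count, and synthetic calibration matrix are assumptions):

```python
# Sketch: recover per-band intensities from NHA-block measurements by
# least-squares unmixing.  T[i, j] is the (calibrated) transmission of NHA
# block i in spectral band j; `measured` holds the signal from each block.
import numpy as np

def unmix(T, measured):
    # Solve T @ bands ≈ measured in the least-squares sense.
    bands, *_ = np.linalg.lstsq(T, measured, rcond=None)
    return bands

# Example with 6 NHA blocks resolving 4 spectral bands for one pixel.
rng = np.random.default_rng(0)
T = rng.uniform(0.1, 1.0, size=(6, 4))        # assumed calibration matrix
true_bands = np.array([0.8, 0.2, 0.5, 0.1])
measured = T @ true_bands + rng.normal(0, 0.01, size=6)
print(unmix(T, measured))
```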
NASA Technical Reports Server (NTRS)
1998-01-01
Positive Systems has worked in conjunction with Stennis Space Center to design the ADAR System 5500. This is a four-band airborne digital imaging system used to capture multispectral imagery similar to that available from satellite platforms such as Landsat, SPOT and the new generation of high resolution satellites. Positive Systems has provided remote sensing services for the development of digital aerial camera systems and software for commercial aerial imaging applications.
Evaluation of eelgrass beds mapping using a high-resolution airborne multispectral scanner
Su, H.; Karna, D.; Fraim, E.; Fitzgerald, M.; Dominguez, R.; Myers, J.S.; Coffland, B.; Handley, L.R.; Mace, T.
2006-01-01
Eelgrass (Zostera marina) can provide vital ecological functions by stabilizing sediments, influencing current dynamics, and contributing significant amounts of biomass to numerous food webs in coastal ecosystems. Mapping eelgrass beds is important for coastal water and nearshore estuarine monitoring, management, and planning. This study demonstrated the possible use of a high spatial (approximately 5 m) and temporal (maximum low tide) resolution airborne multispectral scanner for mapping eelgrass beds in Northern Puget Sound, Washington. A combination of supervised and unsupervised classification approaches was applied to the multispectral scanner imagery. A normalized difference vegetation index (NDVI), derived from the red and near-infrared bands, and ancillary spatial information were used to extract and mask eelgrass beds and other submerged aquatic vegetation (SAV) in the study area. We evaluated the resulting thematic map (geocoded, classified image) against a conventional aerial photograph interpretation using 260 point locations randomly stratified over five defined classes from the thematic map. We achieved an overall accuracy of 92 percent with a 0.92 Kappa coefficient in the study area. This study demonstrates that the airborne multispectral scanner can be useful for mapping eelgrass beds at a local or regional scale, especially in regions where optical remote sensing from space is constrained by climatic and tidal conditions. © 2006 American Society for Photogrammetry and Remote Sensing.
The instrument development status of hyper-spectral imager suite (HISUI)
NASA Astrophysics Data System (ADS)
Itoh, Yoshiyuki; Kawashima, Takahiro; Inada, Hitomi; Tanii, Jun; Iwasaki, Akira
2012-11-01
The hyperspectral/multispectral mission named HISUI (Hyper-spectral Imager SUIte) is the next Japanese Earth observation project. This project is the follow-up mission to the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and the Advanced Land Imager (ALDS). HISUI is composed of a hyperspectral radiometer with higher spectral resolution and a multispectral radiometer with higher spatial resolution. The development of a functional evaluation model was carried out to confirm the spectral and radiometric performance prior to the flight model manufacturing phase. This model contains the VNIR and SWIR spectrographs, the VNIR and SWIR detector assemblies with a mechanical cooler for the SWIR, the signal processing circuit and an on-board calibration source.
NASA Astrophysics Data System (ADS)
Onojeghuo, Alex Okiemute; Onojeghuo, Ajoke Ruth
2017-07-01
This study investigated the combined use of multispectral/hyperspectral imagery and LiDAR data for habitat mapping across parts of south Cumbria, North West England. The methodology adopted in this study integrated spectral information contained in pansharp QuickBird multispectral/AISA Eagle hyperspectral imagery and LiDAR-derived measures with object-based machine learning classifiers and ensemble analysis techniques. Using the LiDAR point cloud data, elevation models (such as the Digital Surface Model and Digital Terrain Model raster) and intensity features were extracted directly. The LiDAR-derived measures exploited in this study included Canopy Height Model, intensity and topographic information (i.e. mean, maximum and standard deviation). These three LiDAR measures were combined with spectral information contained in the pansharp QuickBird and Eagle MNF transformed imagery for image classification experiments. A fusion of pansharp QuickBird multispectral and Eagle MNF hyperspectral imagery with all LiDAR-derived measures generated the best classification accuracies, 89.8 and 92.6% respectively. These results were generated with the Support Vector Machine and Random Forest machine learning algorithms respectively. The ensemble analysis of all three learning machine classifiers for the pansharp QuickBird and Eagle MNF fused data outputs did not significantly increase the overall classification accuracy. Results of the study demonstrate the potential of combining either very high spatial resolution multispectral or hyperspectral imagery with LiDAR data for habitat mapping.
NASA Astrophysics Data System (ADS)
Piermattei, Livia; Bozzi, Carlo Alberto; Mancini, Adriano; Tassetti, Anna Nora; Karel, Wilfried; Pfeifer, Norbert
2017-04-01
Unmanned aerial vehicles (UAVs) in combination with consumer grade cameras have become standard tools for photogrammetric applications and surveying. The recent generation of multispectral, cost-efficient and lightweight cameras has fostered a breakthrough in the practical application of UAVs for precision agriculture. For this application, multispectral cameras typically use Green, Red, Red-Edge (RE) and Near Infrared (NIR) wavebands to capture both visible and invisible images of crops and vegetation. These bands are very effective for deriving characteristics like soil productivity, plant health and overall growth. However, the quality of results is affected by the sensor architecture, the spatial and spectral resolutions, the pattern of image collection, and the processing of the multispectral images. In particular, collecting data with multiple sensors requires an accurate spatial co-registration of the various UAV image datasets. Multispectral processed data in precision agriculture are mainly presented as orthorectified mosaics used to export information maps and vegetation indices. This work aims to investigate the acquisition parameters and processing approaches of this new type of image data in order to generate orthoimages using different sensors and UAV platforms. Within our experimental area we placed a grid of artificial targets, whose position was determined with differential global positioning system (dGPS) measurements. Targets were used as ground control points to georeference the images and as checkpoints to verify the accuracy of the georeferenced mosaics. The primary aim is to present a method for the spatial co-registration of visible, Red-Edge, and NIR image sets. To demonstrate the applicability and accuracy of our methodology, multi-sensor datasets were collected over the same area and approximately at the same time using the fixed-wing UAV senseFly "eBee". The images were acquired with the camera Canon S110 RGB, the multispectral cameras Canon S110 NIR and S110 RE and with the multi-camera system Parrot Sequoia, which is composed of single-band cameras (Green, Red, Red Edge, NIR and RGB). Imagery from each sensor was georeferenced and mosaicked with the commercial software Agisoft PhotoScan Pro and different approaches for image orientation were compared. To assess the overall spatial accuracy of each dataset the root mean square error was computed between check point coordinates measured with dGPS and coordinates retrieved from georeferenced image mosaics. Additionally, image datasets from different UAV platforms (i.e. DJI Phantom 4Pro, DJI Phantom 3 professional, and DJI Inspire 1 Pro) were acquired over the same area and the spatial accuracy of the orthoimages was evaluated.
Multispectral atmospheric mapping sensor of mesoscale water vapor features
NASA Technical Reports Server (NTRS)
Menzel, P.; Jedlovec, G.; Wilson, G.; Atkinson, R.; Smith, W.
1985-01-01
The Multispectral Atmospheric Mapping Sensor was checked out for specified spectral response and detector noise performance in its eight visible and three infrared (6.7, 11.2, 12.7 micron) spectral bands. A calibration algorithm was implemented for the infrared detectors. Engineering checkout flights on board the ER-2 produced imagery at 50 m resolution in which water vapor features in the 6.7 micron spectral band are most striking. These images were analyzed on the Man-computer Interactive Data Access System (McIDAS). Ground truth and ancillary data were accessed to verify the calibration.
Calibration of the Multi-Spectral Solar Telescope Array multilayer mirrors and XUV filters
NASA Technical Reports Server (NTRS)
Allen, Maxwell J.; Willis, Thomas D.; Kankelborg, Charles C.; O'Neal, Ray H.; Martinez-Galarce, Dennis S.; Deforest, Craig E.; Jackson, Lisa; Lindblom, Joakim; Walker, Arthur B. C., Jr.; Barbee, Troy W., Jr.
1993-01-01
The Multi-Spectral Solar Telescope Array (MSSTA), a rocket-borne solar observatory, was successfully flown in May, 1991, obtaining solar images in eight XUV and FUV bands with 12 compact multilayer telescopes. Extensive measurements have recently been carried out on the multilayer telescopes and thin film filters at the Stanford Synchrotron Radiation Laboratory. These measurements are the first high spectral resolution calibrations of the MSSTA instruments. Previous measurements and/or calculations of telescope throughputs have been confirmed with greater accuracy. Results are presented on Mo/Si multilayer bandpass changes with time and experimental potassium bromide and tellurium filters.
Forest Stand Segmentation Using Airborne LIDAR Data and Very High Resolution Multispectral Imagery
NASA Astrophysics Data System (ADS)
Dechesne, Clément; Mallet, Clément; Le Bris, Arnaud; Gouet, Valérie; Hervieu, Alexandre
2016-06-01
Forest stands are the basic units for forest inventory and mapping. Stands are large forested areas (e.g., ≥ 2 ha) of homogeneous tree species composition. The accurate delineation of forest stands is usually performed by visual analysis of human operators on very high resolution (VHR) optical images. This work is highly time consuming and should be automated for scalability purposes. In this paper, a method based on the fusion of airborne laser scanning data (or lidar) and very high resolution multispectral imagery for automatic forest stand delineation and forest land-cover database update is proposed. The multispectral images give access to the tree species whereas 3D lidar point clouds provide geometric information on the trees. Therefore, multi-modal features are computed, both at pixel and object levels. The objects are individual trees extracted from lidar data. A supervised classification is performed at the object level on the computed features in order to coarsely discriminate the existing tree species in the area of interest. The analysis at tree level is particularly relevant since it significantly improves the tree species classification. A probability map is generated through the tree species classification and inserted with the pixel-based features map in an energetical framework. The proposed energy is then minimized using a standard graph-cut method (namely QPBO with α-expansion) in order to produce a segmentation map with a controlled level of details. Comparison with an existing forest land cover database shows that our method provides satisfactory results both in terms of stand labelling and delineation (matching ranges between 94% and 99%).
CRISM's Global Mapping of Mars, Part 1
NASA Technical Reports Server (NTRS)
2007-01-01
After a year in Mars orbit, CRISM has taken enough images to allow the team to release the first parts of a global spectral map of Mars to the Planetary Data System (PDS), NASA's digital library of planetary data. CRISM's global mapping is called the 'multispectral survey.' The team uses the word 'survey' because a reason for gathering this data set is to search for new sites for targeted observations, high-resolution views of the surface at 18 meters per pixel in 544 colors. Another reason for the multispectral survey is to provide contextual information. Targeted observations have such a large data volume (about 200 megabytes apiece) that only about 1% of Mars can be imaged at CRISM's highest resolution. The multispectral survey is a lower data volume type of observation that fills in the gaps between targeted observations, allowing scientists to better understand their geologic context. The global map is built from tens of thousands of image strips each about 10 kilometers (6.2 miles) wide and thousands of kilometers long. During the multispectral survey, CRISM returns data from only 72 carefully selected wavelengths that cover absorptions indicative of the mineral groups that CRISM is looking for on Mars. Data volume is further decreased by binning image pixels inside the instrument to a scale of about 200 meters (660 feet) per pixel. The total reduction in data volume per square kilometer is a factor of 700, making the multispectral survey manageable to acquire and transmit to Earth. Once on the ground, the strips of data are mosaicked into maps. The multispectral survey is too large to show the whole planet in a single map, so the map is divided into 1,964 'tiles,' each about 300 kilometers (186 miles) across. There are three versions of each tile, processed to progressively greater levels to strip away the obscuring effects of the dusty atmosphere and to highlight mineral variations in surface materials. This is the first version of tile 750, one of 209 tiles just delivered to the PDS. It shows a part of the planet called Tyrrhena Terra in the ancient, heavily cratered highlands. The colored strips are CRISM multispectral survey data acquired over several months, in which each pixel has a calibrated 72-color spectrum of Mars. The three wavelengths shown are 2.53, 1.50, and 1.08 micrometers in the red, green, and blue image planes respectively. At these wavelengths, rocky areas appear brown, dusty areas appear tan, and regions with hazy atmosphere appear bluish. Note that there is a large difference in brightness between strips, because there is no correction for the lighting conditions at the time of each observation. The gray areas between the strips are from an earlier mosaic of the planet taken by the Thermal Emission Imaging System (THEMIS) instrument on Mars Odyssey, and are included only for context. Ultimately the multispectral survey will cover nearly all of this area. CRISM is one of six science instruments on NASA's Mars Reconnaissance Orbiter. Led by The Johns Hopkins University Applied Physics Laboratory, Laurel, Md., the CRISM team includes expertise from universities, government agencies and small businesses in the United States and abroad. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter and the Mars Science Laboratory for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, built the orbiter.
Bryson, Mitch; Johnson-Roberson, Matthew; Murphy, Richard J.; Bongiorno, Daniel
2013-01-01
Intertidal ecosystems have primarily been studied using field-based sampling; remote sensing offers the ability to collect data over large areas in a snapshot of time that could complement field-based sampling methods by extrapolating them into the wider spatial and temporal context. Conventional remote sensing tools (such as satellite and aircraft imaging) provide data at limited spatial and temporal resolutions and relatively high costs for small-scale environmental science and ecologically-focussed studies. In this paper, we describe a low-cost, kite-based imaging system and photogrammetric/mapping procedure that was developed for constructing high-resolution, three-dimensional, multi-spectral terrain models of intertidal rocky shores. The processing procedure uses automatic image feature detection and matching, structure-from-motion and photo-textured terrain surface reconstruction algorithms that require minimal human input and only a small number of ground control points and allow the use of cheap, consumer-grade digital cameras. The resulting maps combine imagery at visible and near-infrared wavelengths and topographic information at sub-centimeter resolutions over an intertidal shoreline 200 m long, thus enabling spatial properties of the intertidal environment to be determined across a hierarchy of spatial scales. Results of the system are presented for an intertidal rocky shore at Jervis Bay, New South Wales, Australia. Potential uses of this technique include mapping of plant (micro- and macro-algae) and animal (e.g. gastropods) assemblages at multiple spatial and temporal scales.
Multispectral and geomorphic studies of processed Voyager 2 images of Europa
NASA Technical Reports Server (NTRS)
Meier, T. A.
1984-01-01
High resolution images of Europa taken by the Voyager 2 spacecraft were used to study a portion of Europa's dark lineations and the major white line feature Agenor Linea. Initial image processing of images 1195J2-001 (violet filter), 1198J2-001 (blue filter), 1201J2-001 (orange filter), and 1204J2-001 (ultraviolet filter) was performed at the U.S.G.S. Branch of Astrogeology in Flagstaff, Arizona. Processing was completed through the stages of image registration and color ratio image construction. Pixel printouts were used in a new technique of linear feature profiling to compensate for image misregistration through the mapping of features on the printouts. In all, 193 dark lineation segments were mapped and profiled. The more accurate multispectral data derived by this method was plotted using a new application of the ternary diagram, with orange, blue, and violet relative spectral reflectances serving as end members. Statistical techniques were then applied to the ternary diagram plots. The image products generated at LPI were used mainly to cross-check and verify the results of the ternary diagram analysis.
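For readers unfamiliar with the ternary-diagram representation used above, the following minimal Python sketch shows how three relative reflectances (orange, blue, violet) can be normalized to sum to one and plotted as barycentric coordinates; the reflectance values below are placeholders, not the measured Europa profiles.
    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical relative reflectances for a few profiled lineation segments
    orange = np.array([0.42, 0.38, 0.45])
    blue   = np.array([0.33, 0.35, 0.30])
    violet = np.array([0.25, 0.27, 0.25])

    # Normalize so each triplet sums to 1 (ternary end members)
    total = orange + blue + violet
    o, b, v = orange / total, blue / total, violet / total

    # Barycentric -> Cartesian: orange at (0,0), blue at (1,0), violet at (0.5, sqrt(3)/2)
    x = b + 0.5 * v
    y = (np.sqrt(3) / 2) * v

    plt.plot([0, 1, 0.5, 0], [0, 0, np.sqrt(3) / 2, 0], 'k-')   # triangle outline
    plt.scatter(x, y)
    plt.axis('equal')
    plt.show()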
NASA Astrophysics Data System (ADS)
Teffahi, Hanane; Yao, Hongxun; Belabid, Nasreddine; Chaib, Souleyman
2018-02-01
Satellite images with very high spatial resolution have recently been widely used for image classification, which has become a challenging task in the remote sensing field. Due to a number of limitations, such as the redundancy of features and the high dimensionality of the data, different classification methods have been proposed for remote sensing image classification, particularly methods using feature extraction techniques. This paper proposes a simple, efficient method exploiting the capability of extended multi-attribute profiles (EMAP) with a sparse autoencoder (SAE) for remote sensing image classification. The proposed method is used to classify various remote sensing datasets, including hyperspectral and multispectral images, by extracting spatial and spectral features based on the combination of EMAP and SAE and linking them to a kernel support vector machine (SVM) for classification. Experiments on the new hyperspectral image "Houston data" and the multispectral image "Washington DC data" show that this new scheme can achieve better feature-learning performance than the primitive features, traditional classifiers and an ordinary autoencoder, and has great potential to achieve higher classification accuracy in a short running time.
Stand-off detection of explosive particles by imaging Raman spectroscopy
NASA Astrophysics Data System (ADS)
Nordberg, Markus; Åkeson, Madeleine; Östmark, Henric; Carlsson, Torgny E.
2011-06-01
A multispectral imaging technique has been developed to detect and identify explosive particles, e.g. from a fingerprint, at stand-off distances using Raman spectroscopy. When handling IEDs as well as other explosive devices, residues can easily be transferred via fingerprints onto other surfaces, e.g. car handles, gear sticks and suitcases. By imaging the surface using the multispectral imaging Raman technique, the explosive particles can be identified and displayed using color-coding. The technique has been demonstrated by detecting fingerprints containing significant amounts of 2,4-dinitrotoluene (DNT), 2,4,6-trinitrotoluene (TNT) and ammonium nitrate at a distance of 12 m in less than 90 seconds (22 images × 4 seconds)1. For each measurement, a sequence of images, one image for each wave number, is recorded. The spectral data from each pixel is compared with reference spectra of the substances to be detected. The pixels are marked with different colors corresponding to the detected substances in the fingerprint. The system has now been further developed to become less complex and thereby less sensitive to the environment, such as temperature fluctuations. The optical resolution has been improved to less than 70 μm measured at 546 nm wavelength. The total detection time ranges from less than one minute to around five minutes depending on the size of the particles and how confident the identification should be. The results indicate a great potential for multi-spectral imaging Raman spectroscopy as a stand-off technique for detection of single explosive particles.
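The per-pixel spectral matching described above can be illustrated with a hedged sketch: each pixel spectrum is scored against reference spectra by normalized correlation and color-coded if the best score exceeds a threshold. The spectrum shapes, threshold, and scoring rule are illustrative assumptions and may differ from the published system.
    import numpy as np

    def classify_pixels(cube, references, threshold=0.8):
        """cube: (ny, nx, nbands) Raman intensity per wavenumber step.
        references: dict name -> (nbands,) reference spectrum.
        Returns an integer label map (-1 = no match) and the name list."""
        ny, nx, nb = cube.shape
        pixels = cube.reshape(-1, nb)

        def norm(a):
            # zero-mean, unit-norm rows so the dot product is a correlation score
            a = a - a.mean(axis=-1, keepdims=True)
            return a / (np.linalg.norm(a, axis=-1, keepdims=True) + 1e-12)

        names = list(references)
        refs = norm(np.stack([references[n] for n in names]))
        scores = norm(pixels) @ refs.T                 # (npix, nref)
        best = scores.argmax(axis=1)
        labels = np.where(scores.max(axis=1) >= threshold, best, -1)
        return labels.reshape(ny, nx), names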
Integration of aerial remote sensing imaging data in a 3D-GIS environment
NASA Astrophysics Data System (ADS)
Moeller, Matthias S.
2003-03-01
For some years sensor systems have been available providing digital images of a new quality. Aerial stereo scanners in particular acquire digital multispectral images with an extremely high ground resolution of about 0.10 - 0.15 m and provide, in addition, a Digital Surface Model (DSM). Both imaging products can be used for detailed monitoring at scales up to 1:500. The processed georeferenced multispectral orthoimages can be readily integrated into GIS, making them useful for a number of applications. The DSM, derived from the forward- and backward-facing sensors of an aerial imaging system, provides a ground resolution of 0.5 m and can be used for 3D visualization purposes. In some cases it is essential to store the ground elevation as a Digital Terrain Model (DTM) and the heights of 3-dimensional objects in a separate database. Existing automated algorithms do not work precisely for the extraction of a DTM from an aerial scanner DSM. This paper presents a new approach which combines the visible image data and the DSM data for the generation of a DTM with reliable geometric accuracy. Already existing cadastral data can be used as a knowledge base for the extraction of building heights in cities. These elevation data are the essential source for a GIS-based urban information system with a 3D visualization component.
Fernández-Guisuraga, José Manuel; Sanz-Ablanedo, Enoc; Suárez-Seoane, Susana; Calvo, Leonor
2018-02-14
This study evaluated the opportunities and challenges of using drones to obtain multispectral orthomosaics at ultra-high resolution that could be useful for monitoring large and heterogeneous burned areas. We conducted a survey using an octocopter equipped with a Parrot SEQUOIA multispectral camera in a 3000 ha framework located within the perimeter of a megafire in Spain. We assessed the quality of both the camera raw imagery and the multispectral orthomosaic obtained, as well as the required processing capability. Additionally, we compared the spatial information provided by the drone orthomosaic at ultra-high spatial resolution with another image provided by the WorldView-2 satellite at high spatial resolution. The drone raw imagery presented some anomalies, such as horizontal banding noise and non-homogeneous radiometry. Camera locations showed a lack of synchrony of the single frequency GPS receiver. The georeferencing process based on ground control points achieved an error lower than 30 cm in X-Y and lower than 55 cm in Z. The drone orthomosaic provided more information in terms of spatial variability in heterogeneous burned areas in comparison with the WorldView-2 satellite imagery. The drone orthomosaic could constitute a viable alternative for the evaluation of post-fire vegetation regeneration in large and heterogeneous burned areas.
River velocities from sequential multispectral remote sensing images
NASA Astrophysics Data System (ADS)
Chen, Wei; Mied, Richard P.
2013-06-01
We address the problem of extracting surface velocities from a pair of multispectral remote sensing images over rivers using a new nonlinear multiple-tracer form of the global optimal solution (GOS). The derived velocity field is a valid solution across the image domain to the nonlinear system of equations obtained by minimizing a cost function inferred from the conservation constraint equations for multiple tracers. This is done by deriving an iteration equation for the velocity, based on the multiple-tracer displaced frame difference equations, and a local approximation to the velocity field. The number of velocity equations is greater than the number of velocity components, and thus over-constrains the solution. The iterative technique uses Gauss-Newton and Levenberg-Marquardt methods and our own algorithm of the progressive relaxation of the over-constraint. We demonstrate the nonlinear multiple-tracer GOS technique with sequential multispectral Landsat and ASTER images over a portion of the Potomac River in MD/VA, and derive a dense field of accurate velocity vectors. We compare the GOS river velocities with those from over 12 years of data at four NOAA reference stations, and find good agreement. We discuss how to find the appropriate spatial and temporal resolutions to allow optimization of the technique for specific rivers.
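As a rough illustration of the displaced-frame-difference idea (not the full GOS with progressive relaxation of the over-constraint), the sketch below estimates a single velocity vector for an image window by least-squares minimization over several tracer bands, using SciPy's Levenberg-Marquardt solver; the time step and band lists are placeholders.
    import numpy as np
    from scipy.ndimage import map_coordinates
    from scipy.optimize import least_squares

    def dfd_residuals(uv, bands_t0, bands_t1, dt):
        """Displaced-frame-difference residuals over all tracer bands.
        bands_t0/bands_t1: lists of 2-D arrays (same tracer at times t0 and t1)."""
        u, v = uv
        ny, nx = bands_t0[0].shape
        yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
        res = []
        for b0, b1 in zip(bands_t0, bands_t1):
            # sample the later image at positions displaced by the candidate velocity
            warped = map_coordinates(b1, [yy + v * dt, xx + u * dt], order=1, mode='nearest')
            res.append((warped - b0).ravel())
        return np.concatenate(res)

    def estimate_velocity(bands_t0, bands_t1, dt):
        sol = least_squares(dfd_residuals, x0=[0.0, 0.0],
                            args=(bands_t0, bands_t1, dt), method='lm')
        return sol.x   # (u, v) in pixels per unit time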
Intelligent image processing for vegetation classification using multispectral LANDSAT data
NASA Astrophysics Data System (ADS)
Santos, Stewart R.; Flores, Jorge L.; Garcia-Torales, G.
2015-09-01
We propose an intelligent computational technique for the analysis of vegetation images acquired with a multispectral scanner (MSS) sensor. This work focuses on intelligent and adaptive artificial neural network (ANN) methodologies that allow segmentation and classification of spectral remote sensing (RS) signatures, in order to obtain a high resolution map in which we can delimit the wooded areas and quantify the amount of combustible materials present in these areas. This could provide important information to prevent fires and deforestation of wooded areas. The spectral RS input data, acquired by the MSS sensor, are considered a randomly propagated remotely sensed scene with unknown statistics for each Thematic Mapper (TM) band. By performing high-resolution reconstruction and combining these spectral values with neighboring-pixel information from each TM band, we can include contextual information in an ANN. The biggest challenge in conventional classifiers is how to reduce the number of components in the feature vector, while preserving the major information contained in the data, especially when the dimensionality of the feature space is high. Preliminary results show that the Adaptive Modified Neural Network method is a promising and effective spectral method for segmentation and classification in RS images acquired with the MSS sensor.
NASA Astrophysics Data System (ADS)
Yamaguchi, Masahiro; Haneishi, Hideaki; Fukuda, Hiroyuki; Kishimoto, Junko; Kanazawa, Hiroshi; Tsuchida, Masaru; Iwama, Ryo; Ohyama, Nagaaki
2006-01-01
In addition to the great advancement of high-resolution and large-screen imaging technology, the issue of color is now receiving considerable attention as another aspect beyond image resolution. It is difficult to reproduce the original color of a subject in conventional imaging systems, and this obstructs the application of visual communication systems in telemedicine, electronic commerce, and digital museums. To break through the limitation of conventional RGB 3-primary systems, the "Natural Vision" project aims at an innovative video and still-image communication technology with high-fidelity color reproduction capability, based on spectral information. This paper summarizes the results of the NV project, including the development of multispectral and multiprimary imaging technologies and experimental investigations of the applications to medicine, digital archives, electronic commerce, and computer graphics.
Measurement Sets and Sites Commonly Used for High Spatial Resolution Image Product Characterization
NASA Technical Reports Server (NTRS)
Pagnutti, Mary
2006-01-01
Scientists within NASA's Applied Sciences Directorate have developed a well-characterized remote sensing Verification & Validation (V&V) site at the John C. Stennis Space Center (SSC). This site has enabled the in-flight characterization of satellite high spatial resolution remote sensing system products from Space Imaging IKONOS, Digital Globe QuickBird, and ORBIMAGE OrbView, as well as advanced multispectral airborne digital camera products. SSC utilizes engineered geodetic targets, edge targets, radiometric tarps, atmospheric monitoring equipment and its Instrument Validation Laboratory to characterize high spatial resolution remote sensing data products. This presentation describes the SSC characterization capabilities and techniques in the visible through near infrared spectrum and examples of calibration results.
NASA Astrophysics Data System (ADS)
Kelly, M. A.; Boldt, J.; Wilson, J. P.; Yee, J. H.; Stoffler, R.
2017-12-01
The multi-spectral STereo Atmospheric Remote Sensing (STARS) concept has the objective of providing high-spatial and -temporal-resolution observations of 3D cloud structures related to hurricane development and other severe weather events. The rapid evolution of severe weather demonstrates a critical need for mesoscale observations of severe weather dynamics, but such observations are rare, particularly over the ocean where extratropical and tropical cyclones can undergo explosive development. Coincident space-based measurements of wind velocity and cloud properties at the mesoscale remain a great challenge, but are critically needed to improve the understanding and prediction of severe weather and cyclogenesis. STARS employs a mature stereoscopic imaging technique on two satellites (e.g. two CubeSats, two hosted payloads) to simultaneously retrieve cloud motion vectors (CMVs), cloud-top temperatures (CTTs), and cloud geometric heights (CGHs) from multi-angle, multi-spectral observations of cloud features. STARS is a pushbroom system based on separate wide-field-of-view co-boresighted multi-spectral cameras in the visible, midwave infrared (MWIR), and longwave infrared (LWIR) with high spatial resolution (better than 1 km). The visible system is based on a pan-chromatic, low-light imager to resolve cloud structures under nighttime illumination down to ¼ moon. The MWIR instrument, which is being developed as a NASA ESTO Instrument Incubator Program (IIP) project, is based on recent advances in MWIR detector technology that requires only modest cooling. The STARS payload provides flexible options for spaceflight due to its low size, weight, and power (SWaP) and very modest cooling requirements. STARS also meets AF operational requirements for cloud characterization and theater weather imagery. In this paper, an overview of the STARS concept, including the high-level sensor design, the concept of operations, and measurement capability will be presented.
FFT-enhanced IHS transform method for fusing high-resolution satellite images
Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M.
2007-01-01
Existing image fusion techniques such as the intensity-hue-saturation (IHS) transform and principal components analysis (PCA) methods may not be optimal for fusing the new generation commercial high-resolution satellite images such as Ikonos and QuickBird. One problem is color distortion in the fused image, which causes visual changes as well as spectral differences between the original and fused images. In this paper, a fast Fourier transform (FFT)-enhanced IHS method is developed for fusing new generation high-resolution satellite images. This method combines a standard IHS transform with FFT filtering of both the panchromatic image and the intensity component of the original multispectral image. Ikonos and QuickBird data are used to assess the FFT-enhanced IHS transform method. Experimental results indicate that the FFT-enhanced IHS transform method may improve upon the standard IHS transform and the PCA methods in preserving spectral and spatial information. © 2006 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS).
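A minimal sketch of the general FFT-plus-IHS idea follows, assuming an additive (fast) IHS substitution with an ideal circular low-pass filter; the cutoff value and filter shape are assumptions, not the parameters of the published method.
    import numpy as np

    def fft_lowpass(img, cutoff):
        """Ideal circular low-pass in the frequency domain; cutoff in cycles/pixel."""
        f = np.fft.fftshift(np.fft.fft2(img))
        fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))
        fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))
        radius = np.hypot(*np.meshgrid(fy, fx, indexing='ij'))
        return np.real(np.fft.ifft2(np.fft.ifftshift(f * (radius <= cutoff))))

    def fft_ihs_fuse(ms, pan, cutoff=0.05):
        """ms: (3, ny, nx) multispectral bands upsampled to the pan grid; pan: (ny, nx)."""
        intensity = ms.mean(axis=0)
        # low-frequency content from the MS intensity, high-frequency detail from pan
        new_intensity = fft_lowpass(intensity, cutoff) + (pan - fft_lowpass(pan, cutoff))
        # additive IHS-style injection: shift every band by the intensity change
        return ms + (new_intensity - intensity)[None, :, :]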
NASA Astrophysics Data System (ADS)
Zhu, Y.; Jin, S.; Tian, Y.; Wang, M.
2017-09-01
To meet the requirement of high-accuracy and high-speed processing of wide-swath high-resolution optical satellite imagery in emergency situations, in both ground and on-board processing systems, this paper proposes a ROI-oriented sensor correction algorithm based on a virtual steady reimaging model for wide-swath high-resolution optical satellite imagery. Firstly, the imaging time and spatial window of the ROI are determined by a dynamic search method. Then, the dynamic ROI sensor correction model based on the virtual steady reimaging model is constructed. Finally, the corrected image corresponding to the ROI is generated based on the coordinate mapping relationship established by the dynamic sensor correction model for the corrected image and the rigorous imaging model for the original image. Two experimental results show that image registration between panchromatic and multispectral images can be achieved well and that image distortion caused by satellite jitter can also be corrected efficiently.
Unmixing chromophores in human skin with a 3D multispectral optoacoustic mesoscopy system
NASA Astrophysics Data System (ADS)
Schwarz, Mathias; Aguirre, Juan; Soliman, Dominik; Buehler, Andreas; Ntziachristos, Vasilis
2016-03-01
The absorption of visible light by human skin is governed by a number of natural chromophores: Eumelanin, pheomelanin, oxyhemoglobin, and deoxyhemoglobin are the major absorbers in the visible range in cutaneous tissue. Label-free quantification of these tissue chromophores is an important step of optoacoustic (photoacoustic) imaging towards clinical application, since it provides relevant information on disease. In tumor cells, for instance, there are metabolic changes (Warburg effect) compared to healthy cells, leading to changes in oxygenation in the environment of tumors. In malignant melanoma, changes in the absorption spectrum have been observed compared to the spectrum of nonmalignant nevi. So far, optoacoustic imaging has been applied to human skin mostly in single-wavelength mode, providing anatomical information but no functional information. In this work, we excited the tissue by a tunable laser source in the spectral range from 413-680 nm with a repetition rate of 50 Hz. The laser was operated in wavelength-sweep mode emitting consecutive pulses at various wavelengths that allowed for automatic co-registration of the multispectral datasets. The multispectral raster-scan optoacoustic mesoscopy (MSOM) system provides a lateral resolution of <60 μm independent of wavelength. Based on the known absorption spectra of melanin, oxyhemoglobin, and deoxyhemoglobin, three-dimensional absorption maps of all three absorbers were calculated from the multispectral dataset.
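A hedged sketch of the linear unmixing step: assuming the per-voxel multispectral optoacoustic amplitude is approximately a non-negative linear mixture of the known chromophore absorption spectra, concentrations can be estimated with non-negative least squares. Fluence correction and calibration, which the real MSOM pipeline requires, are omitted here.
    import numpy as np
    from scipy.optimize import nnls

    def unmix_voxels(signals, absorption):
        """signals: (nvox, nwavelengths) optoacoustic amplitudes per voxel.
        absorption: (nwavelengths, nchrom) absorption spectra of melanin, HbO2, Hb.
        Returns (nvox, nchrom) non-negative concentration estimates."""
        conc = np.empty((signals.shape[0], absorption.shape[1]))
        for i, s in enumerate(signals):
            conc[i], _ = nnls(absorption, s)   # solve min ||A c - s|| with c >= 0
        return conc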
Evaluation of airborne image data for mapping riparian vegetation within the Grand Canyon
Davis, Philip A.; Staid, Matthew I.; Plescia, Jeffrey B.; Johnson, Jeffrey R.
2002-01-01
This study examined various types of remote-sensing data that have been acquired during a 12-month period over a portion of the Colorado River corridor to determine the type of data and conditions for data acquisition that provide the optimum classification results for mapping riparian vegetation. Issues related to vegetation mapping included time of year, number and positions of wavelength bands, and spatial resolution for data acquisition to produce accurate vegetation maps versus cost of data. Image data considered in the study consisted of scanned color-infrared (CIR) film, digital CIR, and digital multispectral data, whose resolutions ranged from 11 cm (photographic film) to 100 cm (multispectral), acquired during the Spring, Summer, and Fall seasons in 2000 for five long-term monitoring sites containing riparian vegetation. Results show that digitally acquired data produce higher and more consistent classification accuracies for mapping vegetation units than do film products. The highest accuracies were obtained from nine-band multispectral data; however, a four-band subset of these data, which did not include short-wave infrared bands, produced comparable mapping results. The four-band subset consisted of the wavelength bands 0.52-0.59 µm, 0.59-0.62 µm, 0.67-0.72 µm, and 0.73-0.85 µm. Use of only three of these bands that simulate digital CIR sensors produced accuracies for several vegetation units that were 10% lower than those obtained using the full multispectral data set. Classification tests using band ratios produced lower accuracies than those using band reflectance for scanned film data; a result attributed to the relatively poor radiometric fidelity maintained by the film scanning process, whereas calibrated multispectral data produced similar classification accuracies using band reflectance and band ratios. This suggests that the intrinsic band reflectance of the vegetation is more important than inter-band reflectance differences in attaining high mapping accuracies. These results also indicate that radiometrically calibrated sensors that record a wide range of radiance produce superior results and that such sensors should be used for monitoring purposes. When texture (spatial variance) at near-infrared wavelength is combined with spectral data in classification, accuracy increased most markedly (20-30%) for the highest resolution (11-cm) CIR film data, but decreased in its effect on accuracy in lower-resolution multi-spectral image data; a result observed in previous studies (Franklin and McDermid 1993, Franklin et al. 2000, 2001). While many classification unit accuracies obtained from the 11-cm film CIR band with texture data were in fact higher than those produced using the 100-cm, nine-band multispectral data with texture, the 11-cm film CIR data produced much lower accuracies than the 100-cm multispectral data for the more sparsely populated vegetation units due to saturation of picture elements during the film scanning process in vegetation units with a high proportion of alluvium. Overall classification accuracies obtained from spectral band and texture data range from 36% to 78% for all databases considered, from 57% to 71% for the 11-cm film CIR data, and from 54% to 78% for the 100-cm multispectral data.
Classification results obtained from 20-cm film CIR band and texture data, which were produced by applying a Gaussian filter to the 11-cm film CIR data, showed increases in accuracy due to texture that were similar to those observed using the original 11-cm film CIR data. This suggests that data can be collected at the lower resolution and still retain the added power of vegetation texture. Classification accuracies for the riparian vegetation units examined in this study do not appear to be influenced by season of data acquisition, although data acquired under direct sunlight produced higher overall accuracies than data acquired under overcast conditions. The latter observation, in addition to the importance of band reflectance for classification, implies that data should be acquired near summer solstice when sun elevation and reflectance are highest and when shadows cast by steep canyon walls are minimized.
A New Object-Based Framework to Detect Shadows in High-Resolution Satellite Imagery Over Urban Areas
NASA Astrophysics Data System (ADS)
Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.
2015-12-01
In this paper a new object-based framework to detect shadow areas in high resolution satellite images is proposed. To produce a shadow map at the pixel level, state-of-the-art supervised machine learning algorithms are employed. Automatic ground truth generation based on Otsu thresholding of shadow and non-shadow indices is used to train the classifiers. This is followed by segmenting the image scene to create image objects. To detect shadow objects, a majority vote on the pixel-based shadow detection results is applied. A GeoEye-1 multi-spectral image over an urban area in the city of Qom, Iran, is used in the experiments. Results show the superiority of the proposed method over traditional pixel-based approaches, both visually and quantitatively.
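A condensed sketch of the pixel-to-object workflow described above (automatic labels from Otsu thresholding, a pixel-level classifier, then per-object majority voting); the shadow index and classifier choice are assumptions, not necessarily those of the paper.
    import numpy as np
    from skimage.filters import threshold_otsu
    from sklearn.ensemble import RandomForestClassifier

    def shadow_map(bands, segments):
        """bands: (ny, nx, nbands) multispectral image; segments: (ny, nx) object labels."""
        # crude shadow index: low mean reflectance marks shadow candidates (assumption)
        index = bands.mean(axis=2)
        pseudo_truth = (index < threshold_otsu(index)).astype(int)   # 1 = shadow candidate

        X = bands.reshape(-1, bands.shape[2])
        clf = RandomForestClassifier(n_estimators=100).fit(X, pseudo_truth.ravel())
        pixel_pred = clf.predict(X).reshape(index.shape)

        # object level: majority vote of pixel predictions inside each segment
        out = np.zeros_like(pixel_pred)
        for label in np.unique(segments):
            votes = pixel_pred[segments == label]
            out[segments == label] = int(votes.mean() >= 0.5)
        return out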
Gimbaled multispectral imaging system and method
Brown, Kevin H.; Crollett, Seferino; Henson, Tammy D.; Napier, Matthew; Stromberg, Peter G.
2016-01-26
A gimbaled multispectral imaging system and method is described herein. In a general embodiment, the gimbaled multispectral imaging system has a cross support that defines a first gimbal axis and a second gimbal axis, wherein the cross support is rotatable about the first gimbal axis. The gimbaled multispectral imaging system comprises a telescope that is fixed to an upper end of the cross support, such that rotation of the cross support about the first gimbal axis causes the tilt of the telescope to alter. The gimbaled multispectral imaging system includes optics that facilitate on-gimbal detection of visible light and off-gimbal detection of infrared light.
NASA Astrophysics Data System (ADS)
Birk, Udo; Szczurek, Aleksander; Cremer, Christoph
2017-12-01
Current approaches to overcome the conventional limit of the resolution potential of light microscopy (about 200 nm for visible light) often suffer from non-linear effects, which render the quantification of the image intensities in the reconstructions difficult, and also affect the quantification of the biological structure under investigation. As an attempt to face these difficulties, we discuss a particular method of localization microscopy which is based on photostable fluorescent dyes. The proposed method can potentially be implemented as a fast alternative for quantitative localization microscopy, circumventing the need for the acquisition of thousands of image frames and complex, highly dye-specific imaging buffers. Although the need for calibration remains in order to extract quantitative data (such as the number of emitters), multispectral approaches are largely facilitated due to the much less stringent requirements on imaging buffers. Furthermore, multispectral acquisitions can be readily obtained using commercial instrumentation such as e.g. the conventional confocal laser scanning microscope.
NASA Astrophysics Data System (ADS)
Shi, Cheng; Liu, Fang; Li, Ling-Ling; Hao, Hong-Xia
2014-01-01
The goal of pan-sharpening is to obtain an image with higher spatial resolution and better spectral information. However, the resolution of the pan-sharpened image is seriously affected by thin clouds. For a single image, filtering algorithms are widely used to remove clouds. These kinds of methods can remove clouds effectively, but the loss of detail in the cloud-removed image is also serious. To solve this problem, a pan-sharpening algorithm that removes thin cloud via mask dodging and the nonsubsampled shift-invariant shearlet transform (NSST) is proposed. For the low-resolution multispectral (LR MS) and high-resolution panchromatic images with thin clouds, a mask dodging method is used to remove the clouds. For the cloud-removed LR MS image, an adaptive principal component analysis transform is proposed to balance the spectral information and spatial resolution in the pan-sharpened image. Since the cloud removal process causes a loss of detail, a weight matrix is designed to enhance the details of the cloud regions in the pan-sharpening process, while non-cloud regions remain unchanged. The details of the image are obtained by the NSST. Experimental results, both visual and in terms of evaluation metrics, demonstrate that the proposed method can keep better spectral information and spatial resolution, especially for images with thin clouds.
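The weighted detail-injection idea can be illustrated with a simplified stand-in that uses a Gaussian high-pass detail layer instead of the NSST and a constant boost factor inside the former cloud regions; this is only a sketch of the weighting concept, not the published algorithm.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def weighted_detail_injection(ms_up, pan, cloud_mask, boost=1.5):
        """ms_up: (nb, ny, nx) MS bands upsampled to the pan grid; pan: (ny, nx);
        cloud_mask: (ny, nx) boolean, True where thin cloud was removed."""
        detail = pan - gaussian_filter(pan, sigma=2)     # high-frequency detail proxy
        weight = np.where(cloud_mask, boost, 1.0)        # stronger injection in former cloud regions
        return ms_up + (weight * detail)[None, :, :]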
Radiometric characterization of hyperspectral imagers using multispectral sensors
NASA Astrophysics Data System (ADS)
McCorkel, Joel; Thome, Kurt; Leisso, Nathan; Anderson, Nikolaus; Czapla-Myers, Jeff
2009-08-01
The Remote Sensing Group (RSG) at the University of Arizona has a long history of using ground-based test sites for the calibration of airborne and satellite based sensors. Often, ground-truth measurements at these test sites are not always successful due to weather and funding availability. Therefore, RSG has also employed automated ground instrument approaches and cross-calibration methods to verify the radiometric calibration of a sensor. The goal in the cross-calibration method is to transfer the calibration of a well-known sensor to that of a different sensor. This work studies the feasibility of determining the radiometric calibration of a hyperspectral imager using multispectral imagery. The work relies on the Moderate Resolution Imaging Spectroradiometer (MODIS) as a reference for the hyperspectral sensor Hyperion. Test sites used for comparisons are Railroad Valley in Nevada and a portion of the Libyan Desert in North Africa. Hyperion bands are compared to MODIS by band averaging Hyperion's high spectral resolution data with the relative spectral response of MODIS. The results compare cross-calibration scenarios that differ in image acquisition coincidence, test site used for the calibration, and reference sensor. Cross-calibration results are presented that show agreement between the use of coincident and non-coincident image pairs within 2% in most bands as well as similar agreement between results that employ the different MODIS sensors as a reference.
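The band-averaging step at the core of this comparison can be sketched as follows, assuming a Hyperion-like fine-resolution radiance spectrum and a MODIS-like relative spectral response sampled on their own wavelength grids; the grids and units are placeholders.
    import numpy as np

    def band_average(wl_hyp, radiance_hyp, wl_rsr, rsr):
        """Simulate a broad-band radiance by weighting a fine-resolution spectrum
        with a relative spectral response (RSR), then normalizing by the RSR area."""
        rsr_on_hyp = np.interp(wl_hyp, wl_rsr, rsr, left=0.0, right=0.0)
        return np.trapz(radiance_hyp * rsr_on_hyp, wl_hyp) / np.trapz(rsr_on_hyp, wl_hyp)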
Radiometric Characterization of Hyperspectral Imagers using Multispectral Sensors
NASA Technical Reports Server (NTRS)
McCorkel, Joel; Thome, Kurt; Leisso, Nathan; Anderson, Nikolaus; Czapla-Myers, Jeff
2009-01-01
The Remote Sensing Group (RSG) at the University of Arizona has a long history of using ground-based test sites for the calibration of airborne and satellite based sensors. Often, ground-truth measurements at these test sites are not always successful due to weather and funding availability. Therefore, RSG has also employed automated ground instrument approaches and cross-calibration methods to verify the radiometric calibration of a sensor. The goal in the cross-calibration method is to transfer the calibration of a well-known sensor to that of a different sensor. This work studies the feasibility of determining the radiometric calibration of a hyperspectral imager using multispectral imagery. The work relies on the Moderate Resolution Imaging Spectroradiometer (MODIS) as a reference for the hyperspectral sensor Hyperion. Test sites used for comparisons are Railroad Valley in Nevada and a portion of the Libyan Desert in North Africa. Hyperion bands are compared to MODIS by band averaging Hyperion's high spectral resolution data with the relative spectral response of MODIS. The results compare cross-calibration scenarios that differ in image acquisition coincidence, test site used for the calibration, and reference sensor. Cross-calibration results are presented that show agreement between the use of coincident and non-coincident image pairs within 2% in most bands as well as similar agreement between results that employ the different MODIS sensors as a reference.
Lossless compression algorithm for multispectral imagers
NASA Astrophysics Data System (ADS)
Gladkova, Irina; Grossberg, Michael; Gottipati, Srikanth
2008-08-01
Multispectral imaging is becoming an increasingly important tool for monitoring the earth and its environment from space borne and airborne platforms. Multispectral imaging data consists of visible and IR measurements from a scene across space and spectrum. Growing data rates resulting from faster scanning and finer spatial and spectral resolution make compression an increasingly critical tool to reduce data volume for transmission and archiving. Research for NOAA NESDIS has been directed at finding, for the characteristics of satellite atmospheric Earth science imager sensor data, what level of lossless compression ratio can be obtained, as well as the appropriate types of mathematics and approaches that can come close to this data's entropy level. Conventional lossless methods do not achieve the theoretical limits for lossless compression on imager data as estimated from the Shannon entropy. In a previous paper, the authors introduced a lossless compression algorithm developed for MODIS as a proxy for future NOAA-NESDIS satellite based Earth science multispectral imagers such as GOES-R. The algorithm is based on capturing spectral correlations using spectral prediction, and spatial correlations with a linear transform encoder. In decompression, the algorithm uses a statistically computed look up table to iteratively predict each channel from a channel decompressed in the previous iteration. In this paper we present a new approach which fundamentally differs from our prior work. In this new approach, instead of having a single predictor for each pair of bands we introduce a piecewise spatially varying predictor which significantly improves the compression results. Our new algorithm also now optimizes the sequence of channels we use for prediction. Our results are evaluated by comparison with a state of the art wavelet based image compression scheme, JPEG2000. We present results on the 14 channel subset of the MODIS imager, which serves as a proxy for the GOES-R imager. We will also show results of the algorithm for NOAA AVHRR data and data from SEVIRI. The algorithm is designed to be adapted to the wide range of multispectral imagers and should facilitate distribution of data globally. This compression research is managed by Roger Heymann, PE of OSD NOAA NESDIS Engineering, in collaboration with the NOAA NESDIS STAR Research Office through Mitch Goldberg, Tim Schmit, Walter Wolf.
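A toy sketch of inter-band prediction follows: one channel is predicted from another with an affine least-squares fit per spatial block (a crude stand-in for the piecewise spatially varying predictor), and the integer residuals are what an entropy coder would then compress. For true lossless reconstruction the per-block coefficients must also be stored; the block size and fit order are assumptions.
    import numpy as np

    def blockwise_predict(ref, target, block=64):
        """Return integer residuals of predicting `target` from `ref`
        with an affine fit per block; residuals feed the entropy coder."""
        residual = np.zeros_like(target, dtype=np.int32)
        ny, nx = target.shape
        for y0 in range(0, ny, block):
            for x0 in range(0, nx, block):
                r = ref[y0:y0+block, x0:x0+block].astype(float).ravel()
                t = target[y0:y0+block, x0:x0+block].astype(float).ravel()
                A = np.column_stack([r, np.ones_like(r)])
                coef, *_ = np.linalg.lstsq(A, t, rcond=None)     # per-block affine predictor
                pred = np.rint(A @ coef).astype(np.int32)
                residual[y0:y0+block, x0:x0+block] = (
                    t.astype(np.int32) - pred
                ).reshape(ref[y0:y0+block, x0:x0+block].shape)
        return residual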
NASA Technical Reports Server (NTRS)
Guenther, Bruce W. (Editor)
1991-01-01
Various papers on the calibration of passive remote observing optical and microwave instrumentation are presented. Individual topics addressed include: on-board calibration device for a wide field-of-view instrument, calibration for the medium-resolution imaging spectrometer, cryogenic radiometers and intensity-stabilized lasers for EOS radiometric calibrations, radiometric stability of the Shuttle-borne solar backscatter ultraviolet spectrometer, ratioing radiometer for use with a solar diffuser, requirements of a solar diffuser and measurements of some candidate materials, reflectance stability analysis of Spectralon diffuse calibration panels, stray light effects on calibrations using a solar diffuser, radiometric calibration of SPOT 23 HRVs, surface and aerosol models for use in radiative transfer codes. Also addressed are: calibrated intercepts for solar radiometers used in remote sensor calibration, radiometric calibration of an airborne multispectral scanner, in-flight calibration of a helicopter-mounted Daedalus multispectral scanner, technique for improving the calibration of large-area sphere sources, remote colorimetry and its applications, spatial sampling errors for a satellite-borne scanning radiometer, calibration of EOS multispectral imaging sensors and solar irradiance variability.
An algorithm for retrieving rock-desertification from multispectral remote sensing images
NASA Astrophysics Data System (ADS)
Xia, Xueqi; Tian, Qingjiu; Liao, Yan
2009-06-01
Rock-desertification is a typical environmental and ecological problem in Southwest China. As remote sensing is an important means of monitoring the spatial variation of rock-desertification, a method is developed for the measurement and information retrieval of rock-desertification from multi-spectral high-resolution remote sensing images. An MNF transform is applied to 4-band IKONOS multi-spectral remotely sensed data to reduce the number of spectral dimensions to three. In this 3-dimensional space, endmembers are extracted and analyzed. It is found that the various vegetation types group along a line, defined as the "vegetation line", along which "dark vegetation", such as coniferous forest and broadleaf forest, changes continuously to "bright vegetation", such as grasses. It is presumed that this is caused by the different proportions of shadow mixed with leaves or branches in the various types of vegetation. The normalized distance between the endmember of rocks and the vegetation line is defined as the Geometric Rock-desertification Index (GRI), which is used to scale rock-desertification. A case study with ground truth validation in Puding, Guizhou province showed the success and the advantages of this method.
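The geometric construction of the index can be sketched as the distance of the rock endmember from the line through two vegetation endmembers in the 3-band MNF space; the endmember coordinates and the normalization by the line length below are illustrative assumptions, not the paper's exact definition.
    import numpy as np

    def geometric_rock_index(rock, dark_veg, bright_veg):
        """Distance of `rock` from the line through the two vegetation endmembers
        (all points are 3-component MNF vectors), normalized by the line length."""
        rock, dark_veg, bright_veg = map(np.asarray, (rock, dark_veg, bright_veg))
        line = bright_veg - dark_veg
        dist = np.linalg.norm(np.cross(line, rock - dark_veg)) / np.linalg.norm(line)
        return dist / np.linalg.norm(line)   # normalization choice is an assumption

    # example with hypothetical MNF coordinates
    print(geometric_rock_index([4.0, 1.0, 0.5], [-1.0, 0.2, 0.0], [2.0, 2.5, 0.1]))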
NASA Astrophysics Data System (ADS)
Meyer, Hanna; Lehnert, Lukas W.; Wang, Yun; Reudenbach, Christoph; Nauss, Thomas; Bendix, Jörg
2016-04-01
Pastoralism is the dominant land-use on the Qinghai-Tibet-Plateau (QTP), providing the major economic resource for the local population. However, the pastures are widely thought to be affected by ongoing degradation whose extent is still disputed. This study uses hyperspectral in situ measurements and multispectral satellite images to assess vegetation cover and above ground biomass (AGB) as proxies of pasture degradation on a regional scale. Using Random Forests in conjunction with recursive feature selection as the modeling tool, it is tested whether the full hyperspectral information is needed or if multispectral information is sufficient to accurately estimate vegetation cover and AGB. To regionalize pasture degradation proxies, the transferability of the locally derived models to high resolution multispectral satellite data is assessed. For this purpose, 1183 hyperspectral measurements and vegetation records were sampled at 18 locations on the QTP. AGB was determined on 25 plots of 0.5 x 0.5 m. Proxies for pasture degradation were derived from the spectra by calculating narrow-band indices (NBI). Using the NBI as predictor variables, vegetation cover and AGB were modeled. Models were calculated using the hyperspectral data as well as the same data resampled to WorldView-2, QuickBird and RapidEye channels. The hyperspectral results were compared to the multispectral results. Finally, the models were applied to satellite data to map vegetation cover and AGB on a regional scale. Vegetation cover was accurately predicted by Random Forest if hyperspectral measurements were used. In contrast, errors in AGB estimations were considerably higher. Only small differences in accuracy were observed between the models based on hyperspectral and those based on multispectral data. The application of the models to satellite images generally resulted in an increase of the estimation error. Though this reflects the challenge of applying in situ measurements to satellite data, the results still show a high potential to map pasture degradation proxies on the QTP even for larger scales.
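A minimal sketch of the modeling step, assuming the narrow-band indices are already available as a feature matrix; scikit-learn's RFECV with a Random Forest regressor stands in for the recursive feature selection described above, and the cross-validation settings are assumptions.
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.feature_selection import RFECV

    def fit_cover_model(nbi_features, vegetation_cover):
        """nbi_features: (nsamples, nindices) narrow-band indices from the field spectra.
        vegetation_cover: (nsamples,) percent cover measured at the plots."""
        rf = RandomForestRegressor(n_estimators=300, random_state=0)
        selector = RFECV(rf, step=1, cv=5, scoring='neg_root_mean_squared_error')
        selector.fit(nbi_features, vegetation_cover)
        return selector   # selector.predict() can then be applied to resampled satellite indices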
Quasi-microscope concept for planetary missions.
Huck, F O; Arvidson, R E; Burcher, E E; Giat, O; Wall, S D
1977-09-01
Viking lander cameras have returned stereo and multispectral views of the Martian surface with a resolution that approaches 2 mm/lp in the near field. A two-orders-of-magnitude increase in resolution could be obtained for collected surface samples by augmenting these cameras with auxiliary optics that would neither impose special camera design requirements nor limit the cameras' field of view of the terrain. Quasi-microscope images would provide valuable data on the physical and chemical characteristics of planetary regoliths.
Synergistic use of multispectral satellite data for monitoring land surface change
NASA Technical Reports Server (NTRS)
Choudhury, Bhaskar J.
1991-01-01
Observations by the Advanced Very High Resolution Radiometer (AVHRR) onboard the NOAA satellites were used to compute visible and near infrared reflectances and surface temperature, while passive microwave observations at 37 GHz frequency by the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave Imager (SSM/I) on board, respectively, the Nimbus-7 and DMSP-F8 satellites were used to compute polarization difference. These observations were analyzed along transects from rainforest to desert over northern Africa for the period 1979-1987, which included an unprecedented drought during 1984 over the Sahel zone. Model simulations were made to understand the interrelationship among multispectral data.
Novel instrumentation of multispectral imaging technology for detecting tissue abnormity
NASA Astrophysics Data System (ADS)
Yi, Dingrong; Kong, Linghua
2012-10-01
Multispectral imaging is becoming a powerful tool in a wide range of biological and clinical studies by adding spectral, spatial and temporal dimensions to visualize tissue abnormity and the underlying biological processes. A conventional spectral imaging system includes two physically separated major components: a band-pass selection device (such as a liquid crystal tunable filter or a diffraction grating) and a scientific-grade monochromatic camera, and is expensive and bulky. Recently, a micro-arrayed narrow-band optical mosaic filter was invented and successfully fabricated to reduce the size and cost of multispectral imaging devices in order to meet the clinical requirements for medical diagnostic imaging applications. However, the challenging issue of how to integrate and place the micro filter mosaic chip on the target focal plane, i.e., the imaging sensor, of an off-the-shelf CMOS/CCD camera has not been reported anywhere. This paper presents the methods and results of integrating such a miniaturized filter with off-the-shelf CMOS imaging sensors to produce handheld real-time multispectral imaging devices for the application of early stage pressure ulcer (ESPU) detection. Unlike conventional multispectral imaging devices, which are bulky and expensive, the resulting handheld real-time multispectral ESPU detector can produce multiple images at different center wavelengths with a single shot, and therefore eliminates the image registration procedure required by traditional multispectral imaging technologies.
NASA Astrophysics Data System (ADS)
Anderson, Neal T.; Marchisio, Giovanni B.
2012-06-01
Over the last decade DigitalGlobe (DG) has built and launched a series of remote sensing satellites with steadily increasing capabilities: QuickBird, WorldView-1 (WV-1), and WorldView-2 (WV-2). Today, this constellation acquires over 2.5 million km2 of imagery on a daily basis. This paper presents the configuration and performance capabilities of each of these satellites, with emphasis on the unique spatial and spectral capabilities of WV-2. WV-2 employs high-precision star tracker and inertial measurement units to achieve a geolocation accuracy of 5 m Circular Error, 90% confidence (CE90). The native resolution of WV-2 is 0.5 m GSD in the panchromatic band and 2 m GSD in 8 multispectral bands. Four of the multispectral bands match those of the Landsat series of satellites; four new bands enable novel and expanded applications. We are rapidly establishing and refreshing a global database of very high resolution (VHR) 8-band multispectral imagery. Control moment gyroscopes (CMGs) on both WV-1 and WV-2 improve collection capacity and provide the agility to capture multi-angle sequences in rapid succession. These capabilities result in a rich combination of image features that can be exploited to develop enhanced monitoring solutions. Algorithms for interpretation and analysis can leverage: 1) broader and more continuous spectral coverage at 2 m resolution; 2) textural and morphological information from the 0.5 m panchromatic band; 3) ancillary information from stereo and multi-angle collects, including high precision digital elevation models; 4) frequent revisits and time-series collects; and 5) the global reference image archives. We introduce the topic of creative fusion of image attributes, as this provides a unifying theme for many of the papers in this WV-2 Special Session.
2016-10-10
AFRL-RX-WP-JA-2017-0189: Experimental Demonstration of Adaptive Infrared Multispectral Imaging Using Plasmonic Filter Array (March 2016 – 23 May 2016). Experimental demonstration of adaptive multispectral imaging using fabricated plasmonic spectral filter arrays and proposed target detection scenarios.
Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.; ...
2014-12-09
We present results from an ongoing effort to extend neuromimetic machine vision algorithms to multispectral data using adaptive signal processing combined with compressive sensing and machine learning techniques. Our goal is to develop a robust classification methodology that will allow for automated discretization of the landscape into distinct units based on attributes such as vegetation, surface hydrological properties, and topographic/geomorphic characteristics. We use a Hebbian learning rule to build spectral-textural dictionaries that are tailored for classification. We learn our dictionaries from millions of overlapping multispectral image patches and then use a pursuit search to generate classification features. Land cover labels are automatically generated using unsupervised clustering of sparse approximations (CoSA). We demonstrate our method on multispectral WorldView-2 data from a coastal plain ecosystem in Barrow, Alaska. We explore learning from both raw multispectral imagery and normalized band difference indices. We explore a quantitative metric to evaluate the spectral properties of the clusters in order to potentially aid in assigning land cover categories to the cluster labels. In this study, our results suggest CoSA is a promising approach to unsupervised land cover classification in high-resolution satellite imagery.
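A schematic stand-in for the dictionary-plus-clustering pipeline, assuming flattened multispectral patches as input; scikit-learn's MiniBatchDictionaryLearning and KMeans replace the Hebbian learning rule and the CoSA clustering of the paper, so this approximates the workflow rather than reproducing the authors' method.
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.cluster import KMeans

    def cluster_sparse_codes(patches, n_atoms=256, n_clusters=8):
        """patches: (npatches, npixels*nbands) flattened multispectral image patches."""
        dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                           transform_algorithm='omp',
                                           transform_n_nonzero_coefs=10)
        codes = dico.fit(patches).transform(patches)   # sparse approximation features
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(codes)
        return labels    # unsupervised land-cover cluster label per patch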
Multispectral Snapshot Imagers Onboard Small Satellite Formations for Multi-Angular Remote Sensing
NASA Technical Reports Server (NTRS)
Nag, Sreeja; Hewagama, Tilak; Georgiev, Georgi; Pasquale, Bert; Aslam, Shahid; Gatebe, Charles K.
2017-01-01
Multispectral snapshot imagers are capable of producing 2D spatial images with a single exposure at selected, numerous wavelengths using the same camera, and therefore operate differently from push broom or whiskbroom imagers. They are payloads of choice in multi-angular, multi-spectral imaging missions that use small satellites flying in controlled formation to retrieve Earth science measurements dependent on the target's Bidirectional Reflectance Distribution Function (BRDF). Narrow fields of view are needed to capture images with moderate spatial resolution. This paper quantifies the dependencies of the imager's optical system, spectral elements and camera on the requirements of the formation mission and their impact on performance metrics such as spectral range, swath and signal-to-noise ratio (SNR). All variables and metrics have been generated from a comprehensive payload design tool. The baseline optical parameters selected (diameter 7 cm, focal length 10.5 cm, pixel size 20 micron, field of view 1.15 deg) are achievable with available snapshot imaging technologies. The spectral components shortlisted were waveguide spectrometers, acousto-optic tunable filters (AOTF), electronically actuated Fabry-Perot interferometers, and integral field spectrographs. Qualitative evaluation favored AOTFs because of their low weight, small size, and flight heritage. Quantitative analysis showed that waveguide spectrometers perform better in terms of achievable swath (10-90 km) and SNR (greater than 20) for 86 wavebands, but the data volume generated would need very high-bandwidth communication to downlink. AOTFs meet the external data volume caps as well as the minimum spectral (wavebands) and radiometric (SNR) requirements, and are therefore found to be currently feasible in spite of their lower swath and SNR.
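A quick geometric sanity check connects the quoted optical parameters to the swath and ground sample distance ranges mentioned above. The orbital altitude used below (500 km) is an assumption for illustration only, not a value from the abstract.

```python
# Back-of-the-envelope check of the imager geometry quoted above.
# Altitude (500 km) is an assumed value, not taken from the paper.
import math

altitude_m = 500e3        # assumed orbit altitude
focal_m    = 0.105        # 10.5 cm focal length (from abstract)
pixel_m    = 20e-6        # 20 micron pixel pitch (from abstract)
fov_deg    = 1.15         # full field of view (from abstract)

gsd = altitude_m * pixel_m / focal_m                     # ground sample distance
swath = 2.0 * altitude_m * math.tan(math.radians(fov_deg / 2.0))

print(f"GSD   ~ {gsd:.0f} m per pixel")                  # ~95 m
print(f"Swath ~ {swath / 1e3:.1f} km")                   # ~10 km, the low end of
                                                         # the 10-90 km range
```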
Multispectral imaging with vertical silicon nanowires
Park, Hyunsung; Crozier, Kenneth B.
2013-01-01
Multispectral imaging is a powerful tool that extends the capabilities of the human eye. However, multispectral imaging systems generally are expensive and bulky, and multiple exposures are needed. Here, we report the demonstration of a compact multispectral imaging system that uses vertical silicon nanowires to realize a filter array. Multiple filter functions covering visible to near-infrared (NIR) wavelengths are simultaneously defined in a single lithography step using a single material (silicon). Nanowires are then etched and embedded into polydimethylsiloxane (PDMS), thereby realizing a device with eight filter functions. By attaching it to a monochrome silicon image sensor, we successfully realize an all-silicon multispectral imaging system. We demonstrate visible and NIR imaging. We show that the latter is highly sensitive to vegetation and furthermore enables imaging through objects opaque to the eye. PMID:23955156
NASA Astrophysics Data System (ADS)
Neigh, C. S. R.; Carroll, M.; Wooten, M.; McCarty, J. L.; Powell, B.; Husak, G. J.; Enenkel, M.; Hain, C.
2017-12-01
Global food production in the developing world occurs within sub-hectare fields that are difficult to identify with moderate resolution satellite imagery. Knowledge about the distribution of these fields is critical in food security programs. We developed a semi-automated image segmentation approach using wall-to-wall sub-meter imagery with high-end computing (HEC) to map crop area (CA) throughout Tigray, Ethiopia that encompasses over 41,000 km2. Our approach tested multiple HEC processing streams to reduce processing time and minimize mapping error. We applied multiple resolution smoothing kernels to capture differences in land surface texture associated with CA. Typically, very-small fields (mean < 2 ha) have a smooth image roughness compared to natural scrub/shrub woody vegetation at the 1 m scale and these features can be segmented in panchromatic imagery with multi-level histogram thresholding. We found multi-temporal very-high resolution (VHR) panchromatic imagery with multi-spectral VHR and moderate resolution imagery are sufficient for extracting critical CA information needed in food security programs. We produced a 2011 ‒ 2015 CA map using over 3,000 WorldView-1 panchromatic images wall-to-wall in 1/2° mosaics for Tigray, Ethiopia in 1 week. We evaluated CA estimates with nearly 3,000 WorldView-2 2 m multispectral 250 × 250 m image subsets, with seven expert interpretations, and with in-situ global positioning system (GPS) photography. Our CA estimates ranged from 32 to 41% in sub-regions of Tigray with median maximum per bin commission and omission errors of 11% and 1% respectively, with most of the error occurring in bins less than 15%. This empirical, simple, and low direct cost approach via U.S. government license agreement and HEC could be a viable big-data methodology to extract wall-to-wall CA for other regions of the world that have very-small agriculture fields with similar image texture.
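The texture-then-threshold idea above (smooth cropped fields versus rough woody vegetation) can be illustrated with a short sketch. It uses a local standard deviation as the roughness measure and scikit-image's multi-level Otsu threshold as a stand-in for the authors' multi-level histogram thresholding; the kernel size and number of classes are illustrative choices.

```python
# Hedged sketch of texture-based crop-area masking in panchromatic imagery:
# local standard deviation captures image roughness, and a multi-level Otsu
# threshold separates smooth (cropped) from rough (shrub/woody) surfaces.
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.filters import threshold_multiotsu

def crop_area_mask(pan, kernel=15, classes=3):
    """pan: 2-D panchromatic array; returns a boolean 'smooth surface' mask."""
    pan = pan.astype(np.float64)
    mean = uniform_filter(pan, size=kernel)
    mean_sq = uniform_filter(pan * pan, size=kernel)
    local_std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))  # roughness

    thresholds = threshold_multiotsu(local_std, classes=classes)
    labels = np.digitize(local_std, bins=thresholds)
    return labels == 0          # smoothest class as candidate crop area
```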
Airado-Rodríguez, Diego; Høy, Martin; Skaret, Josefine; Wold, Jens Petter
2014-05-01
The potential of multispectral imaging of autofluorescence to map sensory flavour properties and fluorophore concentrations in cod caviar paste has been investigated. Cod caviar paste was used as a case product and it was stored over time, under different headspace gas composition and light exposure conditions, to obtain a relevant span in lipid oxidation and sensory properties. Samples were divided in two sets, calibration and test sets, with 16 and 7 samples, respectively. A third set of samples was prepared with induced gradients in lipid oxidation and sensory properties by light exposure of certain parts of the sample surface. Front-face fluorescence emission images were obtained for excitation wavelength 382 nm at 11 different channels ranging from 400 to 700 nm. The analysis of the obtained sets of images was divided in two parts: First, in an effort to compress and extract relevant information, multivariate curve resolution was applied on the calibration set and three spectral components and their relative concentrations in each sample were obtained. The obtained profiles were employed to estimate the concentrations of each component in the images of the heterogeneous samples, giving chemical images of the distribution of fluorescent oxidation products, protoporphyrin IX and photoprotoporphyrin. Second, regression models for sensory attributes related to lipid oxidation were constructed based on the spectra of homogeneous samples from the calibration set. These models were successfully validated with the test set. The models were then applied for pixel-wise estimation of sensory flavours in the heterogeneous images, giving rise to sensory images. As far as we know this is the first time that sensory images of odour and flavour are obtained based on multispectral imaging. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Materne, A.; Bardoux, A.; Geoffray, H.; Tournier, T.; Kubik, P.; Morris, D.; Wallace, I.; Renard, C.
2017-11-01
The PLEIADES-HR Earth observing satellites, under CNES development, combine a 0.7 m resolution panchromatic channel and a multispectral channel with 2.8 m resolution in 4 spectral bands. The two satellites will be placed on a sun-synchronous orbit at an altitude of 695 km. The camera operates in push broom mode, providing images across a 20 km swath. This paper focuses on the specifications, design and performance of the TDI detectors developed by e2v technologies under CNES contract for the panchromatic channel. Design drivers derived from the mission and satellite requirements, the architecture of the sensor, and measurement results for key performances of the first prototypes are presented.
NASA Astrophysics Data System (ADS)
Juutinen, Sari; Aurela, Mika; Mikola, Juha; Räsänen, Aleksi; Virtanen, Tarmo
2016-04-01
Remote sensing is a key methodology when monitoring the responses of arctic ecosystems to climatic warming. The short growing season and rapid vegetation development, however, set demands on the timing of image acquisition in the arctic. We used multispectral very high spatial resolution satellite images to study the effect of vegetation phenology on the spectral reflectance and image interpretation in the low arctic tundra in coastal Siberia (Tiksi, 71°35'39"N, 128°53'17"E). The study site mainly consists of peatlands, tussock, dwarf shrub, and grass tundra, and stony areas with some lichen and shrub patches. We tested the hypotheses that (1) plant phenology is responsive to the interannual weather variation and (2) the phenological state of vegetation has an impact on satellite image interpretation and the ability to distinguish between the plant communities. We used an empirical transfer function with temperature sums as drivers to reconstruct daily leaf area index (LAI) for the different plant communities for years 2005 and 2010-2014, based on measured LAI development in summer 2014. Satellite images taken during the growing season were acquired for two years with late and early springs, and short and long growing seasons, respectively. LAI dynamics showed considerable interannual variation due to weather variation, and particularly the relative contribution of graminoid dominated communities was sensitive to these phenology shifts. We have also analyzed the differences in the reflectance values between the two satellite images, taking into account the LAI dynamics. These results will increase our understanding of the pitfalls that may arise from the timing of image acquisition when interpreting the vegetation structure in a heterogeneous tundra landscape. Very high spatial resolution multispectral images are available at reasonable cost, but not at high temporal resolution, which may lead to compromises when matching ground truth and the imagery. On the other hand, to identify existing plant communities, high resolution images are needed due to the fragmented nature of tundra vegetation communities. Temporal differences in the phenology among different plant functional types may also obscure the image interpretations when using spatially low resolution images in heterogeneous landscapes. Phenological features of plant communities should be acknowledged when plant functional or community type based classifications are used in models to estimate global greenhouse gas emissions and when changes in vegetation are monitored, for example to indicate permafrost thawing or changes in growing season length.
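The abstract does not give the form of the temperature-sum transfer function used to reconstruct daily LAI; a minimal stand-in, shown below, is a logistic curve driven by growing degree-days. All parameter values are illustrative assumptions, not values from the study.

```python
# Minimal sketch of a temperature-sum-driven LAI reconstruction: cumulative
# growing degree-days (GDD) feed a logistic development curve. Parameters
# (base temperature, LAI_max, midpoint, rate) are illustrative only.
import numpy as np

def growing_degree_days(daily_mean_temp_c, base_c=0.0):
    """Cumulative temperature sum above a base temperature."""
    t = np.asarray(daily_mean_temp_c, dtype=float)
    return np.cumsum(np.maximum(t - base_c, 0.0))

def lai_from_gdd(gdd, lai_max=1.5, gdd_mid=300.0, rate=0.02):
    """Logistic LAI development driven by the temperature sum."""
    return lai_max / (1.0 + np.exp(-rate * (gdd - gdd_mid)))

# Example: reconstruct a daily LAI curve for one community and one summer.
daily_temps = np.clip(np.random.default_rng(1).normal(8, 4, 120), -5, 25)
lai = lai_from_gdd(growing_degree_days(daily_temps))
```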
Application and evaluation of ISVR method in QuickBird image fusion
NASA Astrophysics Data System (ADS)
Cheng, Bo; Song, Xiaolu
2014-05-01
QuickBird satellite images are widely used in many fields, and applications have put forward high requirements for the integration of the spatial and spectral information of the imagery. A fusion method for high resolution remote sensing images based on ISVR is presented in this study. The core principle of ISVR is to take advantage of radiometric calibration to remove the effects of the differing gains and errors of the satellite's sensors. After transformation from DN to radiance, the multispectral image's energy is used to simulate the panchromatic band. A linear regression analysis is carried out in the simulation process to find a new synthetic panchromatic image that is highly linearly correlated with the original panchromatic image. In order to evaluate, test and compare the algorithm results, this paper used ISVR and two other fusion methods in a comparative study of spatial and spectral information, taking the average gradient and the correlation coefficient as indicators. Experiments showed that this method could significantly improve the quality of the fused image, especially in preserving spectral information, maximizing the spectral information of the original multispectral images while maintaining abundant spatial information.
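The regression step and the two evaluation indicators named above can be sketched as follows. The least-squares fit of the multispectral radiance bands against the panchromatic band, and the average gradient and correlation coefficient functions, are generic implementations under assumed array layouts, not the paper's exact procedure.

```python
# Sketch of simulating a synthetic panchromatic band as a linear combination
# of multispectral radiance bands, plus the two indicators used above.
import numpy as np

def simulate_pan(ms_radiance, pan):
    """ms_radiance: (bands, rows, cols); pan: (rows, cols), both in radiance."""
    b = ms_radiance.shape[0]
    A = ms_radiance.reshape(b, -1).T                 # one row per pixel
    A = np.column_stack([A, np.ones(A.shape[0])])    # allow an offset term
    coeffs, *_ = np.linalg.lstsq(A, pan.ravel(), rcond=None)
    return (A @ coeffs).reshape(pan.shape), coeffs

def average_gradient(img):
    """Spatial-detail indicator: mean gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def correlation_coefficient(a, b):
    """Spectral-fidelity indicator between fused and reference bands."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]
```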
Classification by Using Multispectral Point Cloud Data
NASA Astrophysics Data System (ADS)
Liao, C. T.; Huang, H. H.
2012-07-01
Remote sensing images are generally recorded in a two-dimensional format containing multispectral information. The semantic information is clearly visualized, so ground features can be recognized and classified easily via supervised or unsupervised classification methods. Nevertheless, the shortcomings of multispectral images are that they depend strongly on lighting conditions and that the classification results lack three-dimensional semantic information. On the other hand, LiDAR has become a main technology for acquiring high accuracy point cloud data. The advantages of LiDAR are a high data acquisition rate, independence from lighting conditions, and the ability to directly produce three-dimensional coordinates. However, compared with multispectral images, its disadvantage is the lack of multispectral information, which remains a challenge for ground feature classification from massive point cloud data. Consequently, by combining the advantages of both LiDAR and multispectral images, point cloud data with three-dimensional coordinates and multispectral information can provide an integrated solution for point cloud classification. Therefore, this research acquires visible light and near infrared images via close range photogrammetry, matching the images automatically through a free online service to generate a multispectral point cloud. A three-dimensional affine coordinate transformation is then used to compare the data increment. Finally, thresholds on height and color information are used for classification.
Wetland Vegetation Integrity Assessment with Low Altitude Multispectral Uav Imagery
NASA Astrophysics Data System (ADS)
Boon, M. A.; Tesfamichael, S.
2017-08-01
Until recently, multispectral sensors were too heavy and bulky to be carried on Unmanned Aerial Vehicles (UAVs); this has changed and they are now commercially available. The usage of these sensors is mostly directed towards the agricultural sector, where the focus is on precision farming. Applications of these sensors for mapping of wetland ecosystems are rare. Here, we evaluate the performance of low altitude multispectral UAV imagery to determine the state of wetland vegetation in a localised spatial area. Specifically, NDVI derived from multispectral UAV imagery was used to inform the determination of the integrity of the wetland vegetation. Furthermore, we tested different software applications for the processing of the imagery. The advantages and disadvantages we experienced with these applications are also shortly presented in this paper. A JAG-M fixed-wing imaging system equipped with a MicaSense RedEdge multispectral camera was utilised for the survey. A single surveying campaign was undertaken in early autumn of a 17 ha study area at the Kameelzynkraal farm, Gauteng Province, South Africa. Structure-from-motion photogrammetry software was used to reconstruct the camera positions and terrain features to derive a high resolution orthorectified mosaic. The MicaSense Atlas cloud-based data platform, Pix4D and PhotoScan were utilised for the processing. The WET-Health level one methodology was followed for the vegetation assessment, where wetland health is a measure of the deviation of a wetland's structure and function from its natural reference condition. An on-site evaluation of the vegetation integrity was first completed. Disturbance classes were then mapped using the high resolution multispectral orthoimages and NDVI. The WET-Health vegetation module completed with the aid of the multispectral UAV products indicated that the vegetation of the wetland is largely modified ("D" PES Category) and that the condition is expected to deteriorate (change score) in the future. However, a lower impact score was determined utilising the multispectral UAV imagery and NDVI. The result is a more accurate estimation of the impacts in the wetland.
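For reference, the NDVI layer used above is a simple band ratio of the red and near-infrared channels of the multispectral orthomosaic. The band ordering in the example call is a hypothetical assumption that depends on the camera.

```python
# NDVI from red and near-infrared reflectance bands; eps avoids division by
# zero over dark or masked pixels.
import numpy as np

def ndvi(red, nir, eps=1e-9):
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Example with a (bands, rows, cols) reflectance stack where band 2 is red
# and band 4 is near-infrared (hypothetical ordering):
# veg_index = ndvi(stack[2], stack[4])
```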
USDA-ARS?s Scientific Manuscript database
Thermal and multispectral remote sensing data from low-altitude aircraft can provide high spatial resolution necessary for sub-field (= 10 m) and plant canopy (= 1 m) scale evapotranspiration (ET) monitoring. In this study, high resolution aircraft sub-meter scale thermal infrared and multispectral...
MULTISCALE THERMAL-INFRARED MEASUREMENTS OF THE MAUNA LOA CALDERA, HAWAII
DOE Office of Scientific and Technical Information (OSTI.GOV)
L. BALICK; A. GILLESPIE; ET AL
2001-03-01
Until recently, most thermal infrared measurements of natural scenes have been made at disparate scales, typically 10⁻³–10⁻² m (spectra) and 10²–10³ m (satellite images), with occasional airborne images (10¹ m) filling the gap. Temperature and emissivity fields are spatially heterogeneous over a similar range of scales, depending on scene composition. A common problem for the land surface, therefore, has been relating field spectral and temperature measurements to satellite data, yet in many cases this is necessary if satellite data are to be interpreted to yield meaningful information about the land surface. Recently, three new satellites with thermal imaging capability at the 10¹–10² m scale have been launched: MTI, TERRA, and Landsat 7. MTI acquires multispectral images in the mid-infrared (3–5 µm) and longwave infrared (8–10 µm) with 20 m resolution. ASTER and MODIS aboard TERRA acquire multispectral longwave images at 90 m and 500–1000 m, respectively, and MODIS also acquires multispectral mid-infrared images. Landsat 7 acquires broadband longwave images at 60 m. As part of an experiment to validate the temperature and thermal emissivity values calculated from MTI and ASTER images, we have targeted the summit region of Mauna Loa for field characterization and near-simultaneous satellite imaging, both on daytime and nighttime overpasses, and compare the results to previously acquired 10⁻¹ m airborne images, ground-level multispectral FLIR images, and the field spectra. Mauna Loa was chosen in large part because the 4 × 6 km summit caldera, flooded with fresh basalt in 1984, appears to be spectrally homogeneous at scales between 10⁻¹ and 10² m, facilitating the comparison of sensed temperature. The validation results suggest that, with careful atmospheric compensation, it is possible to match ground measurements with measurements from space, and to use the Mauna Loa validation site for cross-comparison of thermal infrared sensors and temperature/emissivity extraction algorithms.
NASA Technical Reports Server (NTRS)
Cassinis, R. (Principal Investigator); Lechi, G. M.; Marino, C. M.; Tonelli, A. M.
1974-01-01
The author has identified the following significant results. A method has been suggested for the forecasting of the lateral eruptions of Mount Etna, through the multispectral analysis of the vegetation behavior. Unknown geological lineaments which seem to be related to deep crustal movements have been discovered using the ERTS-1 imagery. Results in the geological field were obtained in the study of the general structure of the Alpine range. In the field of official vegetation classification, ERTS-1 images were used for a preliminary study of rice fields in northern Italy. Very good experimental results have been obtained using the Skylab multispectral photographs. In the field of hydrogeology and soil type discrimination discoveries of unknown paleoriver beds have been made in the northeastern part of the Po Valley using the multispectral imagery of SL3. The superior resolution of Skylab was a fundamental element for the success of this investigation.
Evaluation of Skybox Video and Still Image products
NASA Astrophysics Data System (ADS)
d'Angelo, P.; Kuschk, G.; Reinartz, P.
2014-11-01
The SkySat-1 satellite launched by Skybox Imaging on November 21, 2013 opens a new chapter in civilian earth observation, as it is the first civilian satellite to image a target in high definition panchromatic video for up to 90 seconds. The small satellite with a mass of 100 kg carries a telescope with 3 frame sensors. Two products are available: panchromatic video with a resolution of around 1 meter and a frame size of 2560 × 1080 pixels at 30 frames per second. Additionally, the satellite can collect still imagery with a swath of 8 km in the panchromatic band, and multispectral images with 4 bands. Using super-resolution techniques, sub-meter resolution is reached for the still imagery. The paper provides an overview of the satellite design and imaging products. The still imagery product consists of 3 stripes of frame images with a footprint of approximately 2.6 × 1.1 km. Using bundle block adjustment, the frames are registered, and their accuracy is evaluated. Image quality of the panchromatic, multispectral and pansharpened products is evaluated. The video product used in this evaluation consists of a 60 second gazing acquisition of Las Vegas. A DSM is generated by dense stereo matching. Multiple techniques such as pairwise matching or multi-image matching are used and compared. As no ground truth height reference model is available to the authors, comparisons on flat surfaces and between differently matched DSMs are performed. Additionally, visual inspection of the DSM and DSM profiles shows a detailed reconstruction of small features and large skyscrapers.
Radiometric calibration of the Earth observing system's imaging sensors
NASA Technical Reports Server (NTRS)
Slater, P. N.
1987-01-01
Philosophy, requirements, and methods of calibration of multispectral space sensor systems as applicable to the Earth Observing System (EOS) are discussed. Vicarious methods for calibration of low spatial resolution systems, with respect to the Advanced Very High Resolution Radiometer (AVHRR), are then summarized. Finally, a theoretical introduction is given to a new vicarious method of calibration using the ratio of diffuse-to-global irradiance at the Earth's surface as the key input. This may provide an additional independent method for in-flight calibration.
Intensity-hue-saturation-based image fusion using iterative linear regression
NASA Astrophysics Data System (ADS)
Cetin, Mufit; Tepecik, Abdulkadir
2016-10-01
The image fusion process basically produces a high-resolution image by combining the superior features of a low spatial resolution multispectral image and a high-resolution panchromatic image. Despite its common usage due to its fast computing capability and high sharpening ability, the intensity-hue-saturation (IHS) fusion method may cause some color distortions, especially when large gray value differences exist among the images to be combined. This paper proposes a spatially adaptive IHS (SA-IHS) technique to avoid these distortions by automatically adjusting the exact spatial information to be injected into the multispectral image during the fusion process. The SA-IHS method essentially suppresses the effects of those pixels that cause the spectral distortions by assigning weaker weights to them and avoiding a large number of redundancies on the fused image. The experimental database consists of IKONOS images, and the experimental results both visually and statistically prove the enhancement of the proposed algorithm when compared with several other IHS-like methods such as IHS, generalized IHS, fast IHS, and generalized adaptive IHS.
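The generalized IHS idea that SA-IHS builds on is small enough to sketch: the difference between the panchromatic band and an intensity component is injected into every multispectral band. The per-pixel weight map below marks where an adaptive variant would down-weight distortion-prone pixels; its default of 1 everywhere (classical IHS) is an illustrative placeholder, not the SA-IHS weighting scheme itself.

```python
# Minimal sketch of fast/generalized IHS pansharpening with an optional
# per-pixel weight map for spatially adaptive detail injection.
import numpy as np

def gihs_fusion(ms, pan, weights=None):
    """ms: (bands, rows, cols) upsampled multispectral; pan: (rows, cols)."""
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)
    intensity = ms.mean(axis=0)                 # simple intensity component
    detail = pan - intensity                    # spatial detail to inject
    if weights is None:
        weights = np.ones_like(pan)             # classical (non-adaptive) IHS
    return ms + weights[None, :, :] * detail[None, :, :]
```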
Markov-random-field-based super-resolution mapping for identification of urban trees in VHR images
NASA Astrophysics Data System (ADS)
Ardila, Juan P.; Tolpekin, Valentyn A.; Bijker, Wietske; Stein, Alfred
2011-11-01
Identification of tree crowns from remote sensing requires detailed spectral information and submeter spatial resolution imagery. Traditional pixel-based classification techniques do not fully exploit the spatial and spectral characteristics of remote sensing datasets. We propose a contextual and probabilistic method for detection of tree crowns in urban areas using a Markov random field based super resolution mapping (SRM) approach in very high resolution images. Our method defines an objective energy function in terms of the conditional probabilities of panchromatic and multispectral images and it locally optimizes the labeling of tree crown pixels. Energy and model parameter values are estimated from multiple implementations of SRM in tuning areas and the method is applied in QuickBird images to produce a 0.6 m tree crown map in a city of The Netherlands. The SRM output shows an identification rate of 66% and commission and omission errors in small trees and shrub areas. The method outperforms tree crown identification results obtained with maximum likelihood, support vector machines and SRM at nominal resolution (2.4 m) approaches.
NASA Astrophysics Data System (ADS)
Xu, Yiming; Smith, Scot E.; Grunwald, Sabine; Abd-Elrahman, Amr; Wani, Suhas P.
2017-01-01
Soil prediction models based on spectral indices from some multispectral images are too coarse to characterize the spatial pattern of soil properties in small and heterogeneous agricultural lands. Image pan-sharpening has seldom been utilized in Digital Soil Mapping research before. This research aimed to analyze the effects of pan-sharpened (PAN) remote sensing spectral indices on soil prediction models in smallholder farm settings. This research fused the panchromatic band and multispectral (MS) bands of WorldView-2, GeoEye-1, and Landsat 8 images in a village in Southern India by Brovey, Gram-Schmidt and Intensity-Hue-Saturation methods. Random Forest was utilized to develop soil total nitrogen (TN) and soil exchangeable potassium (Kex) prediction models by incorporating multiple spectral indices from the PAN and MS images. Overall, our results showed that PAN remote sensing spectral indices have spectral characteristics relative to soil TN and Kex similar to those of MS remote sensing spectral indices. No soil prediction model incorporating a specific type of pan-sharpened spectral index consistently had the strongest prediction capability for soil TN and Kex. The incorporation of pan-sharpened remote sensing spectral data not only increased the spatial resolution of the soil prediction maps, but also enhanced the prediction accuracy of the soil prediction models. Small farms with limited footprint, fragmented ownership and diverse crop cycles should benefit greatly from pan-sharpened high spatial resolution imagery for soil property mapping. Our results show that multiple high and medium resolution images can be used to map soil properties, suggesting the possibility of an improvement in the maps' update frequency. Additionally, the results should benefit the large agricultural community through the reduction of routine soil sampling cost and improved prediction accuracy.
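The modeling step described above reduces to a Random Forest regression from per-sample spectral indices to a measured soil property. The sketch below is a generic instance of that step; feature layout, the train/test split and the number of trees are illustrative assumptions.

```python
# Sketch of a Random Forest soil-property model trained on spectral indices.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

def fit_soil_model(spectral_indices, soil_tn):
    """spectral_indices: (n_samples, n_indices); soil_tn: (n_samples,)."""
    X_train, X_test, y_train, y_test = train_test_split(
        spectral_indices, soil_tn, test_size=0.3, random_state=0)
    model = RandomForestRegressor(n_estimators=500, random_state=0)
    model.fit(X_train, y_train)
    print("R^2 on held-out samples:", r2_score(y_test, model.predict(X_test)))
    return model
```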
Observation of SO2 degassing at Stromboli volcano using a hyperspectral thermal infrared imager
NASA Astrophysics Data System (ADS)
Smekens, Jean-François; Gouhier, Mathieu
2018-05-01
Thermal infrared (TIR) imaging is a common tool for the monitoring of volcanic activity. Broadband cameras with increasing sampling frequency give great insight into the physical processes taking place during effusive and explosive events, while Fourier transform infrared (FTIR) methods provide high resolution spectral information used to assess the composition of volcanic gases but are often limited to a single point of interest. Continuing developments in detector technology have given rise to a new class of hyperspectral imagers combining the advantages of both approaches. In this work, we present the results of our observations of volcanic activity with a ground-based imager, the Telops Hyper-Cam LW, used to detect emissions of sulfur dioxide (SO2) produced at the vent, with data acquired at Stromboli volcano (Italy) in early October 2015. We have developed an innovative technique based on a curve-fitting algorithm to quickly extract spectral information from high-resolution datasets, allowing fast and reliable identification of SO2. We show in particular that weak SO2 emissions, such as inter-eruptive gas puffing, can be easily detected using this technology, even with poor weather conditions during acquisition (e.g., high relative humidity, presence of fog and/or ash). Then, artificially reducing the spectral resolution of the instrument, we recreated a variety of commonly used multispectral configurations to examine the efficiency of four qualitative SO2 indicators based on simple Brightness Temperature Difference (BTD). Our results show that quickly changing conditions at the vent - including but not limited to the presence of summit fog - render the establishment of meaningful thresholds for BTD indicators difficult. Building on those results, we propose recommendations on the use of multispectral imaging for SO2 monitoring and routine measurements from ground-based instruments.
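A generic BTD indicator of the kind examined above can be sketched as follows: invert the Planck function at an SO2-absorbing channel and at a window reference channel, then flag pixels where the difference drops below a threshold. The channel wavelengths (8.6 µm and 11 µm), the threshold, and the radiance units (W m⁻² sr⁻¹ m⁻¹) are assumptions for illustration, not the paper's four specific indicators.

```python
# Brightness temperature from spectral radiance (Planck inversion) and a
# simple Brightness Temperature Difference (BTD) SO2 flag.
import numpy as np

C1 = 1.191042e-16   # 2*h*c^2  [W m^2 sr^-1]
C2 = 1.438777e-2    # h*c/k    [m K]

def brightness_temperature(radiance, wavelength_m):
    """Invert the Planck law for spectral radiance per unit wavelength."""
    return C2 / (wavelength_m * np.log(1.0 + C1 / (wavelength_m ** 5 * radiance)))

def so2_btd_flag(rad_86, rad_110, threshold_k=-1.0):
    bt_86 = brightness_temperature(rad_86, 8.6e-6)     # SO2 absorption band
    bt_110 = brightness_temperature(rad_110, 11.0e-6)  # window reference band
    btd = bt_86 - bt_110
    return btd, btd < threshold_k     # negative BTD suggests SO2 absorption
```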
NASA Astrophysics Data System (ADS)
Navratil, Peter; Wilps, Hans
2013-01-01
Three different object-based image classification techniques are applied to high-resolution satellite data for the mapping of the habitats of Asian migratory locust (Locusta migratoria migratoria) in the southern Aral Sea basin, Uzbekistan. A set of panchromatic and multispectral Système Pour l'Observation de la Terre-5 satellite images was spectrally enhanced by normalized difference vegetation index and tasseled cap transformation and segmented into image objects, which were then classified by three different classification approaches: a rule-based hierarchical fuzzy threshold (HFT) classification method was compared to a supervised nearest neighbor classifier and classification tree analysis by the quick, unbiased, efficient statistical trees algorithm. Special emphasis was laid on the discrimination of locust feeding and breeding habitats due to the significance of this discrimination for practical locust control. Field data on vegetation and land cover, collected at the time of satellite image acquisition, was used to evaluate classification accuracy. The results show that a robust HFT classifier outperformed the two automated procedures by 13% overall accuracy. The classification method allowed a reliable discrimination of locust feeding and breeding habitats, which is of significant importance for the application of the resulting data for an economically and environmentally sound control of locust pests because exact spatial knowledge on the habitat types allows a more effective surveying and use of pesticides.
The Multispectral Imaging Science Working Group. Volume 2: Working group reports
NASA Technical Reports Server (NTRS)
Cox, S. C. (Editor)
1982-01-01
Summaries of the various multispectral imaging science working groups are presented. Current knowledge of the spectral and spatial characteristics of the Earth's surface is outlined and the present and future capabilities of multispectral imaging systems are discussed.
NASA Technical Reports Server (NTRS)
Brand, R. R.; Barker, J. L.
1983-01-01
A multistage sampling procedure using image processing, geographical information systems, and analytical photogrammetry is presented which can be used to guide the collection of representative, high-resolution spectra and discrete reflectance targets for future satellite sensors. The procedure is general and can be adapted to characterize areas as small as minor watersheds and as large as multistate regions. Beginning with a user-determined study area, successive reductions in size and spectral variation are performed using image analysis techniques on data from the Multispectral Scanner, orbital and simulated Thematic Mapper, low altitude photography synchronized with the simulator, and associated digital data. An integrated image-based geographical information system supports processing requirements.
Evaluating an image-fusion algorithm with synthetic-image-generation tools
NASA Astrophysics Data System (ADS)
Gross, Harry N.; Schott, John R.
1996-06-01
An algorithm that combines spectral mixing and nonlinear optimization is used to fuse multiresolution images. Image fusion merges images of different spatial and spectral resolutions to create a high spatial resolution multispectral combination. High spectral resolution allows identification of materials in the scene, while high spatial resolution locates those materials. In this algorithm, conventional spectral mixing estimates the percentage of each material (called endmembers) within each low resolution pixel. Three spectral mixing models are compared: unconstrained, partially constrained, and fully constrained. In the partially constrained application, the endmember fractions are required to sum to one. In the fully constrained application, all fractions are additionally required to lie between zero and one. While negative fractions seem inappropriate, they can arise from random spectral realizations of the materials. In the second part of the algorithm, the low resolution fractions are used as inputs to a constrained nonlinear optimization that calculates the endmember fractions for the high resolution pixels. The constraints mirror the low resolution constraints and maintain consistency with the low resolution fraction results. The algorithm can use one or more higher resolution sharpening images to locate the endmembers to high spatial accuracy. The algorithm was evaluated with synthetic image generation (SIG) tools. A SIG developed image can be used to control the various error sources that are likely to impair the algorithm performance. These error sources include atmospheric effects, mismodeled spectral endmembers, and variability in topography and illumination. By controlling the introduction of these errors, the robustness of the algorithm can be studied and improved upon. The motivation for this research is to take advantage of the next generation of multi/hyperspectral sensors. Although the hyperspectral images will be of modest to low resolution, fusing them with high resolution sharpening images will produce a higher spatial resolution land cover or material map.
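The fully constrained mixing step (fractions bounded to [0, 1] and summing to one) can be illustrated with a standard bounded least-squares formulation, where the sum-to-one constraint is enforced softly via a heavily weighted extra row. This is a common generic implementation under assumed known endmember spectra, not the paper's specific optimizer.

```python
# Sketch of fully constrained linear spectral unmixing for one pixel.
import numpy as np
from scipy.optimize import lsq_linear

def fully_constrained_fractions(endmembers, pixel_spectrum, penalty=1e3):
    """endmembers: (n_bands, n_endmembers); pixel_spectrum: (n_bands,)."""
    n_bands, n_end = endmembers.shape
    # Augment with a heavily weighted row enforcing sum(fractions) ~= 1.
    A = np.vstack([endmembers, penalty * np.ones((1, n_end))])
    b = np.concatenate([pixel_spectrum, [penalty]])
    res = lsq_linear(A, b, bounds=(0.0, 1.0))
    return res.x    # endmember fractions in [0, 1], summing approximately to 1
```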
An airborne thematic thermal infrared and electro-optical imaging system
NASA Astrophysics Data System (ADS)
Sun, Xiuhong; Shu, Peter
2011-08-01
This paper describes an advanced Airborne Thematic Thermal InfraRed and Electro-Optical Imaging System (ATTIREOIS) and its potential applications. ATTIREOIS sensor payload consists of two sets of advanced Focal Plane Arrays (FPAs) - a broadband Thermal InfraRed Sensor (TIRS) and a four (4) band Multispectral Electro-Optical Sensor (MEOS) to approximate Landsat ETM+ bands 1,2,3,4, and 6, and LDCM bands 2,3,4,5, and 10+11. The airborne TIRS is a 3-axis stabilized payload capable of providing 3D photogrammetric images with a 1,850-pixel swath width via pushbroom operation. MEOS has a total of 116 million simultaneous sensor counts capable of providing 3 cm spatial resolution multispectral orthophotos for continuous airborne mapping. ATTIREOIS is a complete standalone and easy-to-use portable imaging instrument for light aerial vehicle deployment. Its miniaturized backend data system operates all ATTIREOIS imaging sensor components, an INS/GPS, and an e-Gimbal™ Control Electronic Unit (ECU) with a data throughput of 300 Megabytes/sec. The backend provides advanced onboard processing, performing autonomous raw sensor imagery development, TIRS image track-recovery reconstruction, LWIR/VNIR multi-band co-registration, and photogrammetric image processing. With geometric optics and boresight calibrations, the ATTIREOIS data products are directly georeferenced with an accuracy of approximately one meter. A prototype ATTIREOIS has been configured. Its sample LWIR/EO image data will be presented. Potential applications of ATTIREOIS include: 1) Providing timely and cost-effective, precisely and directly georeferenced surface emissive and solar reflective LWIR/VNIR multispectral images via a private Google Earth Globe to enhance NASA's Earth science research capabilities; and 2) Underflight satellites to support satellite measurement calibration and validation observations.
Smartphone-based multispectral imaging: system development and potential for mobile skin diagnosis.
Kim, Sewoong; Cho, Dongrae; Kim, Jihun; Kim, Manjae; Youn, Sangyeon; Jang, Jae Eun; Je, Minkyu; Lee, Dong Hun; Lee, Boreom; Farkas, Daniel L; Hwang, Jae Youn
2016-12-01
We investigate the potential of mobile smartphone-based multispectral imaging for the quantitative diagnosis and management of skin lesions. Recently, various mobile devices such as a smartphone have emerged as healthcare tools. They have been applied for the early diagnosis of nonmalignant and malignant skin diseases. Particularly, when they are combined with an advanced optical imaging technique such as multispectral imaging and analysis, it would be beneficial for the early diagnosis of such skin diseases and for further quantitative prognosis monitoring after treatment at home. Thus, we demonstrate here the development of a smartphone-based multispectral imaging system with high portability and its potential for mobile skin diagnosis. The results suggest that smartphone-based multispectral imaging and analysis has great potential as a healthcare tool for quantitative mobile skin diagnosis.
Multispectral computational ghost imaging with multiplexed illumination
NASA Astrophysics Data System (ADS)
Huang, Jian; Shi, Dongfeng
2017-07-01
Computational ghost imaging has attracted wide attention from researchers in many fields over the last two decades. Multispectral imaging as one application of computational ghost imaging possesses spatial and spectral resolving abilities, and is very useful for surveying scenes and extracting detailed information. Existing multispectral imagers mostly utilize narrow band filters or dispersive optical devices to separate light of different wavelengths, and then use multiple bucket detectors or an array detector to record them separately. Here, we propose a novel multispectral ghost imaging method that uses one single bucket detector with multiplexed illumination to produce a colored image. The multiplexed illumination patterns are produced by three binary encoded matrices (corresponding to the red, green and blue colored information, respectively) and random patterns. The results of the simulation and experiment have verified that our method can be effective in recovering the colored object. Multispectral images are produced simultaneously by one single-pixel detector, which significantly reduces the amount of data acquisition.
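The core reconstruction in computational ghost imaging is a correlation between the single-pixel (bucket) signal and the sequence of illumination patterns. In the multiplexed multispectral scheme described above, the same bucket sequence would be correlated against each color's own encoded pattern set; the per-channel demultiplexing shown in the comment is a simplified assumption, not the authors' exact encoding.

```python
# Minimal correlation-based ghost-image reconstruction.
import numpy as np

def ghost_image(patterns, bucket):
    """patterns: (n_meas, rows, cols) illumination patterns;
    bucket: (n_meas,) single-pixel detector readings."""
    b = bucket - bucket.mean()
    p = patterns - patterns.mean(axis=0, keepdims=True)
    # Average of mean-removed patterns weighted by the bucket fluctuations.
    return np.tensordot(b, p, axes=(0, 0)) / len(bucket)

# Per-channel recovery with color-encoded patterns (illustrative):
# red_img = ghost_image(red_patterns, bucket)
```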
Study on multispectral imaging detection and recognition
NASA Astrophysics Data System (ADS)
Jun, Wang; Na, Ding; Gao, Jiaobo; Yu, Hu; Jun, Wu; Li, Junna; Zheng, Yawei; Fei, Gao; Sun, Kefeng
2009-07-01
Multispectral imaging detection technology uses the spectral and spatial distribution of target radiation, and the relation between spectra and images, for target detection and remote sensing measurement. Its characteristics are multiple channels, narrow bandwidth, a large amount of information, and high accuracy. The ability to detect targets in cluttered, camouflaged, concealed and deceptive environments is thereby improved. At present, spectral imaging technology in the multispectral and hyperspectral ranges is developing rapidly. Multispectral imaging equipment on unmanned aerial vehicles can be used for mine detection, intelligence, surveillance and reconnaissance. Imaging spectrometers operating in the MWIR and LWIR have already been applied in the remote sensing and military fields in advanced countries. This paper presents multispectral imaging technology that can enhance the reflectance, scattering and radiation contrast of artificial targets against natural backgrounds. Targets in complex backgrounds and camouflaged/stealth targets can be effectively identified. Experimental results and spectral imaging data are presented.
Multispectral Imaging for Determination of Astaxanthin Concentration in Salmonids
Dissing, Bjørn S.; Nielsen, Michael E.; Ersbøll, Bjarne K.; Frosch, Stina
2011-01-01
Multispectral imaging has been evaluated for characterization of the concentration of a specific carotenoid pigment, astaxanthin. 59 fillets of rainbow trout, Oncorhynchus mykiss, were imaged using a rapid multispectral imaging device for quantitative analysis. The multispectral imaging device captures reflection properties in 19 distinct wavelength bands, prior to determination of the true concentration of astaxanthin. The samples ranged from 0.20 to 4.34 µg per g fish. A PLSR model was calibrated to predict astaxanthin concentration from novel images, and showed good results with an RMSEP of 0.27. For comparison, a similar model was built for normal color images, which yielded an RMSEP of 0.45. The acquisition speed of the multispectral imaging system and the accuracy of the PLSR model obtained suggest this method as a promising technique for rapid in-line estimation of astaxanthin concentration in rainbow trout fillets. PMID:21573000
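The calibration and validation step reported above (PLS regression from 19-band spectra to measured concentration, scored by RMSEP) can be sketched generically; the number of latent variables below is an illustrative choice, not the study's.

```python
# Sketch of a PLS regression calibration with RMSEP on an independent test set.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

def calibrate_astaxanthin(spectra_train, y_train, spectra_test, y_test,
                          n_components=5):
    """spectra_*: (n_samples, 19) mean reflectance spectra; y_*: concentrations."""
    pls = PLSRegression(n_components=n_components)
    pls.fit(spectra_train, y_train)
    y_pred = pls.predict(spectra_test).ravel()
    rmsep = np.sqrt(mean_squared_error(y_test, y_pred))
    return pls, rmsep
```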
Surface temperature statistics over Los Angeles - The influence of land use
NASA Technical Reports Server (NTRS)
Dousset, Benedicte
1991-01-01
Surface temperature statistics from 84 NOAA AVHRR (Advanced Very High Resolution Radiometer) satellite images of the Los Angeles basin are interpreted as functions of the corresponding urban land-cover classified from a multispectral SPOT image. Urban heat islands observed in the temperature statistics correlate well with the distribution of industrial and fully built areas. Small cool islands coincide with highly watered parks and golf courses. There is a significant negative correlation between the afternoon surface temperature and a vegetation index computed from the SPOT image.
MTI science, data products, and ground-data processing overview
NASA Astrophysics Data System (ADS)
Szymanski, John J.; Atkins, William H.; Balick, Lee K.; Borel, Christoph C.; Clodius, William B.; Christensen, R. Wynn; Davis, Anthony B.; Echohawk, J. C.; Galbraith, Amy E.; Hirsch, Karen L.; Krone, James B.; Little, Cynthia K.; McLachlan, Peter M.; Morrison, Aaron; Pollock, Kimberly A.; Pope, Paul A.; Novak, Curtis; Ramsey, Keri A.; Riddle, Emily E.; Rohde, Charles A.; Roussel-Dupre, Diane C.; Smith, Barham W.; Smith, Kathy; Starkovich, Kim; Theiler, James P.; Weber, Paul G.
2001-08-01
The mission of the Multispectral Thermal Imager (MTI) satellite is to demonstrate the efficacy of highly accurate multispectral imaging for passive characterization of urban and industrial areas, as well as sites of environmental interest. The satellite makes top-of-atmosphere radiance measurements that are subsequently processed into estimates of surface properties such as vegetation health, temperatures, material composition and others. The MTI satellite also provides simultaneous data for atmospheric characterization at high spatial resolution. To utilize these data the MTI science program has several coordinated components, including modeling, comprehensive ground-truth measurements, image acquisition planning, data processing and data interpretation and analysis. Algorithms have been developed to retrieve a multitude of physical quantities and these algorithms are integrated in a processing pipeline architecture that emphasizes automation, flexibility and programmability. In addition, the MTI science team has produced detailed site, system and atmospheric models to aid in system design and data analysis. This paper provides an overview of the MTI research objectives, data products and ground data processing.
Improving multispectral satellite image compression using onboard subpixel registration
NASA Astrophysics Data System (ADS)
Albinet, Mathieu; Camarero, Roberto; Isnard, Maxime; Poulet, Christophe; Perret, Jokin
2013-09-01
Future CNES Earth observation missions will have to deal with an ever increasing telemetry data rate due to improvements in resolution and the addition of spectral bands. Current CNES image compressors implement a discrete wavelet transform (DWT) followed by a bit plane encoding (BPE), but only on a mono-spectral basis, and do not profit from the multispectral redundancy of the observed scenes. Recent CNES studies have proven a substantial gain on the achievable compression ratio, +20% to +40% on selected scenarios, by implementing a multispectral compression scheme based on a Karhunen-Loeve transform (KLT) followed by the classical DWT+BPE. But such results can be achieved only on perfectly registered bands; a registration error as small as 0.5 pixel ruins all the benefits of multispectral compression. In this work, we first study the possibility of implementing a multi-band subpixel onboard registration based on registration grids generated on-the-fly by the satellite attitude control system and simplified resampling and interpolation techniques. Indeed, band registration is usually performed on the ground using sophisticated techniques too computationally intensive for onboard use. This fully quantized algorithm is tuned to meet acceptable registration performance within stringent image quality criteria, with the objective of onboard real-time processing. In a second part, we describe an FPGA implementation developed to evaluate the design complexity and, by extrapolation, the data rate achievable on a space-qualified ASIC. Finally, we present the impact of this approach on the processing chain, not only onboard but also on ground, and the impacts on the design of the instrument.
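The spectral decorrelation at the heart of the KLT stage is equivalent to a PCA across bands: most of the multispectral energy is packed into the first few components, which is precisely what a band misregistration of around 0.5 pixel destroys. The sketch below is a generic band-wise KLT under an assumed (bands, rows, cols) layout, not the CNES onboard implementation.

```python
# Karhunen-Loeve transform (PCA) across spectral bands prior to per-band
# wavelet compression.
import numpy as np

def klt_bands(ms):
    """ms: (bands, rows, cols). Returns decorrelated components, basis, mean."""
    b, r, c = ms.shape
    X = ms.reshape(b, -1).astype(np.float64)
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    cov = Xc @ Xc.T / Xc.shape[1]               # band-to-band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    order = eigvals.argsort()[::-1]             # strongest component first
    components = eigvecs[:, order].T @ Xc       # decorrelated "bands"
    return components.reshape(b, r, c), eigvecs[:, order], mean
```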
Lock-in imaging with synchronous digital mirror demodulation
NASA Astrophysics Data System (ADS)
Bush, Michael G.
2010-04-01
Lock-in imaging enables high contrast imaging in adverse conditions by exploiting a modulated light source and homodyne detection. We report results on a patent-pending lock-in imaging system fabricated from commercial-off-the-shelf parts utilizing standard cameras and a spatial light modulator. By leveraging the capabilities of standard parts we are able to present a low cost, high resolution, high sensitivity camera with applications in search and rescue, identification friend or foe (IFF), and covert surveillance. Different operating modes allow the same instrument to be utilized for dual band multispectral imaging or high dynamic range imaging, increasing the flexibility in different operational settings.
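The homodyne detection principle mentioned above amounts to demodulating each pixel's time series against in-phase and quadrature references at the known modulation frequency, which suppresses unmodulated background light. The sketch below is a generic software demodulation of a frame stack; the frame rate and modulation frequency are assumptions, and the actual system performs the demodulation with a spatial light modulator rather than in post-processing.

```python
# Software lock-in (homodyne) demodulation of an image stack.
import numpy as np

def lockin_demodulate(frames, frame_rate_hz, mod_freq_hz):
    """frames: (n_frames, rows, cols); returns amplitude and phase images."""
    n = frames.shape[0]
    t = np.arange(n) / frame_rate_hz
    ref_i = np.cos(2 * np.pi * mod_freq_hz * t)
    ref_q = np.sin(2 * np.pi * mod_freq_hz * t)
    # Project the time axis of the stack onto the two reference signals.
    I = 2.0 / n * np.tensordot(ref_i, frames, axes=(0, 0))
    Q = 2.0 / n * np.tensordot(ref_q, frames, axes=(0, 0))
    return np.hypot(I, Q), np.arctan2(Q, I)
```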
Nondestructive prediction of pork freshness parameters using multispectral scattering images
NASA Astrophysics Data System (ADS)
Tang, Xiuying; Li, Cuiling; Peng, Yankun; Chao, Kuanglin; Wang, Mingwu
2012-05-01
Optical technology is an important and emerging technology for non-destructive and rapid detection of pork freshness. This paper studied the possibility of using multispectral imaging and scattering characteristics to predict the freshness parameters of pork meat. The pork freshness parameters selected for prediction included total volatile basic nitrogen (TVB-N), color parameters (L*, a*, b*), and pH value. Multispectral scattering images were obtained from the pork sample surface by a multispectral imaging system developed in-house; they were acquired at selected narrow wavebands whose center wavelengths were 517, 550, 560, 580, 600, 760, 810 and 910 nm. In order to extract scattering characteristics from multispectral images at multiple wavelengths, a Lorentzian distribution (LD) function with four parameters (a: scattering asymptotic value; b: scattering peak; c: scattering width; d: scattering slope) was used to fit the scattering curves at the selected wavelengths. The results show that the multispectral imaging technique combined with scattering characteristics is promising for predicting the freshness parameters of pork meat.
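The four-parameter Lorentzian fit can be sketched with a standard nonlinear least-squares call. One commonly used parameterization is R(x) = a + b / (1 + (x/c)^d); this exact functional form is an assumption, since the abstract only names the four parameters.

```python
# Fit a four-parameter Lorentzian-type profile to a radial scattering curve.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian4(x, a, b, c, d):
    """a: asymptote, b: peak, c: width, d: slope."""
    return a + b / (1.0 + (x / c) ** d)

def fit_scattering_profile(distance_mm, reflectance):
    p0 = (reflectance.min(), reflectance.max(), 1.0, 2.0)  # rough start values
    params, _ = curve_fit(lorentzian4, distance_mm, reflectance, p0=p0,
                          maxfev=10000)
    return params    # (a, b, c, d) at one waveband
```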
Impervious surfaces mapping using high resolution satellite imagery
NASA Astrophysics Data System (ADS)
Shirmeen, Tahmina
In recent years, impervious surfaces have emerged not only as an indicator of the degree of urbanization, but also as an indicator of environmental quality. As impervious surface area increases, storm water runoff increases in velocity, quantity, temperature and pollution load. Any of these attributes can contribute to the degradation of natural hydrology and water quality. Various image processing techniques have been used to identify impervious surfaces; however, most of the existing impervious surface mapping tools used moderate resolution imagery. In this project, the potential of standard image processing techniques to generate impervious surface data for change detection analysis using high-resolution satellite imagery was evaluated. The city of Oxford, MS was selected as the study site for this project. Standard image processing techniques, including Normalized Difference Vegetation Index (NDVI), Principal Component Analysis (PCA), a combination of NDVI and PCA, and image classification algorithms, were used to generate impervious surfaces from multispectral IKONOS and QuickBird imagery acquired in both leaf-on and leaf-off conditions. Accuracy assessments were performed, using truth data generated by manual classification, with Kappa statistics and Zonal statistics to select the most appropriate image processing techniques for impervious surface mapping. The performance of selected image processing techniques was enhanced by incorporating the Soil Brightness Index (SBI) and Greenness Index (GI) derived from Tasseled Cap Transformed (TCT) IKONOS and QuickBird imagery. A time series of impervious surfaces for the time frame between 2001 and 2007 was generated using the refined image processing techniques to analyze the changes in IS in Oxford. It was found that NDVI and the combined NDVI–PCA methods are the most suitable image processing techniques for mapping impervious surfaces in leaf-off and leaf-on conditions respectively, using high resolution multispectral imagery. It was also found that IS data generated by these techniques can be refined by removing the conflicting dry soil patches using SBI and GI obtained from TCT of the same imagery used for IS data generation. The change detection analysis of the IS time series shows that Oxford experienced major changes in IS from 2001 to 2004 and from 2006 to 2007.
The Rich Color Variations of Pluto
2015-09-24
NASA's New Horizons spacecraft captured this high-resolution enhanced color view of Pluto on July 14, 2015. The image combines blue, red and infrared images taken by the Ralph/Multispectral Visual Imaging Camera (MVIC). Pluto's surface sports a remarkable range of subtle colors, enhanced in this view to a rainbow of pale blues, yellows, oranges, and deep reds. Many landforms have their own distinct colors, telling a complex geological and climatological story that scientists have only just begun to decode. The image resolves details and colors on scales as small as 0.8 miles (1.3 kilometers). http://photojournal.jpl.nasa.gov/catalog/PIA19952
In-flight edge response measurements for high-spatial-resolution remote sensing systems
NASA Astrophysics Data System (ADS)
Blonski, Slawomir; Pagnutti, Mary A.; Ryan, Robert; Zanoni, Vickie
2002-09-01
In-flight measurements of spatial resolution were conducted as part of the NASA Scientific Data Purchase Verification and Validation process. Characterization included remote sensing image products with ground sample distance of 1 meter or less, such as those acquired with the panchromatic imager onboard the IKONOS satellite and the airborne ADAR System 5500 multispectral instrument. Final image products were used to evaluate the effects of both the image acquisition system and image post-processing. Spatial resolution was characterized by full width at half maximum of an edge-response-derived line spread function. The edge responses were analyzed using the tilted-edge technique that overcomes the spatial sampling limitations of the digital imaging systems. As an enhancement to existing algorithms, the slope of the edge response and the orientation of the edge target were determined by a single computational process. Adjacent black and white square panels, either painted on a flat surface or deployed as tarps, formed the ground-based edge targets used in the tests. Orientation of the deployable tarps was optimized beforehand, based on simulations of the imaging system. The effects of such factors as acquisition geometry, temporal variability, Modulation Transfer Function compensation, and ground sample distance on spatial resolution were investigated.
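The tilted-edge analysis described above reduces to building an oversampled edge spread function from pixel values projected onto the distance from the edge, deriving the line spread function, and reporting its full width at half maximum. The sketch below fits a Gaussian-blurred step model to the edge samples; the Gaussian edge model is an assumption (the cited work derives the LSF directly from the measured edge response).

```python
# Edge spread function fit and FWHM of the implied line spread function.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def esf_model(x, lo, hi, x0, sigma):
    """Step of height (hi - lo) blurred by a Gaussian of width sigma."""
    return lo + (hi - lo) * 0.5 * (1.0 + erf((x - x0) / (sigma * np.sqrt(2))))

def edge_fwhm(distance_px, dn_values):
    """distance_px: signed distance of each pixel to the fitted edge; dn_values: DN."""
    p0 = (dn_values.min(), dn_values.max(), 0.0, 1.0)
    (_, _, _, sigma), _ = curve_fit(esf_model, distance_px, dn_values, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)   # FWHM of the LSF
```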
Baikejiang, Reheman; Zhang, Wei; Li, Changqing
2017-01-01
Diffuse optical tomography (DOT) has attracted attention in the last two decades due to its intrinsic sensitivity in imaging chromophores of tissues such as hemoglobin, water, and lipid. However, DOT has not been clinically accepted yet due to its low spatial resolution caused by strong optical scattering in tissues. Structural guidance provided by an anatomical imaging modality enhances DOT imaging substantially. Here, we propose a computed tomography (CT) guided multispectral DOT imaging system for breast cancer imaging. To validate its feasibility, we have built a prototype DOT imaging system which consists of a laser at the wavelength of 650 nm and an electron multiplying charge coupled device (EMCCD) camera. We have validated the CT guided DOT reconstruction algorithms with numerical simulations and phantom experiments, in which different imaging setup parameters, such as the number of projection measurements and the width of the measurement patch, have been investigated. Our results indicate that an air-cooled EMCCD camera is good enough for transmission-mode DOT imaging. We have also found that measurements at six angular projections are sufficient for DOT to reconstruct optical targets with 2 and 4 times absorption contrast when the CT guidance is applied. Finally, we describe our future research plan on the integration of a multispectral DOT imaging system into a breast CT scanner.
Lunar and Planetary Science XXXV: Lunar Remote Sensing: Seeing the Big Picture
NASA Technical Reports Server (NTRS)
2004-01-01
The session "Lunar Remote Sensing: Seeing the Big Picture" contained the following reports:Approaches for Approximating Topography in High Resolution, Multispectral Data; Verification of Quality and Compatibility for the Newly Calibrated Clementine NIR Data Set; Near Infrared Spectral Properties of Selected Nearside and Farside Sites ; Global Comparisons of Mare Volcanism from Clementine Near-Infrared Data; Testing the Relation Between UVVIS Color and TiO2 Composition in the Lunar Maria; Color Reflectance Trends in the Mare: Implications for Mapping Iron with Multispectral Images ; The Composition of the Lunar Megaregolith: Some Initial Results from Global Mapping; Global Images of Mg-Number Derived from Clementine Data; The Origin of Lunar Crater Rays; Properties of Lunar Crater Ejecta from New 70-cm Radar Observations ; Permanent Sunlight at the Lunar North Pole; and ESA s SMART-1 Mission to the Moon: Goals, Status and First Results.
NASA Astrophysics Data System (ADS)
Mansoor, Awais; Robinson, J. Paul; Rajwa, Bartek
2009-02-01
Modern automated microscopic imaging techniques such as high-content screening (HCS), high-throughput screening, 4D imaging, and multispectral imaging are capable of producing hundreds to thousands of images per experiment. For quick retrieval, fast transmission, and storage economy, these images should be saved in a compressed format. A considerable number of techniques based on interband and intraband redundancies of multispectral images have been proposed in the literature for the compression of multispectral and 3D temporal data. However, these works have been carried out mostly in the fields of remote sensing and video processing. Compression for multispectral optical microscopy imaging, with its own set of specialized requirements, has remained under-investigated. Digital photography-oriented 2D compression techniques like JPEG (ISO/IEC IS 10918-1) and JPEG2000 (ISO/IEC 15444-1) are generally adopted for multispectral images, which optimize visual quality but do not necessarily preserve the integrity of scientific data, not to mention the suboptimal performance of 2D compression techniques in compressing 3D images. Herein we report our work on a new low bit-rate wavelet-based compression scheme for multispectral fluorescence biological imaging. The sparsity of significant coefficients in high-frequency subbands of multispectral microscopic images is found to be much greater than in natural images; therefore a quad-tree concept such as Said et al.'s SPIHT, along with the correlation of insignificant wavelet coefficients, has been proposed to further exploit redundancy at high-frequency subbands. Our work proposes a 3D extension to SPIHT, incorporating a new hierarchical inter- and intra-spectral relationship amongst the coefficients of the 3D wavelet-decomposed image. The new relationship, apart from adopting the parent-child relationship of classical SPIHT, also brings forth a conditional "sibling" relationship by relating only the insignificant wavelet coefficients of subbands at the same level of decomposition. The insignificant quadtrees in different subbands in the high-frequency subband class are coded by a combined function to reduce redundancy. A number of experiments conducted on microscopic multispectral images have shown promising results for the proposed method over current state-of-the-art image-compression techniques.
Gyrocopter-Based Remote Sensing Platform
NASA Astrophysics Data System (ADS)
Weber, I.; Jenal, A.; Kneer, C.; Bongartz, J.
2015-04-01
In this paper the development of a lightweight and highly modularized airborne sensor platform for remote sensing applications utilizing a gyrocopter as a carrier platform is described. The current sensor configuration consists of a high resolution DSLR camera for VIS-RGB recordings. As a second sensor modality, a snapshot hyperspectral camera was integrated in the aircraft. Moreover, a custom-developed thermal imaging system composed of a VIS-PAN camera and a LWIR camera is used for aerial recordings in the thermal infrared range. Furthermore, another custom-developed, highly flexible imaging system for high resolution multispectral image acquisition with up to six spectral bands in the VIS-NIR range is presented. The performance of the overall system was tested during several flights with all sensor modalities, and the precalculated demands with respect to spatial resolution and reliability were validated. The collected data sets were georeferenced, georectified, orthorectified, and then stitched into mosaics.
MULTISPECTRAL IDENTIFICATION OF ALKYL AND CHLOROALKYL PHOSPHATES FROM AN INDUSTRIAL EFFLUENT
Multispectral techniques (gas chromatography combined with low and high resolution electron-impact mass spectrometry, low and high resolution chemical ionization mass spectrometry, and Fourier transform infrared spectroscopy) were used to identify 13 alkyl and chloroalkyl pho...
Lange, Maximilian; Dechant, Benjamin; Rebmann, Corinna; Vohland, Michael; Cuntz, Matthias; Doktor, Daniel
2017-08-11
Quantifying the accuracy of remote sensing products is a timely endeavor given the rapid increase in Earth observation missions. A validation site for Sentinel-2 products was hence established in central Germany. Automatic multispectral and hyperspectral sensor systems were installed in parallel with an existing eddy covariance flux tower, providing spectral information of the vegetation present at high temporal resolution. Normalized Difference Vegetation Index (NDVI) values from ground-based hyperspectral and multispectral sensors were compared with NDVI products derived from Sentinel-2A and Moderate-resolution Imaging Spectroradiometer (MODIS). The influence of different spatial and temporal resolutions was assessed. High correlations and similar phenological patterns between in situ and satellite-based NDVI time series demonstrated the reliability of satellite-based phenological metrics. Sentinel-2-derived metrics showed better agreement with in situ measurements than MODIS-derived metrics. Dynamic filtering with the best index slope extraction algorithm was nevertheless beneficial for Sentinel-2 NDVI time series despite the availability of quality information from the atmospheric correction procedure.
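For reference, the NDVI compared in this validation study is the standard normalized difference of near-infrared and red reflectance. The sketch below shows the computation on placeholder arrays; the band values are illustrative, not Sentinel-2 or MODIS data.

```python
# Minimal NDVI sketch: NDVI = (NIR - Red) / (NIR + Red).
# The reflectance arrays are illustrative placeholders.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    out = np.zeros_like(denom)
    valid = denom != 0
    out[valid] = (nir[valid] - red[valid]) / denom[valid]
    return out

red = np.array([[0.05, 0.10], [0.20, 0.30]])
nir = np.array([[0.40, 0.35], [0.25, 0.30]])
print(ndvi(nir, red))  # values near +1 indicate dense green vegetation
```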
Lange, Maximilian; Rebmann, Corinna; Cuntz, Matthias; Doktor, Daniel
2017-01-01
Quantifying the accuracy of remote sensing products is a timely endeavor given the rapid increase in Earth observation missions. A validation site for Sentinel-2 products was hence established in central Germany. Automatic multispectral and hyperspectral sensor systems were installed in parallel with an existing eddy covariance flux tower, providing spectral information of the vegetation present at high temporal resolution. Normalized Difference Vegetation Index (NDVI) values from ground-based hyperspectral and multispectral sensors were compared with NDVI products derived from Sentinel-2A and Moderate-resolution Imaging Spectroradiometer (MODIS). The influence of different spatial and temporal resolutions was assessed. High correlations and similar phenological patterns between in situ and satellite-based NDVI time series demonstrated the reliability of satellite-based phenological metrics. Sentinel-2-derived metrics showed better agreement with in situ measurements than MODIS-derived metrics. Dynamic filtering with the best index slope extraction algorithm was nevertheless beneficial for Sentinel-2 NDVI time series despite the availability of quality information from the atmospheric correction procedure. PMID:28800065
Multispectral image fusion for target detection
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-09-01
Various methods of performing multi-spectral image fusion have been suggested, mostly at the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in an experiment using MSSF against two established methods, averaging and Principal Component Analysis (PCA), and against its two source bands, visible and infrared. The task studied was target detection in a cluttered environment. MSSF proved superior to the other fusion methods. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.
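The two baseline pixel-level fusion methods mentioned above, averaging and PCA, can be sketched in a few lines; the arrays below stand in for co-registered visible and infrared frames, and the MSSF feature-level method itself is not reproduced here.

```python
# Sketch of the two baseline pixel-level fusion methods (averaging and PCA);
# the source bands are synthetic stand-ins for co-registered imagery.
import numpy as np

rng = np.random.default_rng(1)
visible = rng.random((64, 64))
infrared = rng.random((64, 64))

# 1) Averaging fusion.
fused_avg = 0.5 * (visible + infrared)

# 2) PCA fusion: project the two bands onto their first principal component.
stack = np.stack([visible.ravel(), infrared.ravel()], axis=1)
stack_c = stack - stack.mean(axis=0)
_, _, vt = np.linalg.svd(stack_c, full_matrices=False)
fused_pca = (stack_c @ vt[0]).reshape(visible.shape)

print(fused_avg.shape, fused_pca.shape)
```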
MOVING BEYOND COLOR: THE CASE FOR MULTISPECTRAL IMAGING IN BRIGHTFIELD PATHOLOGY.
Cukierski, William J; Qi, Xin; Foran, David J
2009-01-01
A multispectral camera is capable of imaging a histologic slide at narrow bandwidths over the range of the visible spectrum. While several uses for multispectral imaging (MSI) have been demonstrated in pathology [1, 2], there is no unified consensus over when and how MSI might benefit automated analysis [3, 4]. In this work, we use a linear-algebra framework to investigate the relationship between the spectral image and its standard-image counterpart. The multispectral "cube" is treated as an extension of a traditional image in a high-dimensional color space. The concept of metamers is introduced and used to derive regions of the visible spectrum where MSI may provide an advantage. Furthermore, histological stains which are amenable to analysis by MSI are reported. We show the Commission internationale de l'éclairage (CIE) 1931 transformation from spectrum to color is non-neighborhood preserving. Empirical results are demonstrated on multispectral images of peripheral blood smears.
The Effectiveness of Hydrothermal Alteration Mapping based on Hyperspectral Data in Tropical Region
NASA Astrophysics Data System (ADS)
Muhammad, R. R. D.; Saepuloh, A.
2016-09-01
Hyperspectral remote sensing can be used to characterize targets at the Earth's surface based on their spectra. This capability is useful for mapping and characterizing the distribution of host rocks, alteration assemblages, and minerals. In contrast to multispectral sensors, hyperspectral sensors identify targets with high spectral resolution. The Wayang Windu geothermal field in West Java, Indonesia was selected as the study area due to the existence of surface manifestations and its densely vegetated environment. The effectiveness of hyperspectral remote sensing in a tropical region was therefore the objective of this study. The Spectral Angle Mapper (SAM) method was used to detect the occurrence of clay minerals spatially from Hyperion data. The SAM reference reflectance spectra were obtained from field observations of altered materials. To assess the effectiveness of the hyperspectral data, we used multispectral data from Landsat-8 for comparison. The comparison of the SAM rule images from Hyperion and Landsat-8 showed that the hyperspectral data were more accurate than the multispectral data. The Hyperion SAM rule images showed lower values than those of Landsat-8, and the result derived from Hyperion was about 24% better. This indicates that hyperspectral remote sensing is preferable for mineral mapping even though vegetation covered the study area.
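The SAM rule value referred to above is simply the angle between each pixel spectrum and a reference spectrum; smaller angles indicate a closer match. A minimal sketch follows, with synthetic spectra standing in for the Hyperion data and field-measured references.

```python
# Spectral Angle Mapper (SAM) sketch: smaller angles mean a pixel spectrum is
# closer to the reference spectrum of an altered material. Spectra are synthetic.
import numpy as np

def spectral_angle(pixels: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """pixels: (n_pixels, n_bands); reference: (n_bands,). Returns angles in radians."""
    dot = pixels @ reference
    norms = np.linalg.norm(pixels, axis=1) * np.linalg.norm(reference)
    cos = np.clip(dot / norms, -1.0, 1.0)
    return np.arccos(cos)

reference = np.array([0.20, 0.35, 0.50, 0.45, 0.30])       # e.g. a clay-mineral spectrum
pixels = np.array([[0.19, 0.36, 0.52, 0.44, 0.29],          # similar to the reference
                   [0.60, 0.10, 0.05, 0.40, 0.90]])         # dissimilar
print(np.degrees(spectral_angle(pixels, reference)))
```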
Multispectral and hyperspectral measurements of soldier's camouflage equipment
NASA Astrophysics Data System (ADS)
Kastek, Mariusz; Piątkowski, Tadeusz; Dulski, Rafal; Chamberland, Martin; Lagueux, Philippe; Farley, Vincent
2012-06-01
In today's electro-optic warfare era, it is vital for a nation's defense to possess the most advanced measurement and signature intelligence (MASINT) capabilities. This is critical to gaining a strategic advantage in the planning of military operations and deployments. The thermal infrared region of the electromagnetic spectrum is a key region exploited for infrared reconnaissance and surveillance missions. The Military University of Technology has conducted an intensive measurement campaign on various soldier camouflage devices with the aim of building a database of infrared signatures. Infrared hyperspectral and broadband/multispectral imaging sensors have become key technologies for performing such signature measurements. The Telops Hyper-Cam LW product represents a unique commercial offering with outstanding performance and versatility for the collection of hyperspectral infrared images. The Hyper-Cam allows infrared imagery of a target (320 × 256 pixels) at very high spectral resolution (down to 0.25 cm-1). Moreover, the Military University of Technology has made use of a suite of scientific-grade commercial infrared cameras to further measure and assess the targets from a broadband/multispectral perspective. The experiment concept and measurement results are presented in this paper.
NASA Astrophysics Data System (ADS)
Awumah, A.; Mahanti, P.; Robinson, M. S.
2017-12-01
Image fusion is often used in Earth-based remote sensing applications to merge spatial details from a high-resolution panchromatic (Pan) image with the color information from a lower-resolution multi-spectral (MS) image, resulting in a high-resolution multi-spectral image (HRMS). Previously, the performance of six well-known image fusion methods was compared using Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) and Wide Angle Camera (WAC) images (1). Results showed the Intensity-Hue-Saturation (IHS) method provided the best spatial performance, but deteriorated the spectral content. In general, there was a trade-off between spatial enhancement and spectral fidelity in the fusion process; the more spatial detail from the Pan fused with the MS image, the more spectrally distorted the final HRMS. In this work, we control the amount of spatial detail fused (from the LROC NAC images to WAC images) using a controlled IHS method (2), to investigate the spatial variation in spectral distortion on fresh crater ejecta. In the controlled IHS method (2), the percentage of the Pan component merged with the MS is varied. The amount of spatial detail from the Pan that is used is determined by a control parameter whose value may be varied from 1 (no Pan utilized) to infinity (entire Pan utilized). An HRMS color composite image (red = 415 nm, green = 321/415 nm, blue = 321/360 nm (3)) was used to assess performance (via visual inspection and metric-based evaluations) at each tested value of the control parameter (from 1 to 10, beyond which spectral distortion saturates, in 0.01 increments) within three regions: crater interiors, ejecta blankets, and the background material surrounding the craters. Increasing the control parameter introduced increased spatial sharpness and spectral distortion in all regions, but to varying degrees. Crater interiors suffered the most color distortion, while ejecta experienced less color distortion. The controlled IHS method is therefore desirable for resolution enhancement of fresh crater ejecta; larger values of the control parameter may be used to sharpen MS images of ejecta patterns with less impact on color distortion than in the uncontrolled IHS fusion process. References: (1) Prasun et al. (2016) ISPRS. (2) Choi, Myungjin (2006) IEEE. (3) Denevi et al. (2014) JGR.
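A hedged sketch of IHS-style fusion with a control parameter is shown below. The scaling (t - 1)/t is an assumed parameterization chosen only because it reproduces the limits described in the abstract (t = 1 injects no Pan detail, t approaching infinity injects all of it); it is not necessarily the exact formulation of Choi (2006), and the arrays are synthetic rather than LROC data.

```python
# Hedged sketch of controlled IHS-style fusion. The (t - 1) / t scaling is an
# assumption matching the described limits, not necessarily Choi (2006)'s formula.
import numpy as np

def controlled_ihs(ms: np.ndarray, pan: np.ndarray, t: float) -> np.ndarray:
    """ms: (rows, cols, bands) upsampled multispectral; pan: (rows, cols) panchromatic."""
    intensity = ms.mean(axis=2)            # simple intensity component
    weight = (t - 1.0) / t                 # fraction of Pan detail injected
    detail = weight * (pan - intensity)
    return ms + detail[..., None]          # add the same detail to every band

rng = np.random.default_rng(2)
ms = rng.random((32, 32, 3))
pan = rng.random((32, 32))
for t in (1.0, 2.0, 10.0):
    hrms = controlled_ihs(ms, pan, t)
    print(t, float(np.abs(hrms - ms).mean()))   # injected detail grows with t
```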
Smartphone-based multispectral imaging: system development and potential for mobile skin diagnosis
Kim, Sewoong; Cho, Dongrae; Kim, Jihun; Kim, Manjae; Youn, Sangyeon; Jang, Jae Eun; Je, Minkyu; Lee, Dong Hun; Lee, Boreom; Farkas, Daniel L.; Hwang, Jae Youn
2016-01-01
We investigate the potential of mobile smartphone-based multispectral imaging for the quantitative diagnosis and management of skin lesions. Recently, various mobile devices such as smartphones have emerged as healthcare tools, and they have been applied to the early diagnosis of nonmalignant and malignant skin diseases. In particular, when combined with an advanced optical imaging technique such as multispectral imaging and analysis, they could benefit the early diagnosis of such skin diseases and further quantitative prognosis monitoring after treatment at home. We therefore demonstrate here the development of a highly portable smartphone-based multispectral imaging system and its potential for mobile skin diagnosis. The results suggest that smartphone-based multispectral imaging and analysis has great potential as a healthcare tool for quantitative mobile skin diagnosis. PMID:28018743
Compression of regions in the global advanced very high resolution radiometer 1-km data set
NASA Technical Reports Server (NTRS)
Kess, Barbara L.; Steinwand, Daniel R.; Reichenbach, Stephen E.
1994-01-01
The global advanced very high resolution radiometer (AVHRR) 1-km data set is a 10-band image produced at USGS' EROS Data Center for the study of the world's land surfaces. The image contains masked regions for non-land areas which are identical in each band but vary between data sets. They comprise over 75 percent of this 9.7 gigabyte image. The mask is compressed once and stored separately from the land data which is compressed for each of the 10 bands. The mask is stored in a hierarchical format for multi-resolution decompression of geographic subwindows of the image. The land for each band is compressed by modifying a method that ignores fill values. This multi-spectral region compression efficiently compresses the region data and precludes fill values from interfering with land compression statistics. Results show that the masked regions in a one-byte test image (6.5 Gigabytes) compress to 0.2 percent of the 557,756,146 bytes they occupy in the original image, resulting in a compression ratio of 89.9 percent for the entire image.
[A spatial adaptive algorithm for endmember extraction on multispectral remote sensing image].
Zhu, Chang-Ming; Luo, Jian-Cheng; Shen, Zhan-Feng; Li, Jun-Li; Hu, Xiao-Dong
2011-10-01
Because the convex cone analysis (CCA) method can extract only a limited number of endmembers from multispectral imagery, this paper proposes a new endmember extraction method based on spatially adaptive spectral feature analysis of multispectral remote sensing images, using spatial clustering and image slicing. Firstly, in order to remove spatial and spectral redundancies, the principal component analysis (PCA) algorithm was used to lower the dimensionality of the multispectral data. Secondly, the iterative self-organizing data analysis technique algorithm (ISODATA) was used to cluster the image according to the similarity of the pixel spectra. Then, through clustering post-processing and the merging of small clusters, the whole image was divided into several blocks (tiles). Lastly, the number of endmembers was determined according to the landscape complexity of the image blocks and analysis of the scatter diagrams, and the endmembers were extracted using the hourglass algorithm. An endmember extraction experiment on TM multispectral imagery showed that the method can extract endmember spectra from multispectral imagery effectively. Moreover, the method overcomes the limitation on the number of endmembers and improves the accuracy of endmember extraction. The method provides a new way to extract endmembers from multispectral images.
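The preprocessing chain described above (dimensionality reduction followed by spectral clustering) can be sketched as follows. KMeans is used here only as a simple stand-in for ISODATA, which adds cluster split/merge rules, and the image-slicing and hourglass endmember-extraction steps are not reproduced; the 6-band image is synthetic.

```python
# Sketch of PCA dimensionality reduction plus clustering of pixel spectra.
# KMeans stands in for ISODATA; the hourglass step is not reproduced.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
image = rng.random((100, 100, 6))                 # synthetic 6-band image
pixels = image.reshape(-1, image.shape[-1])       # (n_pixels, n_bands)

reduced = PCA(n_components=3).fit_transform(pixels)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(reduced)
cluster_map = labels.reshape(image.shape[:2])     # blocks/tiles would be built from this
print(np.bincount(labels))
```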
NASA Technical Reports Server (NTRS)
Ross, Kenton W.; Russell, Jeffrey; Ryan, Robert E.
2006-01-01
The success of MODIS (the Moderate Resolution Imaging Spectroradiometer) in creating unprecedented, timely, high-quality data for vegetation and other studies has created great anticipation for data from VIIRS (the Visible/Infrared Imager Radiometer Suite). VIIRS will be carried onboard the joint NASA/Department of Defense/National Oceanic and Atmospheric Administration NPP (NPOESS (National Polar-orbiting Operational Environmental Satellite System) Preparatory Project). Because the VIIRS instruments will have lower spatial resolution than the current MODIS instruments (400 m versus 250 m at nadir for the channels used to generate Normalized Difference Vegetation Index data), scientists need the answer to this question: how will the change in resolution affect vegetation studies? By using simulated VIIRS measurements, this question may be answered before the VIIRS instruments are deployed in space. Using simulated VIIRS products, the U.S. Department of Agriculture and other operational agencies can then modify their decision support systems appropriately in preparation for receipt of actual VIIRS data. VIIRS simulations and validations will be based on the ART (Application Research Toolbox), an integrated set of algorithms and models developed in MATLAB (registered trademark) that enables users to perform a suite of simulations and statistical trade studies on remote sensing systems. Specifically, the ART provides the capability to generate simulated multispectral image products, at various scales, from high-spatial-resolution hyperspectral and/or multispectral image products. The ART uses acquired (real) or synthetic datasets, along with sensor specifications, to create simulated datasets. For existing multispectral sensor systems, the simulated data products are used for comparison, verification, and validation of the simulated system's actual products. VIIRS simulations will be performed using Hyperion and MODIS datasets. The hyperspectral and hyperspatial properties of Hyperion data will be used to produce simulated MODIS and VIIRS products. Hyperion-derived MODIS data will be compared with near-coincident MODIS collects to validate both spectral and spatial synthesis, which will ascertain the accuracy of converting from MODIS to VIIRS. MODIS-derived VIIRS data are needed for global coverage and for the generation of time series for regional and global investigations. These types of simulations will have errors associated with aliasing for some scene types. This study will help quantify these errors and will identify cases where high-quality, MODIS-derived VIIRS data will be available.
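The two basic operations behind such sensor simulations (spectral synthesis of a broader band from narrow bands, and spatial degradation to a coarser ground sample distance) can be sketched as follows. This is not the MATLAB-based ART toolbox; the band weights, sizes, and degradation factor are illustrative assumptions.

```python
# Sketch of band simulation: spectral synthesis as a weighted sum of narrow
# bands, then spatial degradation by block averaging. Not the ART toolbox.
import numpy as np

rng = np.random.default_rng(4)
narrow_bands = rng.random((8, 240, 240))          # hyperspectral-like input (bands, rows, cols)

# Spectral synthesis: weighted sum approximating a broader sensor band.
weights = np.array([0.05, 0.10, 0.20, 0.30, 0.20, 0.10, 0.04, 0.01])
broad_band = np.tensordot(weights, narrow_bands, axes=1)

# Spatial degradation: average non-overlapping 2x2 blocks (halving the resolution).
factor = 2
rows, cols = broad_band.shape
coarse = broad_band.reshape(rows // factor, factor, cols // factor, factor).mean(axis=(1, 3))
print(broad_band.shape, "->", coarse.shape)
```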
NASA Technical Reports Server (NTRS)
Fischer, Erich M.; Pieters, Carle M.; Head, James W.
1992-01-01
Modern visible and near-infrared detectors are critically important for the accurate identification and relative abundance measurement of lunar minerals; however, even a very small number of well-placed visible and near-infrared bandpass channels provide a significant amount of general information about crucial lunar resources. The Galileo Solid State Imaging system (SSI) multispectral data are an important example of this. Al/Si and soil maturity will be discussed as examples of significant general lunar resource information that can be gleaned from moderate spectral resolution visible and near-infrared data with relative ease. Because quantitative-albedo data are necessary for these kinds of analyses, data such as those obtained by Galileo SSI are critical. SSI obtained synoptic digital multispectral image data for both the nearside and farside of the Moon during the first Galileo Earth-Moon encounter in December 1990. The data consist of images through seven filters with bandpasses ranging from 0.40 microns in the ultraviolet to 0.99 microns in the near-infrared. Although these data are of moderate spectral resolution, they still provide information for the following lunar resources: (1) titanium content of mature mare soils based upon the 0.40/0.56-micron (UV/VIS) ratio; (2) mafic mineral abundance based upon the 0.76/0.99-micron ratio; and (3) the maturity or exposure age of the soils based upon the 0.56-0.76-micron continuum and the 0.76/0.99-micron ratio. Within constraints, these moderate spectral resolution visible and near-infrared reflectance data can also provide elemental information such as Al/Si for mature highland soils.
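The band-ratio indicators cited above translate directly into simple arithmetic on co-registered reflectance bands; the sketch below uses placeholder arrays rather than Galileo SSI data.

```python
# Band-ratio sketch for the SSI indicators listed above; reflectance arrays
# are placeholders, not SSI data.
import numpy as np

rng = np.random.default_rng(5)
r040, r056, r076, r099 = (rng.random((50, 50)) + 0.1 for _ in range(4))

uv_vis_ratio = r040 / r056          # sensitive to TiO2 content of mature mare soils
mafic_ratio = r076 / r099           # sensitive to mafic mineral abundance
continuum_slope = r076 - r056       # 0.56-0.76 micron continuum, used with the
                                    # 0.76/0.99 ratio as a maturity indicator
print(uv_vis_ratio.mean(), mafic_ratio.mean(), continuum_slope.mean())
```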
NASA Astrophysics Data System (ADS)
Jayasekare, Ajith S.; Wickramasuriya, Rohan; Namazi-Rad, Mohammad-Reza; Perez, Pascal; Singh, Gaurav
2017-07-01
A continuous update of building information is necessary in today's urban planning. Digital images acquired by remote sensing platforms at appropriate spatial and temporal resolutions provide an excellent data source to achieve this. In particular, high-resolution satellite images are often used to retrieve objects such as rooftops using feature extraction. However, high-resolution images acquired over built-up areas are affected by noise such as shadows, which reduces the accuracy of feature extraction. Feature extraction relies heavily on the reflectance purity of objects, which is difficult to ensure in complex urban landscapes. An attempt was made to increase the reflectance purity of building rooftops affected by shadows. In addition to the multispectral (MS) image, derivatives thereof, namely normalized difference vegetation index and principal component (PC) images, were incorporated in generating the probability image. This hybrid probability image generation ensured that the effect of shadows on rooftop extraction, particularly on light-colored roofs, is largely eliminated. The PC image was also used for image segmentation, which further increased the accuracy compared to segmentation performed on an MS image. Results show that the presented method can achieve higher rooftop extraction accuracy (70.4%) in vegetation-rich urban areas compared to traditional methods.
Eliminate background interference from latent fingerprints using ultraviolet multispectral imaging
NASA Astrophysics Data System (ADS)
Huang, Wei; Xu, Xiaojing; Wang, Guiqiang
2014-02-01
Fingerprints are among the most important evidence at a crime scene. The development of latent fingerprints is one of the most active research areas in forensic science. Recently, multispectral imaging, which has shown great capability in fingerprint development, questioned document examination, and trace evidence examination, has been used in detecting material evidence. This paper studies how to eliminate background interference from latent fingerprints on non-porous and porous surfaces by rotating-filter-wheel ultraviolet multispectral imaging. The results show that background interference can be removed cleanly from latent fingerprints by using multispectral imaging in the ultraviolet band.
Selkowitz, D.J.
2010-01-01
Shrub cover appears to be increasing across many areas of the Arctic tundra biome, and increasing shrub cover in the Arctic has the potential to significantly impact global carbon budgets and the global climate system. For most of the Arctic, however, there is no existing baseline inventory of shrub canopy cover, as existing maps of Arctic vegetation provide little information about the density of shrub cover at a moderate spatial resolution across the region. Remotely-sensed fractional shrub canopy maps can provide this necessary baseline inventory of shrub cover. In this study, we compare the accuracy of fractional shrub canopy (> 0.5 m tall) maps derived from multi-spectral, multi-angular, and multi-temporal datasets from Landsat imagery at 30 m spatial resolution, Moderate Resolution Imaging SpectroRadiometer (MODIS) imagery at 250 m and 500 m spatial resolution, and MultiAngle Imaging Spectroradiometer (MISR) imagery at 275 m spatial resolution for a 1067 km2 study area in Arctic Alaska. The study area is centered at 69° N, ranges in elevation from 130 to 770 m, is composed primarily of rolling topography with gentle slopes less than 10°, and is free of glaciers and perennial snow cover. Shrubs > 0.5 m in height cover 2.9% of the study area and are primarily confined to patches associated with specific landscape features. Reference fractional shrub canopy is determined from in situ shrub canopy measurements and a high spatial resolution IKONOS image swath. Regression tree models are constructed to estimate fractional canopy cover at 250 m using different combinations of input data from Landsat, MODIS, and MISR. Results indicate that multi-spectral data provide substantially more accurate estimates of fractional shrub canopy cover than multi-angular or multi-temporal data. Higher spatial resolution datasets also provide more accurate estimates of fractional shrub canopy cover (aggregated to moderate spatial resolutions) than lower spatial resolution datasets, an expected result for a study area where most shrub cover is concentrated in narrow patches associated with rivers, drainages, and slopes. Including the middle infrared bands available from Landsat and MODIS in the regression tree models (in addition to the four standard visible and near-infrared spectral bands) typically results in a slight boost in accuracy. Including the multi-angular red band data available from MISR in the regression tree models, however, typically boosts accuracy more substantially, resulting in moderate resolution fractional shrub canopy estimates approaching the accuracy of estimates derived from the much higher spatial resolution Landsat sensor. Given the poor availability of snow and cloud-free Landsat scenes in many areas of the Arctic and the promising results demonstrated here by the MISR sensor, MISR may be the best choice for large area fractional shrub canopy mapping in the Alaskan Arctic for the period 2000-2009.
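In the spirit of the regression tree models described above, the sketch below fits a scikit-learn DecisionTreeRegressor to predict a fractional canopy value from band features; the predictors and reference fractions are synthetic, not the Landsat/MODIS/MISR data used in the study.

```python
# Sketch of a regression-tree model for fractional canopy cover using
# scikit-learn; band features and reference fractions are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 500
bands = rng.random((n, 6))                         # e.g. visible, NIR, mid-IR predictors
canopy_fraction = np.clip(0.5 * bands[:, 3] - 0.3 * bands[:, 0]
                          + 0.05 * rng.normal(size=n), 0, 1)

x_tr, x_te, y_tr, y_te = train_test_split(bands, canopy_fraction, random_state=0)
model = DecisionTreeRegressor(max_depth=5, random_state=0).fit(x_tr, y_tr)
print("R^2 on held-out samples:", round(model.score(x_te, y_te), 3))
```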
Fast Lossless Compression of Multispectral-Image Data
NASA Technical Reports Server (NTRS)
Klimesh, Matthew
2006-01-01
An algorithm that effects fast lossless compression of multispectral-image data is based on low-complexity, proven adaptive-filtering algorithms. This algorithm is intended for use in compressing multispectral-image data aboard spacecraft for transmission to Earth stations. Variants of this algorithm could be useful for lossless compression of three-dimensional medical imagery and, perhaps, for compressing image data in general.
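The predictive idea behind many adaptive-filtering lossless compressors can be illustrated with the simplest possible predictor: each sample is predicted from its neighbour and only the residuals are stored, which concentrates values near zero while allowing exact reconstruction. This sketch is only a minimal illustration, not the specific NASA algorithm described above.

```python
# Minimal sketch of predictive lossless coding (previous-pixel predictor),
# not the specific NASA algorithm: store small residuals, reconstruct exactly.
import numpy as np

rng = np.random.default_rng(7)
band = np.cumsum(rng.integers(-2, 3, size=(1, 256)), axis=1)   # smooth synthetic scan line

residuals = np.diff(band, axis=1, prepend=0)   # residual = sample - previous sample
reconstructed = np.cumsum(residuals, axis=1)   # exact inverse of the predictor

assert np.array_equal(reconstructed, band)     # lossless round trip
print("value spread:", np.ptp(band), "residual spread:", np.ptp(residuals))
```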
The Importance of Chaos and Lenticulae on Europa for the JIMO Mission
NASA Technical Reports Server (NTRS)
Spaun, Nicole A.
2003-01-01
The Galileo Solid State Imaging (SSI) experiment provided high-resolution images of Europa's surface allowing identification of surface features barely distinguishable at Voyager's resolution. SSI revealed the visible pitting on Europa's surface to be due to large disrupted features, chaos, and smaller sub-circular patches, lenticulae. Chaos features contain a hummocky matrix material and commonly contain dislocated blocks of ridged plains. Lenticulae are morphologically interrelated and can be divided into three classes: domes, spots, and micro-chaos. Domes are broad, upwarped features that generally do not disrupt the texture of the ridged plains. Spots are areas of low albedo that are generally smooth in texture compared to other units. Micro-chaos are disrupted features with a hummocky matrix material, resembling that observed within chaos regions. Chaos and lenticulae are ubiquitous in the SSI regional map observations, which average approximately 200 meters per pixel (m/pxl) in resolution, and appear in several of the ultra-high resolution, i.e., better than 50 m/pxl, images of Europa as well. SSI also provided a number of multi-spectral observations of chaos and lenticulae. Using this dataset we have undertaken a thorough study of the morphology, size, spacing, stratigraphy, and color of chaos and lenticulae to determine their properties and evaluate models of their formation. Geological mapping indicates that chaos and micro-chaos have a similar internal morphology of in-situ degradation suggesting that a similar process was operating during their formation. The size distribution denotes a dominant size of 4-8 km in diameter for features containing hummocky material (i.e., chaos and micro-chaos). Results indicate a dominant spacing of 15 - 36 km apart. Chaos and lenticulae are generally among the youngest features stratigraphically observed on the surface, suggesting a recent change in resurfacing style. Also, the reddish non-icy materials on Europa's surface have high concentrations in many chaos and lenticulae features. Nonetheless, a complete global map of the distribution of chaos and lenticulae is not possible with the SSI dataset. Only <20% of the surface has been imaged at 200 m/pxl or better resolution, mostly of the near-equatorial regions. Color and ultra-high-res images have much less surface coverage. Thus we suggest that full global imaging of Europa at 200 m/pxl or better resolution, preferably in multi-spectral wavelengths, should be a high priority for the JIMO mission.
NASA Astrophysics Data System (ADS)
Silverglate, Peter R.; Fort, Dennis E.
2004-01-01
CRISM (Compact Reconnaissance Imaging Spectrometer for Mars) is a hyperspectral imager that will be launched on the MRO (Mars Reconnaissance Orbiter) in August 2005. The MRO will circle Mars in a polar orbit at a nominal altitude of 325 km. The CRISM spectral range spans the ultraviolet (UV) to the mid-wave infrared (MWIR), 400 nm to 4050 nm. The instrument utilizes a Ritchey-Chretien telescope with a 2.06º field of view (FOV) to focus light on the entrance slit of a dual spectrometer. Within the spectrometer light is split by a dichroic into VNIR (visible-near infrared) (λ <= 1.05 μm) and IR (infrared) (λ >= 1.05 μm) beams. Each beam is directed into a separate modified Offner spectrometer that focuses a spectrally dispersed image of the slit onto a two dimensional focal plane (FP). The IR FP is a 640 x 480 HgCdTe area array; the VNIR FP is a 640 x 480 silicon photodiode area array. The spectral image is contiguously sampled with a 6.55 nm spectral spacing and an instantaneous field of view of 60 μradians. The orbital motion of the MRO pushbroom scans the spectrometer slit across the Martian surface, allowing the planet to be mapped in 558 spectral bands. There are four major mapping modes: A quick initial multi-spectral mapping of a major portion of the Martian surface in 59 selected spectral bands at a spatial resolution of 600 μradians (10:1 binning); an extended multi-spectral mapping of the entire Martian surface in 59 selected spectral bands at a spatial resolution of 300 μradians (5:1 binning); a high resolution Target Mode, performing hyperspectral mapping of selected targets of interest at full spatial and spectral resolution; and an atmospheric Emission Phase Function (EPF) mode for atmospheric study and correction at full spectral resolution at a spatial resolution of 300 μradians (5:1 binning). The instrument is gimbaled to allow scanning over +/-60° for the EPF and Target modes. The scanning also permits orbital motion compensation, enabling longer integration times and consequently higher signal-to-noise ratios for selected areas on the Martian surface in Target Mode.
NASA Astrophysics Data System (ADS)
Silverglate, Peter R.; Fort, Dennis E.
2003-12-01
CRISM (Compact Reconnaissance Imaging Spectrometer for Mars) is a hyperspectral imager that will be launched on the MRO (Mars Reconnaissance Orbiter) in August 2005. The MRO will circle Mars in a polar orbit at a nominal altitude of 325 km. The CRISM spectral range spans the ultraviolet (UV) to the mid-wave infrared (MWIR), 400 nm to 4050 nm. The instrument utilizes a Ritchey-Chretien telescope with a 2.06º field of view (FOV) to focus light on the entrance slit of a dual spectrometer. Within the spectrometer light is split by a dichroic into VNIR (visible-near infrared) (λ <= 1.05 μm) and IR (infrared) (λ >= 1.05 μm) beams. Each beam is directed into a separate modified Offner spectrometer that focuses a spectrally dispersed image of the slit onto a two dimensional focal plane (FP). The IR FP is a 640 x 480 HgCdTe area array; the VNIR FP is a 640 x 480 silicon photodiode area array. The spectral image is contiguously sampled with a 6.55 nm spectral spacing and an instantaneous field of view of 60 μradians. The orbital motion of the MRO pushbroom scans the spectrometer slit across the Martian surface, allowing the planet to be mapped in 558 spectral bands. There are four major mapping modes: A quick initial multi-spectral mapping of a major portion of the Martian surface in 59 selected spectral bands at a spatial resolution of 600 μradians (10:1 binning); an extended multi-spectral mapping of the entire Martian surface in 59 selected spectral bands at a spatial resolution of 300 μradians (5:1 binning); a high resolution Target Mode, performing hyperspectral mapping of selected targets of interest at full spatial and spectral resolution; and an atmospheric Emission Phase Function (EPF) mode for atmospheric study and correction at full spectral resolution at a spatial resolution of 300 μradians (5:1 binning). The instrument is gimbaled to allow scanning over +/-60° for the EPF and Target modes. The scanning also permits orbital motion compensation, enabling longer integration times and consequently higher signal-to-noise ratios for selected areas on the Martian surface in Target Mode.
Human visual system consistent quality assessment for remote sensing image fusion
NASA Astrophysics Data System (ADS)
Liu, Jun; Huang, Junyi; Liu, Shuguang; Li, Huali; Zhou, Qiming; Liu, Junchen
2015-07-01
Quality assessment for image fusion is essential for remote sensing applications. Generally used indices require a high spatial resolution multispectral (MS) image as a reference, which is not always readily available. Meanwhile, fusion quality assessments using these indices may not be consistent with the Human Visual System (HVS). To overcome this requirement and inconsistency, this paper proposes an HVS-consistent image fusion quality assessment index that works at the highest resolution without a reference MS image, using Gaussian Scale Space (GSS) technology to simulate the HVS. The spatial details and spectral information of the original and fused images are first separated in GSS, and their qualities are evaluated using the proposed spatial and spectral quality indices, respectively. The overall quality is determined without a reference MS image by a combination of the two proposed indices. Experimental results on various remote sensing images indicate that the proposed index is more consistent with HVS evaluation than other widely used indices, whether or not they require reference images.
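The basic operation such an index builds on, separating an image into a low-frequency layer and spatial detail within a Gaussian scale space, can be sketched as follows; the proposed index itself and its HVS weighting are not reproduced, and the scales are illustrative.

```python
# Sketch of separating spatial detail from low-frequency content with a
# Gaussian scale space; the proposed quality index is not reproduced here.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(8)
fused = rng.random((128, 128))

scales = [1.0, 2.0, 4.0]                          # increasing Gaussian scales
layers = [gaussian_filter(fused, sigma=s) for s in scales]

base = layers[-1]                                 # coarse layer: low-frequency (spectral) content
detail = fused - layers[0]                        # fine layer: spatial detail
print(base.std(), detail.std())
```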
Sentinel-2A image quality commissioning phase final results: geometric calibration and performances
NASA Astrophysics Data System (ADS)
Languille, F.; Gaudel, A.; Dechoz, C.; Greslou, D.; de Lussy, F.; Trémas, T.; Poulain, V.; Massera, S.
2016-10-01
In the frame of the Copernicus program of the European Commission, Sentinel-2 offers multispectral high-spatial-resolution optical images over global terrestrial surfaces. In cooperation with ESA, the Centre National d'Etudes Spatiales (CNES) is in charge of the image quality of the project, and thus ensured the CAL/VAL commissioning phase during the months following the launch. Sentinel-2 is a constellation of two satellites on a polar sun-synchronous orbit with a revisit time of 5 days (with both satellites), a wide field of view (290 km), 13 spectral bands in the visible and shortwave infrared, and high spatial resolution (10 m, 20 m and 60 m). The Sentinel-2 mission offers global coverage of terrestrial surfaces. The satellites systematically acquire terrestrial surfaces under the same viewing conditions in order to build temporal image stacks. The first satellite was launched in June 2015. Following the launch, the CAL/VAL commissioning phase for geometric calibration lasted six months. This paper reports observations and results obtained from Sentinel-2 images during the commissioning phase. It provides explanations of the geometric corrections applied to delivered Sentinel-2 products. The paper details the calibration sites and the methods used to calibrate the geometric parameters, and presents the associated results. The following topics are presented: viewing frame orientation assessment, focal plane mapping for all spectral bands, geolocation assessment results, and multispectral registration. Images are systematically recalibrated over a common reference, a set of Sentinel-2 images produced during the six months of CAL/VAL. This set of images is presented, together with the geolocation performance and the multitemporal performance after refinement over this ground reference.
Earth mapping - aerial or satellite imagery comparative analysis
NASA Astrophysics Data System (ADS)
Fotev, Svetlin; Jordanov, Dimitar; Lukarski, Hristo
Nowadays, solving the tasks of revising existing map products and creating new maps requires choosing the source of land cover imagery. The issue of the effectiveness and cost of aerial mapping systems versus the efficiency and cost of very high resolution satellite imagery is topical [1, 2, 3, 4]. The price of any remotely sensed image depends on the product (panchromatic or multispectral), resolution, processing level, scale, urgency of the task, and on whether the needed image is available in the archive or has to be requested. The purpose of the present work is to make a comparative analysis of the two approaches to mapping the Earth with respect to two parameters, quality and cost, and to suggest an approach for selecting the source of map information: airplane-based or spacecraft-based imaging systems with very high spatial resolution. Two cases are considered: an area approximately equal to one satellite scene, and an area approximately equal to the territory of Bulgaria.
NASA Astrophysics Data System (ADS)
Fernandes, Maria Rosário; Aguiar, Francisca C.; Silva, João M. N.; Ferreira, Maria Teresa; Pereira, José M. C.
2014-10-01
Giant reed is an aggressive invasive plant of riparian ecosystems in many sub-tropical and warm-temperate regions, including Mediterranean Europe. In this study we tested a set of geometric, spectral and textural attributes in an object-based image analysis (OBIA) approach to map giant reed invasions in riparian habitats. Bagging Classification and Regression Trees were used to select the optimal attributes and to build the classification rule sets. Mapping accuracy was assessed using landscape metrics and the Kappa coefficient to compare the topographical and geometric similarity between the giant reed patches obtained with the OBIA map and with a validation map derived from on-screen digitizing. The methodology was applied to two high spatial resolution images: airborne multispectral imagery and the new WorldView-2 imagery. A temporal coverage of the airborne multispectral images was radiometrically calibrated with the IR-MAD transformation and used to assess the influence of the phenological variability of the invader. We found that the optimal attributes for giant reed OBIA detection are a combination of spectral, geometric and textural information, with different selection scores depending on the spectral and spatial characteristics of the imagery. WorldView-2 showed higher mapping accuracy (Kappa coefficient of 77%) and spectral attributes, including the new yellow band, were preferentially selected, although a tendency to overestimate the total invaded area, due to the lower spatial resolution (2 m pixel size vs. 50 cm), was observed. When airborne images were used, geometric attributes were primarily selected and a higher spatial detail of the invasive patches was obtained, due to the higher spatial resolution. However, in highly heterogeneous landscapes, the lower spectral resolution of the airborne images (4 bands instead of the 8 of WorldView-2) reduces the capability to detect giant reed patches. Giant reed displays peculiar spectral and geometric traits at leaf, canopy and stand level, which makes the OBIA approach a very suitable technique for management purposes.
NASA Astrophysics Data System (ADS)
Svejkovsky, Jan; Nezlin, Nikolay P.; Mustain, Neomi M.; Kum, Jamie B.
2010-04-01
Spatial-temporal characteristics and environmental factors regulating the behavior of stormwater runoff from the Tijuana River in southern California were analyzed utilizing very high resolution aerial imagery and time-coincident environmental and bacterial sampling data. Thirty-nine multispectral aerial images with 2.1-m spatial resolution were collected after major rainstorms during 2003-2008. Utilizing differences in color reflectance characteristics, the ocean surface was classified into non-plume waters and three components of the runoff plume reflecting differences in age and suspended sediment concentration. Tijuana River discharge rate was the primary factor regulating the size of the freshest plume component and its shorelong extensions to the north and south. Wave direction was found to affect the shorelong distribution of the shoreline-connected fresh plume components much more strongly than wind direction. Wave-driven sediment resuspension also contributed significantly to the size of the oldest plume component. Surf zone bacterial samples collected near the time of each image acquisition were used to evaluate the contamination characteristics of each plume component. The bacterial contamination of the freshest plume waters was very high (100% of surf zone samples exceeded California standards), but the oldest plume areas were heterogeneous, including both polluted and clean waters. The aerial imagery archive allowed study of river runoff characteristics at the plume component level, not previously possible with coarser satellite images. Our findings suggest that high resolution imaging can quickly identify the spatial extent of the most polluted runoff but cannot be relied upon to always identify the entire polluted area. Our results also indicate that wave-driven transport is important in distributing the most contaminated plume areas along the shoreline.
NASA Astrophysics Data System (ADS)
Hayduk, Robert J.; Scott, Walter S.; Walberg, Gerald D.; Butts, James J.; Starr, Richard D.
1997-01-01
The Small Satellite Technology Initiative (SSTI) is a National Aeronautics and Space Administration (NASA) program to demonstrate smaller, high technology satellites constructed rapidly and less expensively. Under SSTI, NASA funded the development of "Clark," a high technology demonstration satellite to provide 3-m resolution panchromatic and 15-m resolution multispectral images, as well as collect atmospheric constituent and cosmic x-ray data. The 690-lb. satellite, to be launched in early 1997, will be in a 476 km, circular, sun-synchronous polar orbit. This paper describes the program objectives, the technical characteristics of the sensors and satellite, image processing, archiving and distribution. Data archiving and distribution will be performed by NASA Stennis Space Center and by the EROS Data Center, Sioux Falls, South Dakota, USA.
SIRF: Simultaneous Satellite Image Registration and Fusion in a Unified Framework.
Chen, Chen; Li, Yeqing; Liu, Wei; Huang, Junzhou
2015-11-01
In this paper, we propose a novel method for image fusion of a high-resolution panchromatic image and a low-resolution multispectral (MS) image at the same geographical location. The fusion is formulated as a convex optimization problem which minimizes a linear combination of a least-squares fitting term and a dynamic gradient sparsity regularizer. The former preserves accurate spectral information of the MS image, while the latter keeps the sharp edges of the high-resolution panchromatic image. We further propose to simultaneously register the two images during the fusing process, which is naturally achieved by virtue of the dynamic gradient sparsity property. An efficient algorithm is then devised to solve the optimization problem, achieving linear computational complexity in the size of the output image in each iteration. We compare our method against six state-of-the-art image fusion methods on MS image data sets from four satellites. Extensive experimental results demonstrate that the proposed method substantially outperforms the others in terms of both spatial and spectral qualities. We also show that our method can provide high-quality products from coarsely registered real-world IKONOS data sets. Finally, a MATLAB implementation is provided to facilitate future research.
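A simplified sketch of the kind of objective described above is given below: a least-squares fit of the downsampled fused image to the MS image plus a sparsity penalty on gradients grouped across bands. This is an illustrative stand-in under assumed operators, not the paper's exact dynamic gradient sparsity regularizer, its registration term, or its solver.

```python
# Simplified stand-in for a fusion objective: data fit to the MS image plus a
# group-sparse (l2 across bands, l1 across pixels) gradient penalty.
import numpy as np

def objective(x: np.ndarray, ms: np.ndarray, lam: float, factor: int) -> float:
    """x: (rows, cols, bands) candidate fused image; ms: low-resolution MS image."""
    r, c, b = x.shape
    # Data term: block-average x down to the MS grid and compare.
    x_low = x.reshape(r // factor, factor, c // factor, factor, b).mean(axis=(1, 3))
    fit = 0.5 * np.sum((x_low - ms) ** 2)
    # Gradient sparsity term, grouped across bands at each pixel.
    gx = np.diff(x, axis=0, append=x[-1:, :, :])
    gy = np.diff(x, axis=1, append=x[:, -1:, :])
    grad_mag = np.sqrt(np.sum(gx ** 2 + gy ** 2, axis=2))
    return fit + lam * grad_mag.sum()

rng = np.random.default_rng(9)
ms = rng.random((16, 16, 4))
x0 = np.repeat(np.repeat(ms, 4, axis=0), 4, axis=1)   # naive upsampled starting point
print(objective(x0, ms, lam=0.1, factor=4))
```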
NASA Astrophysics Data System (ADS)
Nijland, W.; Coops, N. C.; Nielsen, S. E.; Stenhouse, G.
2015-06-01
Wildlife habitat selection is determined by a wide range of factors including food availability, shelter, security and landscape heterogeneity, all of which are closely related to the more readily mapped landcover types and disturbance regimes. Regional wildlife habitat studies have often used moderate resolution multispectral satellite imagery for wall-to-wall mapping because it offers a favourable mix of availability, cost and resolution. However, certain habitat characteristics such as canopy structure and topographic factors are not well discriminated with these passive optical datasets. Airborne laser scanning (ALS) provides highly accurate three-dimensional data on canopy structure and the underlying terrain, thereby offering significant enhancements to wildlife habitat mapping. In this paper, we introduce an approach to integrate ALS data and multispectral images to develop a new heuristic wildlife habitat classifier for western Alberta. Our method combines ALS direct measures of canopy height and cover with optical estimates of species (conifer vs. deciduous) composition in a decision tree classifier for habitat or landcover types. We believe this new approach is highly versatile and transferable, because class rules can be easily adapted for other species or functional groups. We discuss the implications of increased ALS availability for habitat mapping and wildlife management and provide recommendations for integrating multispectral and ALS data into wildlife management.
High Spatial Resolution Thermal Satellite Technologies
NASA Technical Reports Server (NTRS)
Ryan, Robert
2003-01-01
This document, in the form of view slides, reviews various low-cost alternatives to high spatial resolution thermal satellite technologies. There exists no follow-on to the Landsat 7 or ASTER high spatial resolution thermal systems. This document reviews the results of an investigation into the use of new technologies to create a low-cost, useful alternative. Three candidate technologies are examined. 1. Conventional microbolometers in pushbroom mode offer potential for a low-cost Landsat Data Continuity Mission (LDCM) thermal or ASTER capability with at least 60-120 m ground sampling distance (GSD). 2. Backscanning could produce MultiSpectral Thermal Imager performance without cooled detectors. 3. Cooled detectors could produce a hyperspectral thermal class system or an extremely high spatial resolution class instrument.
Acquisition performance of LAPAN-A3/IPB multispectral imager in real-time mode of operation
NASA Astrophysics Data System (ADS)
Hakim, P. R.; Permala, R.; Jayani, A. P. S.
2018-05-01
The LAPAN-A3/IPB satellite was launched in June 2016 and its multispectral imager has been producing images covering Indonesia. In order to improve its support for remote sensing applications, the imager should produce images of high quality and quantity. To increase the quantity of LAPAN-A3/IPB multispectral images captured, image acquisition can be executed in real-time mode from the LAPAN ground station in Bogor when the satellite passes over western Indonesia. This research analyses the performance of LAPAN-A3/IPB multispectral imager acquisition in real-time mode, in terms of image quality and quantity, under the assumption of several on-board and ground segment limitations. Results show that in real-time operation mode, the LAPAN-A3/IPB multispectral imager could produce twice as much image coverage compared to recorded mode. However, the images produced in real-time mode have slightly degraded quality due to the image compression process involved. Based on the analyses carried out in this research, it is recommended to use real-time acquisition mode whenever possible, except in circumstances that do not allow any quality degradation of the images produced.
Initial clinical testing of a multi-spectral imaging system built on a smartphone platform
NASA Astrophysics Data System (ADS)
Mink, Jonah W.; Wexler, Shraga; Bolton, Frank J.; Hummel, Charles; Kahn, Bruce S.; Levitz, David
2016-03-01
Multi-spectral imaging systems are often expensive and bulky. An innovative multi-spectral imaging system was fitted onto a mobile colposcope, an imaging system built around a smartphone, in order to image the uterine cervix from outside the body. The multi-spectral mobile colposcope (MSMC) acquires images at different wavelengths. This paper presents the clinical testing of MSMC imaging (technical validation of the MSMC system is described elsewhere [1]). Patients who were referred to colposcopy following an abnormal screening test (Pap or HPV DNA test) according to the standard of care were enrolled. Multi-spectral image sets of the cervix, consisting of images at the various wavelengths, were acquired; image acquisition took 1-2 sec. Areas suspected of dysplasia under white light imaging were biopsied, according to the standard of care. Biopsy sites were recorded on a clock-face map of the cervix. Following the procedure, MSMC data from the biopsied sites were processed. To date, the initial histopathological results are still outstanding. Qualitatively, structures in the cervical images were sharper at lower wavelengths than at higher wavelengths. Patients tolerated imaging well. The results suggest MSMC holds promise for cervical imaging.
[Detecting fire smoke based on the multispectral image].
Wei, Ying-Zhuo; Zhang, Shao-Wu; Liu, Yan-Wei
2010-04-01
Smoke detection is very important for preventing forest fires in the early stage of a fire. Because traditional technologies based on video and image processing are easily affected by dynamic background information, they suffer from three limitations: low anti-interference ability, a high false detection rate, and difficulty distinguishing fire smoke from water fog. A novel method for detecting smoke based on multispectral images is proposed in the present paper. Using a multispectral digital imaging technique, multispectral image series of fire smoke and water fog were obtained over the band range of 400 to 720 nm, and the images were divided into bins. The Euclidean distance between bins was taken as a measure of the spectral difference. After obtaining the spectral feature vectors of the dynamic regions, the regions of fire smoke and water fog were extracted according to the spectral feature differences between target and background. Indoor and outdoor experiments show that the multispectral-image-based smoke detection method can effectively distinguish fire smoke from water fog. Combined with video image processing, the multispectral image detection method can also be applied to forest fire surveillance, reducing the false alarm rate in forest fire detection.
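The discrimination step described above reduces to comparing the spectral feature vector of a dynamic region against reference vectors by Euclidean distance; the sketch below shows this with synthetic spectra standing in for the 400-720 nm bins.

```python
# Sketch of discriminating smoke from water fog by Euclidean distance between
# spectral feature vectors; the spectra below are illustrative, not measured data.
import numpy as np

smoke_ref = np.array([0.30, 0.32, 0.35, 0.38, 0.40, 0.41])   # 400-720 nm bins (synthetic)
fog_ref = np.array([0.60, 0.61, 0.62, 0.62, 0.63, 0.63])

def classify_region(spectrum: np.ndarray) -> str:
    d_smoke = np.linalg.norm(spectrum - smoke_ref)
    d_fog = np.linalg.norm(spectrum - fog_ref)
    return "smoke" if d_smoke < d_fog else "water fog"

region = np.array([0.31, 0.33, 0.36, 0.37, 0.41, 0.42])
print(classify_region(region))
```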
Multispectral Palmprint Recognition Using a Quaternion Matrix
Xu, Xingpeng; Guo, Zhenhua; Song, Changjiang; Li, Yafeng
2012-01-01
Palmprints have been widely studied for biometric recognition for many years. Traditionally, a white light source is used for illumination. Recently, multispectral imaging has drawn attention because of its high recognition accuracy. Multispectral palmprint systems can provide more discriminant information under different illuminations in a short time, thus they can achieve better recognition accuracy. Previously, multispectral palmprint images were taken as a kind of multi-modal biometrics, and the fusion scheme on the image level or matching score level was used. However, some spectral information will be lost during image level or matching score level fusion. In this study, we propose a new method for multispectral images based on a quaternion model which could fully utilize the multispectral information. Firstly, multispectral palmprint images captured under red, green, blue and near-infrared (NIR) illuminations were represented by a quaternion matrix, then principal component analysis (PCA) and discrete wavelet transform (DWT) were applied respectively on the matrix to extract palmprint features. After that, Euclidean distance was used to measure the dissimilarity between different features. Finally, the sum of two distances and the nearest neighborhood classifier were employed for recognition decision. Experimental results showed that using the quaternion matrix can achieve a higher recognition rate. Given 3000 test samples from 500 palms, the recognition rate can be as high as 98.83%. PMID:22666049
Multispectral palmprint recognition using a quaternion matrix.
Xu, Xingpeng; Guo, Zhenhua; Song, Changjiang; Li, Yafeng
2012-01-01
Palmprints have been widely studied for biometric recognition for many years. Traditionally, a white light source is used for illumination. Recently, multispectral imaging has drawn attention because of its high recognition accuracy. Multispectral palmprint systems can provide more discriminant information under different illuminations in a short time, thus they can achieve better recognition accuracy. Previously, multispectral palmprint images were taken as a kind of multi-modal biometrics, and the fusion scheme on the image level or matching score level was used. However, some spectral information will be lost during image level or matching score level fusion. In this study, we propose a new method for multispectral images based on a quaternion model which could fully utilize the multispectral information. Firstly, multispectral palmprint images captured under red, green, blue and near-infrared (NIR) illuminations were represented by a quaternion matrix, then principal component analysis (PCA) and discrete wavelet transform (DWT) were applied respectively on the matrix to extract palmprint features. After that, Euclidean distance was used to measure the dissimilarity between different features. Finally, the sum of two distances and the nearest neighborhood classifier were employed for recognition decision. Experimental results showed that using the quaternion matrix can achieve a higher recognition rate. Given 3000 test samples from 500 palms, the recognition rate can be as high as 98.83%.
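The core idea of the quaternion representation above is to treat the four illumination channels (R, G, B, NIR) of each palmprint pixel as the four components of a quaternion. The sketch below builds that representation and, for brevity, substitutes the quaternion modulus and ordinary PCA on the stacked components for the paper's quaternion PCA/DWT, which are more involved.

```python
# Sketch of a quaternion-style representation of a 4-illumination palmprint.
# Ordinary PCA on the stacked components is a simplified stand-in for the
# quaternion PCA/DWT used in the paper.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(10)
red, green, blue, nir = (rng.random((64, 64)) for _ in range(4))

quaternion = np.stack([red, green, blue, nir], axis=-1)   # (rows, cols, 4) = (w, x, y, z)
modulus = np.sqrt((quaternion ** 2).sum(axis=-1))         # per-pixel quaternion magnitude

features = PCA(n_components=3).fit_transform(quaternion.reshape(-1, 4))
print(modulus.shape, features.shape)
```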
MOVING BEYOND COLOR: THE CASE FOR MULTISPECTRAL IMAGING IN BRIGHTFIELD PATHOLOGY
Cukierski, William J.; Qi, Xin; Foran, David J.
2009-01-01
A multispectral camera is capable of imaging a histologic slide at narrow bandwidths over the range of the visible spectrum. While several uses for multispectral imaging (MSI) have been demonstrated in pathology [1, 2], there is no unified consensus over when and how MSI might benefit automated analysis [3, 4]. In this work, we use a linear-algebra framework to investigate the relationship between the spectral image and its standard-image counterpart. The multispectral “cube” is treated as an extension of a traditional image in a high-dimensional color space. The concept of metamers is introduced and used to derive regions of the visible spectrum where MSI may provide an advantage. Furthermore, histological stains which are amenable to analysis by MSI are reported. We show the Commission internationale de l’éclairage (CIE) 1931 transformation from spectrum to color is non-neighborhood preserving. Empirical results are demonstrated on multispectral images of peripheral blood smears. PMID:19997528
Object Manifold Alignment for Multi-Temporal High Resolution Remote Sensing Images Classification
NASA Astrophysics Data System (ADS)
Gao, G.; Zhang, M.; Gu, Y.
2017-05-01
Classification of multi-temporal remote sensing images is very useful for monitoring land cover changes. Traditional approaches in this field mainly face limited labelled samples and spectral drift of the image information. As spatial resolution improves, the "pepper and salt" effect appears and classification results are affected when pixelwise classification algorithms, which ignore the spatial relationship among pixels, are applied to high-resolution satellite images. To classify multi-temporal high resolution images with limited labelled samples, spectral drift and the "pepper and salt" problem, an object-based manifold alignment method is proposed. Firstly, the multi-temporal multispectral images are segmented into superpixels by simple linear iterative clustering (SLIC). Secondly, features obtained from the superpixels are assembled into vectors. Thirdly, a majority-voting manifold alignment method aimed at the high-resolution problem is proposed and the vector data are mapped to the alignment space. Finally, all the data in the alignment space are classified using the KNN method. Multi-temporal images from different areas or the same area are both considered in this paper. In the experiments, two groups of multi-temporal HR images collected by the China GF1 and GF2 satellites are used for performance evaluation. Experimental results indicate that the proposed method not only significantly outperforms traditional domain adaptation methods in classification accuracy, but also effectively overcomes the "pepper and salt" problem.
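The superpixel and KNN steps of such a pipeline can be sketched as follows, using scikit-image's SLIC and scikit-learn's KNN (a recent scikit-image with the channel_axis argument is assumed); the majority-voting manifold alignment step is not reproduced, and the features here are simply mean superpixel spectra on a synthetic image.

```python
# Sketch of the superpixel (SLIC) and KNN steps; the manifold alignment step
# itself is not reproduced. Image, labels, and features are synthetic.
import numpy as np
from skimage.segmentation import slic
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(11)
image = rng.random((120, 120, 4))                      # synthetic 4-band image

segments = slic(image, n_segments=200, compactness=10, channel_axis=-1)
ids = np.unique(segments)
features = np.array([image[segments == i].mean(axis=0) for i in ids])  # mean spectrum per superpixel

# Pretend a few superpixels are labelled; classify the rest with KNN.
labelled = rng.choice(len(ids), size=40, replace=False)
labels = (features[labelled, 3] > 0.5).astype(int)     # toy labels
knn = KNeighborsClassifier(n_neighbors=3).fit(features[labelled], labels)
predicted = knn.predict(features)
print(predicted.shape)
```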
Multispectral glancing incidence X-ray telescope
NASA Technical Reports Server (NTRS)
Hoover, Richard B. (Inventor)
1990-01-01
A multispectral glancing incidence X-ray telescope is described that is capable of broadband, high-resolution imaging of solar and stellar X-ray and extreme ultraviolet radiation sources. It includes a primary optical system, preferably of the Wolter I type, having a primary mirror system (20, 22). The primary optical system further includes an optical axis (24) having a primary focus (F1) at which the incoming radiation is focused by the primary mirrors. A plurality of ellipsoidal mirrors (30a, 30b, 30c and 30d) are carried at an inclination to the optical axis behind the primary focus (F1). A rotating carrier (32) is provided on which the ellipsoidal mirrors are carried so that a desired one of the ellipsoidal mirrors may be selectively positioned in front of the incoming radiation beam (26). In the preferred embodiment, each of the ellipsoidal mirrors has an identical concave surface carrying a layered synthetic microstructure coating tailored to reflect a desired wavelength of 1.5 Å or longer. Each of the identical ellipsoidal mirrors has a second focus (F2) at which a detector (16) is carried. Thus a different-wavelength image is focused upon the detector regardless of which mirror is positioned in front of the radiation beam. In this manner, a plurality of short wavelengths in a wavelength band generally less than 30 angstroms can be imaged with high resolution.
Andrew T. Hudak; Jeffrey S. Evans; Michael J. Falkowski; Nicholas L. Crookston; Paul E. Gessler; Penelope Morgan; Alistair M. S. Smith
2005-01-01
Multispectral satellite imagery are appealing for their relatively low cost, and have demonstrated utility at the landscape level, but are typically limited at the stand level by coarse resolution and insensitivity to variation in vertical canopy structure. In contrast, lidar data are less affected by these difficulties, and provide high structural detail, but are less...
Detection of particulate air pollution plumes from major point sources using ERTS-1 imagery
NASA Technical Reports Server (NTRS)
Lyons, W. A.; Pease, S. R.
1973-01-01
The Earth Resources Technology Satellite (ERTS-1) launched by NASA in July 1972 has been providing thousands of high resolution multispectral images of interest to geographers, cartographers, hydrologists, and agriculturists. It has been found possible to detect the long-range (over 50 km) transport of suspected particulate plumes from the Chicago-Gary steel mill complex over Lake Michigan. The observed plumes are readily related to known steel mills, a cement plant, refineries, and fossil-fuel power plants. This has important ramifications when discussing the interregional transport of atmospheric pollutants. Analysis reveals that the Multispectral Scanner Band 5 (0.6 to 0.7 micrometer) provides the best overall contrast between the smoke and the underlying water surface.
Forest cover type analysis of New England forests using innovative WorldView-2 imagery
NASA Astrophysics Data System (ADS)
Kovacs, Jenna M.
For many years, remote sensing has been used to generate land cover type maps to create a visual representation of what is occurring on the ground. One significant use of remote sensing is the identification of forest cover types. New England forests are notorious for their especially complex forest structure and as a result have been, and continue to be, a challenge when classifying forest cover types. To most accurately depict forest cover types occurring on the ground, it is essential to utilize image data that have a suitable combination of both spectral and spatial resolution. The WorldView-2 (WV2) commercial satellite, launched in 2009, is the first of its kind, having both high spectral and spatial resolutions. WV2 records eight bands of multispectral imagery, four more than the usual high spatial resolution sensors, and has a pixel size of 1.85 meters at the nadir. These additional bands have the potential to improve classification detail and classification accuracy of forest cover type maps. For this reason, WV2 imagery was utilized on its own, and in combination with Landsat 5 TM (LS5) multispectral imagery, to evaluate whether these image data could more accurately classify forest cover types. In keeping with recent developments in image analysis, an Object-Based Image Analysis (OBIA) approach was used to segment images of Pawtuckaway State Park and nearby private lands, an area representative of the typical complex forest structure found in the New England region. A Classification and Regression Tree (CART) analysis was then used to classify image segments at two levels of classification detail. Accuracies for each forest cover type map produced were generated using traditional and area-based error matrices, and additional standard accuracy measures (i.e., KAPPA) were generated. The results from this study show that there is value in analyzing imagery with both high spectral and spatial resolutions, and that WV2's new and innovative bands can be useful for the classification of complex forest structures.
A multispectral imaging approach for diagnostics of skin pathologies
NASA Astrophysics Data System (ADS)
Lihacova, Ilze; Derjabo, Aleksandrs; Spigulis, Janis
2013-06-01
A noninvasive multispectral imaging method was applied to the diagnostics of different skin pathologies such as nevus, basal cell carcinoma, and melanoma. A melanoma diagnostic parameter, using three spectral bands (540 nm, 650 nm and 950 nm), was developed and calculated for nevus, melanoma, and basal cell carcinoma. A simple multispectral diagnostic device was built and applied for skin assessment. The development and application of the multispectral diagnostics method are described further in this article.
EO-1 analysis applicable to coastal characterization
NASA Astrophysics Data System (ADS)
Burke, Hsiao-hua K.; Misra, Bijoy; Hsu, Su May; Griffin, Michael K.; Upham, Carolyn; Farrar, Kris
2003-09-01
The EO-1 satellite is part of NASA's New Millennium Program (NMP). It carries three imaging sensors: the multi-spectral Advanced Land Imager (ALI), Hyperion, and the Atmospheric Corrector. Hyperion is a high-resolution hyperspectral imager resolving 220 spectral bands (from 0.4 to 2.5 microns) at 30 m resolution; the instrument images a 7.5 km by 100 km land area per scene. Hyperion has been the only space-borne HSI data source since the launch of EO-1 in late 2000. The discussion begins with the unique capabilities of hyperspectral sensing for coastal characterization: (1) most ocean-feature algorithms are semi-empirical retrievals, and HSI provides all spectral bands, both to maintain legacy with previous sensors and to explore new information; (2) coastal features are more complex than those of the deep ocean, and their coupled effects are best resolved with HSI; and (3) with contiguous spectral coverage, atmospheric compensation can be done with more accuracy and confidence, especially since atmospheric aerosol effects are most pronounced in the visible region where coastal features lie. EO-1 data over Chesapeake Bay from 19 February 2002 are analyzed. It is first illustrated that hyperspectral data inherently provide more information for feature extraction than multispectral data, even though Hyperion has a lower SNR than ALI. Chlorophyll retrievals are also shown; the results compare favorably with data from other sources. The analysis illustrates the potential value of Hyperion (and HSI in general) data for coastal characterization. Future measurement requirements (airborne and spaceborne) are also discussed.
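Many of the semi-empirical ocean-feature retrievals mentioned above take the form of a blue/green band-ratio polynomial. The sketch below shows that generic form only; the coefficients are placeholders, not those used in the EO-1 Chesapeake Bay analysis.

# Generic blue/green band-ratio chlorophyll retrieval (OC-style polynomial).
# Coefficients are illustrative placeholders, not the ones used in this study.
import numpy as np

def chlorophyll_band_ratio(rrs_blue, rrs_green, coeffs=(0.25, -2.4, 1.0, -0.5)):
    """Estimate chlorophyll-a (mg m^-3) from remote-sensing reflectances."""
    r = np.log10(np.maximum(rrs_blue, 1e-6) / np.maximum(rrs_green, 1e-6))
    log_chl = sum(a * r ** i for i, a in enumerate(coeffs))
    return 10.0 ** log_chl

# Example: a pixel with slightly higher green than blue reflectance
print(chlorophyll_band_ratio(0.004, 0.006))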
Multispectral imaging for biometrics
NASA Astrophysics Data System (ADS)
Rowe, Robert K.; Corcoran, Stephen P.; Nixon, Kristin A.; Ostrom, Robert E.
2005-03-01
Automated identification systems based on fingerprint images are subject to two significant types of error: an incorrect decision about the identity of a person due to a poor quality fingerprint image and incorrectly accepting a fingerprint image generated from an artificial sample or altered finger. This paper discusses the use of multispectral sensing as a means to collect additional information about a finger that significantly augments the information collected using a conventional fingerprint imager based on total internal reflectance. In the context of this paper, "multispectral sensing" is used broadly to denote a collection of images taken under different polarization conditions and illumination configurations, as well as using multiple wavelengths. Background information is provided on conventional fingerprint imaging. A multispectral imager for fingerprint imaging is then described and a means to combine the two imaging systems into a single unit is discussed. Results from an early-stage prototype of such a system are shown.
NASA Astrophysics Data System (ADS)
Rau, J.-Y.; Jhan, J.-P.; Huang, C.-Y.
2015-08-01
The Miniature Multiple Camera Array (MiniMCA-12) is a frame-based multilens/multispectral sensor composed of 12 lenses with narrow-band filters. Due to its small size and light weight, it is suitable for mounting on an Unmanned Aerial System (UAS) to acquire imagery with high spectral, spatial and temporal resolution for various remote sensing applications. However, because each band is only about 10 nm wide, the images have low resolution and signal-to-noise ratio, which makes them unsuitable for image matching and digital surface model (DSM) generation. Moreover, since the spectral correlation among the 12 MiniMCA bands is low, it is difficult to perform tie-point matching and aerial triangulation across all bands at the same time. In this study, we therefore propose the use of a DSLR camera to assist automatic aerial triangulation of MiniMCA-12 imagery and to produce a higher spatial resolution DSM for MiniMCA-12 ortho-image generation. Depending on the maximum payload weight of the UAS used, the two sensors can be carried at the same time or separately. In this study, we adopt a fixed-wing UAS carrying a Canon EOS 5D Mark II DSLR camera and a MiniMCA-12 multi-spectral camera. To perform automatic aerial triangulation between the DSLR camera and the MiniMCA-12, we choose one master band from the MiniMCA-12 whose spectral range overlaps with the DSLR camera. However, because the MiniMCA-12 lenses have different perspective centers and viewing angles, the original 12 channels exhibit significant band misregistration. Thus, the first issue is to reduce this band misregistration effect. Since all 12 MiniMCA lenses are frame-based, their spatial offsets are smaller than 15 cm and the images overlap by almost 98%; we therefore propose a modified projective transformation (MPT) method, together with two systematic error correction procedures, to register all 12 bands of imagery in the same image space. This means that the 12 bands acquired at the same exposure time have the same interior orientation parameters (IOPs) and exterior orientation parameters (EOPs) after band-to-band registration (BBR). In the aerial triangulation stage, the master band of the MiniMCA-12 is treated as a reference channel to link with the DSLR RGB images: all reference images from the master band and all RGB images are triangulated at the same time in the same coordinate system of ground control points (GCPs). Because the spatial resolution of the RGB images is higher than that of the MiniMCA-12, the GCPs can be marked on the RGB images even when they cannot be recognized in the MiniMCA images. Furthermore, a one-meter gridded digital surface model (DSM) is created from the RGB images and applied to the MiniMCA imagery for ortho-rectification. Quantitative error analyses show that the proposed BBR scheme achieves an average misregistration residual length of 0.33 pixels, and that the co-registration errors among the 12 MiniMCA ortho-images and between the MiniMCA and Canon RGB ortho-images are all less than 0.6 pixels. The experimental results demonstrate that the proposed method is robust, reliable and accurate for future remote sensing applications.
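The band-to-band registration idea (warping each slave band into the geometry of a chosen master band with a projective transformation) can be sketched with standard OpenCV calls, as below. This is a plain feature-based homography, not the paper's modified projective transformation with its systematic error corrections.

# Generic band-to-band registration sketch: estimate a projective transform
# (homography) from feature matches and warp the slave band onto the master.
import cv2
import numpy as np

def register_band(master, slave, max_matches=500):
    """Warp `slave` (uint8 grayscale) into the image space of `master`."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(master, None)
    kp2, des2 = orb.detectAndCompute(slave, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]
    dst = np.float32([kp1[m.queryIdx].pt for m in matches])   # master coordinates
    src = np.float32([kp2[m.trainIdx].pt for m in matches])   # slave coordinates
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
    h, w = master.shape[:2]
    return cv2.warpPerspective(slave, H, (w, h))

# registered = [register_band(master_band, b) for b in slave_bands]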
Chander, G.; Scaramuzza, P.L.
2006-01-01
Increasingly, data from multiple sensors are used to gain a more complete understanding of land surface processes at a variety of scales. The Landsat suite of satellites has collected the longest continuous archive of multispectral data. The ResourceSat-1 satellite (also called IRS-P6) was launched into a polar sun-synchronous orbit on Oct 17, 2003. It carries three remote sensing sensors: the High Resolution Linear Imaging Self-Scanner (LISS-IV), the Medium Resolution Linear Imaging Self-Scanner (LISS-III), and the Advanced Wide Field Sensor (AWiFS). These three sensors are used together to provide images with different resolution and coverage. To understand the absolute radiometric calibration accuracy of the IRS-P6 AWiFS and LISS-III sensors, image pairs from these sensors were compared to the Landsat-5 TM and Landsat-7 ETM+ sensors. The approach involved cross-calibration based on image statistics from areas observed nearly simultaneously by the two sensors.
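Cross-calibration from near-simultaneous image pairs typically reduces to a linear fit between statistics of matched regions seen by the two sensors. A minimal sketch of that fitting step is shown below, assuming region-mean top-of-atmosphere reflectances have already been extracted and spectrally matched; the numbers are made up.

# Minimal cross-calibration sketch: fit gain/bias between region-mean TOA
# reflectances of two sensors observing the same areas nearly simultaneously.
# (Spectral band adjustment and geolocation matching are assumed done upstream.)
import numpy as np

def cross_calibrate(ref_means, target_means):
    """Return (gain, bias) mapping the target sensor onto the reference sensor."""
    gain, bias = np.polyfit(target_means, ref_means, deg=1)
    return gain, bias

# Example with made-up region means (reference = Landsat, target = AWiFS/LISS-III)
landsat = np.array([0.08, 0.12, 0.21, 0.35, 0.42])
awifs   = np.array([0.07, 0.11, 0.20, 0.33, 0.40])
gain, bias = cross_calibrate(landsat, awifs)
adjusted = gain * awifs + bias   # AWiFS reflectance on the Landsat scale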
A new multi-spectral feature level image fusion method for human interpretation
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-03-01
Various methods to perform multi-spectral image fusion have been suggested, mostly on the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature level processing paradigm. To test our method, we compared human observer performance in a three-task experiment using MSSF against two established methods: averaging and principal component analysis (PCA), and against its two source bands, visible and infrared. The three tasks that we studied were: (1) simple target detection, (2) spatial orientation, and (3) camouflaged target detection. MSSF proved superior to the other fusion methods in all three tests; MSSF also outperformed the source images in the spatial orientation and camouflaged target detection tasks. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general and specific fusion methods in particular would be superior to using the original image sources can be further addressed.
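The two pixel-level baselines used for comparison (averaging and PCA fusion of the visible and infrared bands) are easy to reproduce; a compact sketch follows. MSSF itself, being a feature-level segmentation method, is not shown.

# The two pixel-level fusion baselines: simple averaging and PCA fusion
# (project the two bands onto their first principal component).
import numpy as np

def average_fusion(vis, ir):
    return 0.5 * (vis.astype(float) + ir.astype(float))

def pca_fusion(vis, ir):
    x = np.stack([vis.ravel(), ir.ravel()], axis=1).astype(float)
    x -= x.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    fused = x @ vt[0]                         # scores on the first PC
    return fused.reshape(vis.shape)

rng = np.random.default_rng(2)
vis, ir = rng.random((256, 256)), rng.random((256, 256))
f_avg, f_pca = average_fusion(vis, ir), pca_fusion(vis, ir)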
Auroral Observations from the POLAR Ultraviolet Imager (UVI)
NASA Technical Reports Server (NTRS)
Germany, G. A.; Spann, J. F.; Parks, G. K.; Brittnacher, M. J.; Elsen, R.; Chen, L.; Lummerzheim, D.; Rees, M. H.
1998-01-01
Because of the importance of the auroral regions as a remote diagnostic of near-Earth plasma processes and magnetospheric structure, space-based instrumentation for imaging the auroral regions has been designed and operated for the last twenty-five years. The latest generation of imagers, including those flown on the POLAR satellite, extends this quest for multispectral resolution by providing three separate imagers for visible, ultraviolet, and X-ray images of the aurora. The ability to observe extended regions allows imaging missions to significantly extend the observations available from in situ or ground-based instrumentation. The complementary nature of imaging and other observations is illustrated below using results from the GGS Ultraviolet Imager (UVI). Details of the requisite energy and intensity analysis are also presented.
X ray imaging microscope for cancer research
NASA Technical Reports Server (NTRS)
Hoover, Richard B.; Shealy, David L.; Brinkley, B. R.; Baker, Phillip C.; Barbee, Troy W., Jr.; Walker, Arthur B. C., Jr.
1991-01-01
The NASA technology employed during the Stanford MSFC LLNL Rocket X Ray Spectroheliograph flight established that doubly reflecting, normal incidence multilayer optics can be designed, fabricated, and used for high resolution x ray imaging of the Sun. Technology developed as part of the MSFC X Ray Microscope program showed that high quality, high resolution multilayer x ray imaging microscopes are feasible. Using technology developed at Stanford University and at the DOE Lawrence Livermore National Laboratory (LLNL), Troy W. Barbee, Jr. has fabricated multilayer coatings with near theoretical reflectivities and perfect bandpass matching for a new rocket borne solar observatory, the Multi-Spectral Solar Telescope Array (MSSTA). Advanced Flow Polishing has provided multilayer mirror substrates with sub-angstrom (rms) smoothness for the astronomical x ray telescopes and x ray microscopes. The combination of these important technological advancements has paved the way for the development of a Water Window Imaging X Ray Microscope for cancer research.
IIPImage: Large-image visualization
NASA Astrophysics Data System (ADS)
Pillay, Ruven
2014-08-01
IIPImage is an advanced, high-performance, feature-rich image server system that enables online access to full resolution floating point (as well as other bit depth) images at terabyte scales. Paired with the VisiOmatic (ascl:1408.010) celestial image viewer, the system can comfortably handle gigapixel size images as well as advanced image features such as 8-, 16- and 32-bit depths, CIELAB colorimetric images and scientific imagery such as multispectral images. Streaming is tile-based, which enables viewing, navigating and zooming in real-time around gigapixel size images. Source images can be in either TIFF or JPEG2000 format. Whole images or regions within images can also be rapidly and dynamically resized and exported by the server from a single source image without the need to store multiple files in various sizes.
Tissue classification for laparoscopic image understanding based on multispectral texture analysis
NASA Astrophysics Data System (ADS)
Zhang, Yan; Wirkert, Sebastian J.; Iszatt, Justin; Kenngott, Hannes; Wagner, Martin; Mayer, Benjamin; Stock, Christian; Clancy, Neil T.; Elson, Daniel S.; Maier-Hein, Lena
2016-03-01
Intra-operative tissue classification is one of the prerequisites for providing context-aware visualization in computer-assisted minimally invasive surgeries. As many anatomical structures are difficult to differentiate in conventional RGB medical images, we propose a classification method based on multispectral image patches. In a comprehensive ex vivo study we show (1) that multispectral imaging data is superior to RGB data for organ tissue classification when used in conjunction with widely applied feature descriptors and (2) that combining the tissue texture with the reflectance spectrum improves the classification performance. Multispectral tissue analysis could thus evolve as a key enabling technique in computer-assisted laparoscopy.
Interactive digital image manipulation system
NASA Technical Reports Server (NTRS)
Henze, J.; Dezur, R.
1975-01-01
The system is designed for manipulation, analysis, interpretation, and processing of a wide variety of image data. LANDSAT (ERTS) and other data in digital form can be input directly into the system. Photographic prints and transparencies are first converted to digital form with an on-line high-resolution microdensitometer. The system is implemented on a Hewlett-Packard 3000 computer with 128 K bytes of core memory and a 47.5 megabyte disk. It includes a true color display monitor, with processing memories, graphics overlays, and a movable cursor. Image data formats are flexible so that there is no restriction to a given set of remote sensors. Conversion between data types is available to provide a basis for comparison of the various data. Multispectral data is fully supported, and there is no restriction on the number of dimensions. In this way multispectral data collected at more than one point in time may simply be treated as data collected with twice (three times, etc.) the number of sensors. There are various libraries of functions available to the user: processing functions, display functions, system functions, and earth resources applications functions.
Clancy, Neil T.; Stoyanov, Danail; James, David R. C.; Di Marco, Aimee; Sauvage, Vincent; Clark, James; Yang, Guang-Zhong; Elson, Daniel S.
2012-01-01
Sequential multispectral imaging is an acquisition technique that involves collecting images of a target at different wavelengths, to compile a spectrum for each pixel. In surgical applications it suffers from low illumination levels and motion artefacts. A three-channel rigid endoscope system has been developed that allows simultaneous recording of stereoscopic and multispectral images. Salient features on the tissue surface may be tracked during the acquisition in the stereo cameras and, using multiple camera triangulation techniques, this information used to align the multispectral images automatically even though the tissue or camera is moving. This paper describes a detailed validation of the set-up in a controlled experiment before presenting the first in vivo use of the device in a porcine minimally invasive surgical procedure. Multispectral images of the large bowel were acquired and used to extract the relative concentration of haemoglobin in the tissue despite motion due to breathing during the acquisition. Using the stereoscopic information it was also possible to overlay the multispectral information on the reconstructed 3D surface. This experiment demonstrates the ability of this system for measuring blood perfusion changes in the tissue during surgery and its potential use as a platform for other sequential imaging modalities. PMID:23082296
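Extracting relative haemoglobin concentration from a per-pixel spectrum is commonly done with a modified Beer-Lambert fit, in which absorbance at each wavelength is modelled as a linear combination of oxy- and deoxy-haemoglobin extinction spectra plus a scattering offset. The sketch below illustrates that general approach with placeholder extinction values; it is not the calibration used with this endoscope.

# Modified Beer-Lambert sketch: least-squares fit of per-pixel absorbance to
# HbO2/Hb extinction spectra plus a flat scattering term. Extinction values
# here are rough placeholders, not a validated calibration.
import numpy as np

wavelengths = np.array([500., 520., 540., 560., 580., 600.])   # nm
ext_hbo2 = np.array([0.11, 0.13, 0.23, 0.18, 0.26, 0.02])      # placeholder
ext_hb   = np.array([0.09, 0.16, 0.20, 0.24, 0.22, 0.04])      # placeholder
E = np.column_stack([ext_hbo2, ext_hb, np.ones_like(wavelengths)])

def fit_haemoglobin(reflectance, reference):
    """Return relative [HbO2, Hb, offset] for one pixel's spectrum."""
    absorbance = -np.log10(np.maximum(reflectance / reference, 1e-6))
    coeffs, *_ = np.linalg.lstsq(E, absorbance, rcond=None)
    return coeffs

pixel = np.array([0.42, 0.38, 0.30, 0.33, 0.28, 0.55])   # measured reflectances
white = np.ones_like(pixel)                               # white reference
hbo2, hb, _ = fit_haemoglobin(pixel, white)
total_hb, saturation = hbo2 + hb, hbo2 / (hbo2 + hb)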
NASA's small spacecraft technology initiative "Clark" spacecraft
NASA Astrophysics Data System (ADS)
Hayduk, Robert J.; Scott, Walter S.; Walberg, Gerald D.; Butts, James J.; Starr, Richard D.
1996-11-01
The Small Satellite Technology Initiative (SSTI) is a National Aeronautics and Space Administration (NASA) program to demonstrate smaller, high technology satellites constructed rapidly and less expensively. Under SSTI, NASA funded the development of "Clark," a high technology demonstration satellite to provide 3-m resolution panchromatic and 15-m resolution multispectral images, as well as collect atmospheric constituent and cosmic x-ray data. The 690-lb. satellite, to be launched in early 1997, will be in a 476 km, circular, sun-synchronous polar orbit. This paper describes the program objectives, the technical characteristics of the sensors and satellite, image processing, archiving and distribution. Data archiving and distribution will be performed by NASA Stennis Space Center and by the EROS Data Center, Sioux Falls, South Dakota, USA.
NASA Astrophysics Data System (ADS)
Kemper, Björn; Kastl, Lena; Schnekenburger, Jürgen; Ketelhut, Steffi
2018-02-01
The main restrictions of using laser light in digital holographic microscopy (DHM) are coherence-induced noise and parasitic reflections in the experimental setup, which limit resolution and measurement accuracy. We explored whether coherence properties of partially coherent light sources can be generated synthetically utilizing spectrally tunable lasers. The concept of the method is demonstrated by label-free quantitative phase imaging of living pancreatic tumor cells, utilizing an experimental configuration including a commercial microscope and a laser source with a broad tunable spectral range of more than 200 nm.
Galileo infrared imaging spectroscopy measurements at venus
Carlson, R.W.; Baines, K.H.; Encrenaz, Th.; Taylor, F.W.; Drossart, P.; Kamp, L.W.; Pollack, James B.; Lellouch, E.; Collard, A.D.; Calcutt, S.B.; Grinspoon, D.; Weissman, P.R.; Smythe, W.D.; Ocampo, A.C.; Danielson, G.E.; Fanale, F.P.; Johnson, T.V.; Kieffer, H.H.; Matson, D.L.; McCord, T.B.; Soderblom, L.A.
1991-01-01
During the 1990 Galileo Venus flyby, the Near Infrared Mapping Spectrometer investigated the night-side atmosphere of Venus in the spectral range 0.7 to 5.2 micrometers. Multispectral images at high spatial resolution indicate substantial cloud opacity variations in the lower cloud levels, centered at 50 kilometers altitude. Zonal and meridional winds were derived for this level and are consistent with motion of the upper branch of a Hadley cell. Northern and southern hemisphere clouds appear to be markedly different. Spectral profiles were used to derive lower atmosphere abundances of water vapor and other species.
Multiple directed graph large-class multi-spectral processor
NASA Technical Reports Server (NTRS)
Casasent, David; Liu, Shiaw-Dong; Yoneyama, Hideyuki
1988-01-01
Numerical analysis techniques for the interpretation of high-resolution imaging-spectrometer data are described and demonstrated. The method proposed involves the use of (1) a hierarchical classifier with a tree structure generated automatically by a Fisher linear-discriminant-function algorithm and (2) a novel multiple-directed-graph scheme which reduces the local maxima and the number of perturbations required. Results for a 500-class test problem involving simulated imaging-spectrometer data are presented in tables and graphs; 100-percent-correct classification is achieved with an improvement factor of 5.
NASA Technical Reports Server (NTRS)
Ando, K.
1982-01-01
A substantial technology base of solid state pushbroom sensors exists and is in the process of further evolution at both GSFC and JPL. Technologies being developed relate to short wave infrared (SWIR) detector arrays; HgCdTe hybrid detector arrays; InSb linear and area arrays; passive coolers; spectral beam splitters; the deposition of spectral filters on detector arrays; and the functional design of the shuttle/space platform imaging spectrometer (SIS) system. Spatial and spectral characteristics of field, aircraft and space multispectral sensors are summarized. The status, field of view, and resolution of foreign land observing systems are included.
Thi Kim Dung, Doan; Fukushima, Shoichiro; Furukawa, Taichi; Niioka, Hirohiko; Sannomiya, Takumi; Kobayashi, Kaori; Yukawa, Hiroshi; Baba, Yoshinobu; Hashimoto, Mamoru; Miyake, Jun
2016-01-01
Comprehensive imaging of a biological individual can be achieved by utilizing the variation in spatial resolution, the scale of cathodoluminescence (CL), and near-infrared (NIR), as favored by imaging probe Gd2O3 co-doped lanthanide nanophosphors (NPPs). A series of Gd2O3:Ln3+/Yb3+ (Ln3+: Tm3+, Ho3+, Er3+) NPPs with multispectral emission are prepared by the sol-gel method. The NPPs show a wide range of emissions spanning from the visible to the NIR region under 980 nm excitation. The dependence of the upconverting (UC)/downconverting (DC) emission intensity on the dopant ratio is investigated. The optimum ratios of dopants obtained for emissions in the NIR regions at 810 nm, 1200 nm, and 1530 nm are applied to produce nanoparticles by the homogeneous precipitation (HP) method. The nanoparticles produced from the HP method are used to investigate the dual NIR and CL imaging modalities. The results indicate the possibility of using Gd2O3 co-doped Ln3+/Yb3+ (Ln3+: Tm3+, Ho3+, Er3+) in correlation with NIR and CL imaging. The use of Gd2O3 promises an extension of the object dimension to the whole-body level by employing magnetic resonance imaging (MRI). PMID:28335291
NASA Technical Reports Server (NTRS)
Oswald, Hayden; Molthan, Andrew L.
2011-01-01
Satellite remote sensing has gained widespread use in the field of operational meteorology. Although raw satellite imagery is useful, several techniques exist which can convey multiple types of data in a more efficient way. One of these techniques is multispectral compositing. The NASA Short-term Prediction Research and Transition (SPoRT) Center has developed two multispectral satellite imagery products which utilize data from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard NASA's Terra and Aqua satellites, based upon products currently generated and used by the European Organization for the Exploitation of Meteorological Satellites (EUMETSAT). The nighttime microphysics product allows users to identify clouds occurring at different altitudes, but emphasizes fog and low cloud detection. This product improves upon current spectral difference and single channel infrared techniques. Each of the current products has its own set of advantages for nocturnal fog detection, but each also has limiting drawbacks which can hamper the analysis process. The multispectral product combines each current product with a third channel difference. Since the final image is enhanced with color, it simplifies the fog identification process. Analysis has shown that the nighttime microphysics imagery product represents a substantial improvement to conventional fog detection techniques, as well as provides a preview of future satellite capabilities to forecasters.
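Composites of this kind are built by mapping brightness-temperature differences onto colour channels. The sketch below follows the commonly documented EUMETSAT-style nighttime microphysics recipe (12.0 minus 10.8 micron difference in red, 10.8 minus 3.9 micron difference in green, the 10.8 micron channel in blue); the scaling ranges are approximate and the exact SPoRT/MODIS band choices may differ.

# Nighttime microphysics RGB sketch: brightness-temperature differences mapped
# to colour channels. Ranges are approximate; band selection follows the
# commonly documented EUMETSAT-style recipe, not necessarily the SPoRT product.
import numpy as np

def scale(x, lo, hi):
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def nighttime_microphysics_rgb(bt039, bt108, bt120):
    """bt* are brightness temperatures (K) at roughly 3.9, 10.8 and 12.0 um."""
    r = scale(bt120 - bt108, -4.0, 2.0)    # optically thin vs thick cloud
    g = scale(bt108 - bt039, 0.0, 10.0)    # fog / low water cloud signal
    b = scale(bt108, 243.0, 293.0)         # surface / cloud-top temperature
    return np.dstack([r, g, b])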
Histogram Curve Matching Approaches for Object-based Image Classification of Land Cover and Land Use
Toure, Sory I.; Stow, Douglas A.; Weeks, John R.; Kumar, Sunil
2013-01-01
The classification of image-objects is usually done using parametric statistical measures of central tendency and/or dispersion (e.g., mean or standard deviation). The objectives of this study were to analyze digital number histograms of image objects and to evaluate classification measures exploiting the characteristic signatures of such histograms. Two histogram-matching classifiers were evaluated and compared to the standard nearest neighbor to mean classifier. An ADS40 airborne multispectral image of San Diego, California was used for assessing the utility of curve matching classifiers in a geographic object-based image analysis (GEOBIA) approach. The classifications were performed with data sets having 0.5 m, 2.5 m, and 5 m spatial resolutions. Results show that histograms are reliable features for characterizing classes. Also, both histogram-matching classifiers consistently performed better than the one based on the standard nearest neighbor to mean rule. The highest classification accuracies were produced with images having 2.5 m spatial resolution. PMID:24403648
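The contrast between a nearest-mean rule and a histogram-matching rule can be sketched as follows: each image object is summarised by a normalised digital-number histogram and assigned to the class whose reference histogram it matches best. The L1 distance used here is a stand-in; the study's specific matching measures may differ.

# Object classification by histogram matching versus nearest mean. The L1
# histogram distance is a stand-in for the study's matching measures.
import numpy as np

def object_histogram(values, bins=32, value_range=(0, 255)):
    h, _ = np.histogram(values, bins=bins, range=value_range)
    return h / max(h.sum(), 1)

def classify_by_histogram(obj_values, class_ref_histograms):
    """class_ref_histograms: dict class -> mean histogram of training objects."""
    h = object_histogram(obj_values)
    dists = {c: np.abs(h - ref).sum() for c, ref in class_ref_histograms.items()}
    return min(dists, key=dists.get)

def classify_by_mean(obj_values, class_means):
    """Baseline nearest-mean rule for comparison."""
    m = np.mean(obj_values)
    return min(class_means, key=lambda c: abs(class_means[c] - m))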
Remote Sensing for Crop Water Management: From ET Modelling to Services for the End Users
Calera, Alfonso; Campos, Isidro; Osann, Anna; D’Urso, Guido; Menenti, Massimo
2017-01-01
The experiences gathered during the past 30 years support the operational use of irrigation scheduling based on frequent multi-spectral image data. Currently, the operational use of dense time series of multispectral imagery at high spatial resolution makes monitoring of crop biophysical parameters feasible, capturing crop water use across the growing season with suitable temporal and spatial resolutions. These achievements, and the availability of accurate forecasting of meteorological data, allow for precise predictions of crop water requirements with unprecedented spatial resolution. This information is greatly appreciated by the end users, i.e., professional farmers or decision-makers, and can be provided in an easy-to-use manner and in near-real-time by using the improvements achieved in web-GIS methodologies (Geographic Information Systems based on web technologies). This paper reviews the most operational and widely explored methods based on optical remote sensing for the assessment of crop water requirements, identifying strengths and weaknesses and proposing alternatives to advance towards full operational application of this methodology. In addition, we provide a general overview of the tools which facilitate co-creation and collaboration with stakeholders, paying special attention to approaches based on web-GIS tools. PMID:28492515
Novel approach to multispectral image compression on the Internet
NASA Astrophysics Data System (ADS)
Zhu, Yanqiu; Jin, Jesse S.
2000-10-01
Still image coding techniques such as JPEG have always been applied to intra-plane images, and coding fidelity is typically used to measure the performance of intra-plane coding methods. In many imaging applications it is increasingly necessary to deal with multi-spectral images, such as color images. In this paper, a novel approach to multi-spectral image compression is proposed that uses transformations among planes for further compression of the spectral planes. Moreover, a mechanism for introducing the human visual system into the transformation is provided to exploit psychovisual redundancy. The new technique for multi-spectral image compression, which is designed to be compatible with the JPEG standard, is demonstrated by extracting correlation among planes based on the human visual system. The scheme achieves a high degree of compactness in the data representation and hence in the compression.
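The core idea of decorrelating the spectral planes with an inter-plane transformation before plane-by-plane JPEG-style coding can be illustrated with a PCA across planes, which concentrates most of the signal energy in the first transformed plane. The HVS-based weighting described in the paper is not reproduced.

# Inter-plane decorrelation sketch: PCA across spectral planes before coding
# each plane separately. The paper's HVS-based weighting is not reproduced.
import numpy as np

def decorrelate_planes(image):
    """image: (H, W, P). Returns transformed planes, transform matrix and mean."""
    h, w, p = image.shape
    x = image.reshape(-1, p).astype(float)
    mean = x.mean(axis=0)
    cov = np.cov(x - mean, rowvar=False)
    _, vecs = np.linalg.eigh(cov)
    t = vecs[:, ::-1]                       # principal planes first
    y = (x - mean) @ t
    return y.reshape(h, w, p), t, mean

rng = np.random.default_rng(3)
base = rng.random((64, 64, 1))
rgb = np.concatenate([base * 0.9, base * 0.7 + 0.05, base * 0.5 + 0.1], axis=-1)
planes, t, mean = decorrelate_planes(rgb)
energy = planes.reshape(-1, 3).var(axis=0)   # most variance in plane 0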
Detecting early stage pressure ulcer on dark skin using multispectral imager
NASA Astrophysics Data System (ADS)
Yi, Dingrong; Kong, Linghua; Sprigle, Stephen; Wang, Fengtao; Wang, Chao; Liu, Fuhan; Adibi, Ali; Tummala, Rao
2010-02-01
We are developing a handheld multispectral imaging device to non-invasively inspect stage I pressure ulcers in darkly pigmented skin without touching the patient's skin. This paper reports some preliminary test results obtained with a proof-of-concept prototype. It also discusses the innovation's impact on traditional multispectral imaging technologies and the fields that will potentially benefit from it.
Classification Accuracy Increase Using Multisensor Data Fusion
NASA Astrophysics Data System (ADS)
Makarau, A.; Palubinskas, G.; Reinartz, P.
2011-09-01
The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, Quickbird, GeoEye-1, etc.), but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to confusion of materials such as different roofs, pavements, roads, etc. and therefore may result in wrong interpretation and use of classification products. Employment of hyperspectral data is another solution, but their low spatial resolution (compared to multispectral data) restricts their usage for many applications. Another improvement can be achieved by fusion of multisensor data, since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach for very high resolution SAR and multispectral data fusion for automatic classification in urban areas. Single polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework allows a relevant way of multisource data combination following consensus theory. The classification is not influenced by the limitations of dimensionality, and the calculation complexity primarily depends on the step of dimensionality reduction. Fusion of single polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allows different types of urban objects to be classified into predefined classes of interest with increased accuracy. A comparison to classification results of WorldView-2 multispectral data (8 spectral bands) is provided, and the numerical evaluation of the method against other established methods illustrates the advantage in classification accuracy for many classes such as buildings, low vegetation, sport objects, forest, roads, rail roads, etc.
Landsat imagery: a unique resource
Miller, H.; Sexton, N.; Koontz, L.
2011-01-01
Landsat satellites provide high-quality, multi-spectral imagery of the surface of the Earth. These moderate-resolution, remotely sensed images are not just pictures, but contain many layers of data collected at different points along the visible and invisible light spectrum. These data can be manipulated to reveal what the Earth’s surface looks like, including what types of vegetation are present or how a natural disaster has impacted an area (Fig. 1).
Segment fusion of ToF-SIMS images.
Milillo, Tammy M; Miller, Mary E; Fischione, Remo; Montes, Angelina; Gardella, Joseph A
2016-06-08
The imaging capabilities of time-of-flight secondary ion mass spectrometry (ToF-SIMS) have not been used to their full potential in the analysis of polymer and biological samples. Imaging has been limited by the size of the dataset and the chemical complexity of the sample being imaged. Pixel- and segment-based image fusion algorithms commonly used in remote sensing, ecology, geography, and geology provide a way to improve spatial resolution and classification of biological images. In this study, a sample of Arabidopsis thaliana was treated with silver nanoparticles and imaged with ToF-SIMS. These images provide insight into the uptake mechanism of the silver nanoparticles into the plant tissue, giving new understanding of the uptake of heavy metals in the environment. The Munechika algorithm was programmed in-house and applied to achieve pixel-based fusion, which improved the spatial resolution of the image obtained. Multispectral and quadtree segment- or region-based fusion algorithms were performed using eCognition software, a commercially available remote sensing software suite, and used to classify the images. The Munechika fusion improved the spatial resolution for the images containing silver nanoparticles, while the segment fusion allowed classification and fusion based on the tissue types in the sample, suggesting potential pathways for the uptake of the silver nanoparticles.
Multispectral Cloud Retrievals from MODIS on Terra and Aqua
NASA Technical Reports Server (NTRS)
King, Michael D.; Platnick, Steven; Ackerman, Steven A.; Menzel, W. Paul; Gray, Mark A.; Moody, Eric G.
2002-01-01
The Moderate Resolution Imaging Spectroradiometer (MODIS) was developed by NASA and launched onboard the Terra spacecraft on December 18, 1999 and the Aqua spacecraft on April 26, 2002. MODIS scans a swath width sufficient to provide nearly complete global coverage every two days from each polar-orbiting, sun-synchronous, platform at an altitude of 705 km, and provides images in 36 spectral bands between 0.415 and 14.235 microns with spatial resolutions of 250 m (2 bands), 500 m (5 bands) and 1000 m (29 bands). In this paper we will describe the various methods being used for the remote sensing of cloud properties using MODIS data, focusing primarily on the MODIS cloud mask used to distinguish clouds, clear sky, heavy aerosol, and shadows on the ground, and on the remote sensing of cloud optical properties, especially cloud optical thickness and effective radius of water drops and ice crystals. Additional properties of clouds derived from multispectral thermal infrared measurements, especially cloud top pressure and emissivity, will also be described. Results will be presented of MODIS cloud properties both over the land and over the ocean, showing the consistency in cloud retrievals over various ecosystems used in the retrievals. The implications of this new observing system on global analysis of the Earth's environment will be discussed.
New Multispectral Cloud Retrievals from MODIS
NASA Technical Reports Server (NTRS)
King, Michael D.; Platnick, Steven; Tsay, Si-Chee; Ackerman, Steven A.; Menzel, W. Paul; Gray, Mark A.; Moody, Eric G.; Li, Jason Y.; Arnold, G. Thomas
2001-01-01
The Moderate Resolution Imaging Spectroradiometer (MODIS) was developed by NASA and launched onboard the Terra spacecraft on December 18, 1999. It achieved its final orbit and began Earth observations on February 24, 2000. MODIS scans a swath width sufficient to provide nearly complete global coverage every two days from a polar-orbiting, sun- synchronous, platform at an altitude of 705 km, and provides images in 36 spectral bands between 0.415 and 14.235 microns with spatial resolutions of 250 m (two bands), 500 m (five bands) and 1000 m (29 bands). In this paper we will describe the various methods being used for the remote sensing of cloud properties using MODIS data, focusing primarily on the MODIS cloud mask used to distinguish clouds, clear sky, heavy aerosol, and shadows on the ground, and on the remote sensing of cloud optical properties, especially cloud optical thickness and effective radius of water drops and ice crystals. Additional properties of clouds derived from multispectral thermal infrared measurements, especially cloud top pressure and emissivity, will also be described. Results will be presented of MODIS cloud properties both over the land and over the ocean, showing the consistency in cloud retrievals over various ecosystems used in the retrievals. The implications of this new observing system on global analysis of the Earth's environment will be discussed.
NASA Astrophysics Data System (ADS)
Hunger, Sebastian; Karrasch, Pierre; Wessollek, Christine
2016-10-01
The European Water Framework Directive (Directive 2000/60/EC) is a mandatory agreement that guides the member states of the European Union in the field of water policy to fulfill the requirements for reaching the aim of a good ecological status of water bodies. In recent years several workflows and methods have been developed to determine and evaluate the characteristics and the status of water bodies. Due to their areal coverage, remote sensing methods are a promising approach that can add substantial value. With the increasing availability of optical and radar remote sensing data, the development of new methods to extract information from both types of data is still in progress. Since the limitations of the two data sets largely do not coincide, fusing them to obtain data with higher spectral resolution has the potential to provide additional information compared with processing the data sets separately. Building on this, this study investigates the potential of multispectral and radar remote sensing data, and of their fusion, for the assessment of the parameters of water body structure. Because of the medium spatial resolution of the freely available multispectral Sentinel-2 data, the study focuses in particular on the surroundings of the water bodies and their land use. SAR data are provided by the Sentinel-1 satellite. Different image fusion methods are tested, and the combined products of both data sets are evaluated afterwards. The evaluation of the single data sets and the fused data sets is performed by means of a maximum-likelihood classification and several statistical measurements. The results indicate that the combined use of different remote sensing data sets can provide added value.
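The maximum-likelihood evaluation mentioned above rests on a standard Gaussian classifier: each class is modelled by a mean vector and covariance matrix estimated from training pixels of the (fused) band stack, and each pixel is assigned to the class with the highest log-likelihood. A generic sketch, not tied to the Sentinel data of this study:

# Generic Gaussian maximum-likelihood classifier for a stacked (fused) band set.
import numpy as np

def fit_ml_classes(train_pixels):
    """train_pixels: dict class -> (N_c, bands) array. Returns per-class stats."""
    stats = {}
    for c, x in train_pixels.items():
        mu = x.mean(axis=0)
        cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(x.shape[1])  # regularised
        stats[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return stats

def ml_classify(pixels, stats):
    """pixels: (N, bands). Returns the class with maximum log-likelihood per pixel."""
    classes = list(stats)
    scores = []
    for c in classes:
        mu, cov_inv, logdet = stats[c]
        d = pixels - mu
        scores.append(-0.5 * (np.einsum("ij,jk,ik->i", d, cov_inv, d) + logdet))
    return np.array(classes)[np.argmax(np.stack(scores), axis=0)]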
Design and fabrication of multispectral optics using expanded glass map
NASA Astrophysics Data System (ADS)
Bayya, Shyam; Gibson, Daniel; Nguyen, Vinh; Sanghera, Jasbinder; Kotov, Mikhail; Drake, Gryphon; Deegan, John; Lindberg, George
2015-06-01
As the desire to have compact multispectral imagers in various DoD platforms is growing, the dearth of multispectral optics is widely felt. With the limited number of material choices for optics, these multispectral imagers are often very bulky and impractical on several weight sensitive platforms. To address this issue, NRL has developed a large set of unique infrared glasses that transmit from 0.9 to > 14 μm in wavelength and expand the glass map for multispectral optics with refractive indices from 2.38 to 3.17. They show a large spread in dispersion (Abbe number) and offer some unique solutions for multispectral optics designs. The new NRL glasses can be easily molded and also fused together to make bonded doublets. A Zemax compatible glass file has been created and is available upon request. In this paper we present some designs, optics fabrication and imaging, all using NRL materials.
A multispectral photon-counting double random phase encoding scheme for image authentication.
Yi, Faliu; Moon, Inkyu; Lee, Yeon H
2014-05-20
In this paper, we propose a new method for color image-based authentication that combines multispectral photon-counting imaging (MPCI) and double random phase encoding (DRPE) schemes. The sparsely distributed information from MPCI and the stationary white noise signal from DRPE make intruder attacks difficult. In this authentication method, the original multispectral RGB color image is down-sampled into a Bayer image. The three types of color samples (red, green and blue color) in the Bayer image are encrypted with DRPE and the amplitude part of the resulting image is photon counted. The corresponding phase information that has nonzero amplitude after photon counting is then kept for decryption. Experimental results show that the retrieved images from the proposed method do not visually resemble their original counterparts. Nevertheless, the original color image can be efficiently verified with statistical nonlinear correlations. Our experimental results also show that different interpolation algorithms applied to Bayer images result in different verification effects for multispectral RGB color images.
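The DRPE and photon-counting steps can be sketched with FFT-based operations: the primary image is multiplied by a random phase mask, Fourier transformed, multiplied by a second mask and inverse transformed, and photon counting then retains phase information only where at least one photon is registered. The Bayer down-sampling and the nonlinear-correlation verification stage are omitted, and the sketch is not the authors' implementation.

# Sketch of double random phase encoding (DRPE) followed by photon counting.
# Bayer sampling and the nonlinear-correlation verification stage are omitted.
import numpy as np

rng = np.random.default_rng(4)

def drpe_encrypt(img):
    n1 = np.exp(2j * np.pi * rng.random(img.shape))     # input-plane phase mask
    n2 = np.exp(2j * np.pi * rng.random(img.shape))     # Fourier-plane phase mask
    return np.fft.ifft2(np.fft.fft2(img * n1) * n2), (n1, n2)

def photon_count(cipher, expected_photons=1e4):
    """Keep the phase of the cipher only where Poisson photon counts are nonzero."""
    intensity = np.abs(cipher) ** 2
    prob = intensity / intensity.sum()
    photons = rng.poisson(expected_photons * prob)
    sparse_phase = np.where(photons > 0, np.angle(cipher), 0.0)
    return np.exp(1j * sparse_phase) * (photons > 0)

img = rng.random((128, 128))                 # stand-in for one Bayer colour plane
cipher, masks = drpe_encrypt(img)
sparse_cipher = photon_count(cipher)         # what would be stored for verification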
The Beagle 2 Stereo Camera System: Scientific Objectives and Design Characteristics
NASA Astrophysics Data System (ADS)
Griffiths, A.; Coates, A.; Josset, J.; Paar, G.; Sims, M.
2003-04-01
The Stereo Camera System (SCS) will provide wide-angle (48 degree) multi-spectral stereo imaging of the Beagle 2 landing site in Isidis Planitia with an angular resolution of 0.75 milliradians. Based on the SpaceX Modular Micro-Imager, the SCS is composed of twin cameras (with 1024 by 1024 pixel frame transfer CCD) and twin filter wheel units (with a combined total of 24 filters). The primary mission objective is to construct a digital elevation model of the area in reach of the lander’s robot arm. The SCS specifications and following baseline studies are described: Panoramic RGB colour imaging of the landing site and panoramic multi-spectral imaging at 12 distinct wavelengths to study the mineralogy of landing site. Solar observations to measure water vapour absorption and the atmospheric dust optical density. Also envisaged are multi-spectral observations of Phobos & Deimos (observations of the moons relative to background stars will be used to determine the lander’s location and orientation relative to the Martian surface), monitoring of the landing site to detect temporal changes, observation of the actions and effects of the other PAW experiments (including rock texture studies with a close-up-lens) and collaborative observations with the Mars Express orbiter instrument teams. Due to be launched in May of this year, the total system mass is 360 g, the required volume envelope is 747 cm^3 and the average power consumption is 1.8 W. A 10 Mbit/s RS422 bus connects each camera to the lander common electronics.
Unmixing techniques for better segmentation of urban zones, roads, and open pit mines
NASA Astrophysics Data System (ADS)
Nikolov, Hristo; Borisova, Denitsa; Petkov, Doyno
2010-10-01
In this paper the linear unmixing method has been applied to the classification of man-made objects, namely urbanized zones, roads, etc. The idea is to exploit to a larger extent the possibilities offered by multispectral imagers with medium spatial resolution, in this case the TM/ETM+ instruments. In this research unmixing is used to find consistent regression dependencies between multispectral data and data gathered by in-situ and airborne sensors. The correct identification of the mixed pixels is a key element, since the subsequent segmentation forming the shape of the artificial feature is then determined much more reliably. This especially holds true for objects with a relatively narrow structure, for example two-lane roads, for which the spatial resolution is coarser than the object itself. We have combined ground spectrometry of asphalt, Landsat images of the region of interest, and in-situ measured asphalt in order to delineate the narrow roads. The reflectance of paving stones made from granite is the highest compared to the other materials considered, which also holds for open pits and stone quarries. The potential for mapping is not limited to the medium-resolution Landsat data, but also applies to data of higher spatial resolution (as fine as 0.5 m). In this research the spectral and directional reflection properties of asphalt and concrete surfaces, compared to those of paving stones made from different rocks, have been measured. The in-situ measurements, which play a key role, were obtained using the Thematically Oriented Multichannel Spectrometer (TOMS) designed at STIL-BAS.
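Linear unmixing itself reduces to estimating, for every pixel, non-negative fractional abundances of a few endmember spectra (asphalt, vegetation, soil, etc.). A per-pixel non-negative least-squares sketch is shown below, with made-up endmember spectra standing in for the field/TOMS measurements.

# Per-pixel linear spectral unmixing with non-negative least squares.
# Endmember spectra below are made-up stand-ins for field/TOMS measurements.
import numpy as np
from scipy.optimize import nnls

endmembers = np.array([          # rows: asphalt, vegetation, bare soil
    [0.06, 0.05, 0.04, 0.05, 0.07, 0.08],
    [0.03, 0.05, 0.04, 0.45, 0.48, 0.30],
    [0.10, 0.14, 0.18, 0.25, 0.30, 0.33],
]).T                             # shape after transpose: (bands, endmembers)

def unmix_pixel(spectrum):
    """Return sum-to-one normalised abundance fractions for one pixel."""
    abundances, _ = nnls(endmembers, spectrum)
    s = abundances.sum()
    return abundances / s if s > 0 else abundances

mixed = 0.6 * endmembers[:, 0] + 0.4 * endmembers[:, 2]   # 60% asphalt, 40% soil
print(unmix_pixel(mixed))    # approximately [0.6, 0.0, 0.4]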
NASA Astrophysics Data System (ADS)
Moonon, Altan-Ulzii; Hu, Jianwen; Li, Shutao
2015-12-01
Remote sensing image fusion is an important preprocessing technique in remote sensing image processing. In this paper, a remote sensing image fusion method based on the nonsubsampled shearlet transform (NSST) with sparse representation (SR) is proposed. Firstly, the low resolution multispectral (MS) image is upsampled and its color space is transformed from Red-Green-Blue (RGB) to Intensity-Hue-Saturation (IHS). Then, the high resolution panchromatic (PAN) image and the intensity component of the MS image are decomposed by the NSST into high and low frequency coefficients. The low frequency coefficients of the PAN image and the intensity component are fused by SR with a learned dictionary. The high frequency coefficients of the intensity component and the PAN image are fused by a local-energy-based fusion rule. Finally, the fused result is obtained by performing the inverse NSST and the inverse IHS transform. The experimental results on IKONOS and QuickBird images demonstrate that the proposed method provides better spectral quality and superior spatial information in the fused image than other remote sensing image fusion methods, in both visual effect and objective evaluation.
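The colour-space portion of the pipeline can be illustrated with the widely used fast-IHS formulation, in which the intensity is the band mean and fusion amounts to adding the difference between the histogram-matched PAN image and the intensity back to each band. The NSST decomposition and sparse-representation fusion of the paper are not reproduced here.

# Fast-IHS fusion sketch: I = mean(R,G,B); each band gets PAN - I added back.
# The NSST + sparse-representation fusion of the paper is not reproduced.
import numpy as np

def fast_ihs_fusion(ms_up, pan):
    """ms_up: upsampled MS image (H, W, 3) in [0,1]; pan: (H, W) in [0,1]."""
    intensity = ms_up.mean(axis=-1)
    # Match PAN to the intensity component (mean/std matching).
    pan_m = (pan - pan.mean()) * (intensity.std() / (pan.std() + 1e-12)) + intensity.mean()
    detail = pan_m - intensity
    return np.clip(ms_up + detail[..., None], 0.0, 1.0)

rng = np.random.default_rng(5)
ms_up = rng.random((256, 256, 3))     # stand-in upsampled multispectral image
pan = rng.random((256, 256))          # stand-in panchromatic image
fused = fast_ihs_fusion(ms_up, pan)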
CRISM's Global Mapping of Mars, Part 3
NASA Technical Reports Server (NTRS)
2007-01-01
After a year in Mars orbit, CRISM has taken enough images to allow the team to release the first parts of a global spectral map of Mars to the Planetary Data System (PDS), NASA's digital library of planetary data. CRISM's global mapping is called the 'multispectral survey.' The team uses the word 'survey' because a reason for gathering this data set is to search for new sites for targeted observations, high-resolution views of the surface at 18 meters per pixel in 544 colors. Another reason for the multispectral survey is to provide contextual information. Targeted observations have such a large data volume (about 200 megabytes apiece) that only about 1% of Mars can be imaged at CRISM's highest resolution. The multispectral survey is a lower data volume type of observation that fills in the gaps between targeted observations, allowing scientists to better understand their geologic context. The global map is built from tens of thousands of image strips each about 10 kilometers (6.2 miles) wide and thousands of kilometers long. During the multispectral survey, CRISM returns data from only 72 carefully selected wavelengths that cover absorptions indicative of the mineral groups that CRISM is looking for on Mars. Data volume is further decreased by binning image pixels inside the instrument to a scale of about 200 meters (660 feet) per pixel. The total reduction in data volume per square kilometer is a factor of 700, making the multispectral survey manageable to acquire and transmit to Earth. Once on the ground, the strips of data are mosaicked into maps. The multispectral survey is too large to show the whole planet in a single map, so the map is divided into 1,964 'tiles,' each about 300 kilometers (186 miles) across. There are three versions of each tile, processed to progressively greater levels to strip away the obscuring effects of the dusty atmosphere and to highlight mineral variations in surface materials. This is the third and most processed version of tile 750, showing a part of Mars called Tyrrhena Terra in the ancient, heavily cratered highlands. The colored strips are CRISM multispectral survey data acquired over several months, in which each pixel began as calibrated 72-color spectrum of Mars. An experimental correction for illumination and atmospheric effects was applied to the data, to show how Mars' surface would appear if each strip was imaged with the same illumination and without an atmosphere. Then, the spectrum for each pixel was transformed into a set of 'summary parameters,' which indicate absorptions showing the presence of different minerals. Detections of the igneous, iron-bearing minerals olivine and pyroxene are shown in the red and blue image planes, respectively. Clay-like minerals called phyllosilicates, which formed when liquid water altered the igneous rocks, are shown in the green image plane. The gray areas between the strips are from an earlier mosaic of the planet taken by the Thermal Emission Imaging System (THEMIS) instrument on Mars Odyssey, and are included for context. Note that most areas imaged by CRISM contain pyroxene, and that olivine-containing rocks are concentrated on smooth deposits that fill some crater floors and the low areas between craters. Phyllosilicate-containing rocks are concentrated in and around small craters, such as the one at 13 degrees south latitude, 97 degrees east longitude. 
Their concentration in crater materials suggests that they were excavated when the craters formed, from a layer that was buried by the younger, less altered, olivine- and pyroxene-containing rocks. CRISM is one of six science instruments on NASA's Mars Reconnaissance Orbiter. Led by The Johns Hopkins University Applied Physics Laboratory, Laurel, Md., the CRISM team includes expertise from universities, government agencies and small businesses in the United States and abroad. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter and the Mars Science Laboratory for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, built the orbiter.
Analysis of multispectral and hyperspectral longwave infrared (LWIR) data for geologic mapping
NASA Astrophysics Data System (ADS)
Kruse, Fred A.; McDowell, Meryl
2015-05-01
Multispectral MODIS/ASTER Airborne Simulator (MASTER) data and Hyperspectral Thermal Emission Spectrometer (HyTES) data covering the 8 - 12 μm spectral range (longwave infrared or LWIR) were analyzed for an area near Mountain Pass, California. Decorrelation stretched images were initially used to highlight spectral differences between geologic materials. Both datasets were atmospherically corrected using the ISAC method, and the Normalized Emissivity approach was used to separate temperature and emissivity. The MASTER data had 10 LWIR spectral bands and approximately 35-meter spatial resolution and covered a larger area than the HyTES data, which were collected with 256 narrow (approximately 17nm-wide) spectral bands at approximately 2.3-meter spatial resolution. Spectra for key spatially-coherent, spectrally-determined geologic units for overlap areas were overlain and visually compared to determine similarities and differences. Endmember spectra were extracted from both datasets using n-dimensional scatterplotting and compared to emissivity spectral libraries for identification. Endmember distributions and abundances were then mapped using Mixture-Tuned Matched Filtering (MTMF), a partial unmixing approach. Multispectral results demonstrate separation of silica-rich vs non-silicate materials, with distinct mapping of carbonate areas and general correspondence to the regional geology. Hyperspectral results illustrate refined mapping of silicates with distinction between similar units based on the position, character, and shape of high resolution emission minima near 9 μm. Calcite and dolomite were separated, identified, and mapped using HyTES based on a shift of the main carbonate emissivity minimum from approximately 11.3 to 11.2 μm respectively. Both datasets demonstrate the utility of LWIR spectral remote sensing for geologic mapping.
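As a rough illustration of the kind of diagnostic feature described above (the shift of the main carbonate emissivity minimum between roughly 11.3 μm for calcite and 11.2 μm for dolomite), the following minimal Python sketch locates the emissivity minimum of a per-pixel spectrum inside a fixed wavelength window. It assumes NumPy and a HyTES-like wavelength grid; the function name and the synthetic spectrum are illustrative only, not taken from the paper.

    import numpy as np

    def carbonate_minimum_wavelength(wavelengths_um, emissivity, lo=11.0, hi=11.5):
        """Locate the carbonate emissivity minimum inside a wavelength window (micrometers).

        A minimum near ~11.3 um would suggest calcite, near ~11.2 um dolomite,
        in the spirit of the discrimination described in the abstract above."""
        wavelengths_um = np.asarray(wavelengths_um, dtype=float)
        emissivity = np.asarray(emissivity, dtype=float)
        window = (wavelengths_um >= lo) & (wavelengths_um <= hi)
        idx = np.argmin(emissivity[window])      # deepest emission minimum in the window
        return wavelengths_um[window][idx]

    # Hypothetical spectrum on a HyTES-like grid with a synthetic calcite-like dip at 11.3 um
    wl = np.linspace(8.0, 12.0, 256)
    spectrum = 0.97 - 0.05 * np.exp(-0.5 * ((wl - 11.3) / 0.05) ** 2)
    print(carbonate_minimum_wavelength(wl, spectrum))   # prints a value close to 11.3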
NASA Astrophysics Data System (ADS)
Smith, W.; Weisz, E.; McNabb, J. M. C.
2017-12-01
A technique is described which enables the combination of high vertical resolution (1 to 2-km) JPSS hyper-spectral soundings (i.e., from AIRS, CrIS, and IASI) with high horizontal (2-km) and temporal (15-min) resolution GOES multi-spectral imagery (i.e., provided by ABI) to produce low latency sounding products with the highest possible spatial and temporal resolution afforded by the instruments.
Landsat image data quality studies
NASA Technical Reports Server (NTRS)
Schueler, C. F.; Salomonson, V. V.
1985-01-01
Preliminary results of the Landsat-4 Image Data Quality Analysis (LIDQA) program to characterize data obtained with the Thematic Mapper (TM) instrument on board the Landsat-4 and Landsat-5 satellites are reported. The acquired data were compared to the TM design specifications with respect to four criteria: spatial resolution, geometric fidelity, information content, and quality relative to Multispectral Scanner (MSS) data. The overall performance of the TM was rated excellent despite minor instabilities and radiometric anomalies in the data. Spatial performance of the TM exceeded design specifications in terms of both image sharpness and geometric accuracy, and the image utility of the TM data was at least twice that of MSS data. The separability of alfalfa and sugar beet fields in a TM image is demonstrated.
Torres-Sánchez, Jorge; López-Granados, Francisca; De Castro, Ana Isabel; Peña-Barragán, José Manuel
2013-01-01
A new aerial platform for image acquisition has recently emerged: the Unmanned Aerial Vehicle (UAV). This article describes the technical specifications and configuration of a UAV used to capture remote images for early season site-specific weed management (ESSWM). Image spatial and spectral properties required for weed seedling discrimination were also evaluated. Two different sensors, a still visible camera and a six-band multispectral camera, and three flight altitudes (30, 60 and 100 m) were tested over a naturally infested sunflower field. The main phases of the UAV workflow were the following: 1) mission planning, 2) UAV flight and image acquisition, and 3) image pre-processing. Three different aspects were needed to plan the route: flight area, camera specifications and UAV tasks. The pre-processing phase included the correct alignment of the six bands of the multispectral imagery and the orthorectification and mosaicking of the individual images captured in each flight. The image pixel size, area covered by each image and flight timing were very sensitive to flight altitude. At a lower altitude, the UAV captured images of finer spatial resolution, although the number of images needed to cover the whole field may be a limiting factor due to the energy required for a greater flight length and the computational requirements of the subsequent mosaicking process. Spectral differences between weeds, crop and bare soil were significant in the vegetation indices studied (Excess Green Index, Normalised Green-Red Difference Index and Normalised Difference Vegetation Index), mainly at a 30 m altitude. However, greater spectral separability was obtained between vegetation and bare soil with the index NDVI. These results suggest that a balance between spectral and spatial resolutions is needed to optimise the flight mission for each agronomic objective, as determined by the size of the smallest object to be discriminated (weed plants or weed patches).
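The three vegetation indices named in the abstract can be computed directly from per-band reflectance arrays. The sketch below (Python with NumPy; the band names and the epsilon guard are assumptions, not taken from the study) is a minimal version of that step:

    import numpy as np

    def vegetation_indices(red, green, blue, nir, eps=1e-6):
        """Excess Green, Normalised Green-Red Difference and NDVI from reflectance bands."""
        red, green, blue, nir = (np.asarray(b, dtype=float) for b in (red, green, blue, nir))
        total = red + green + blue + eps
        r, g, b = red / total, green / total, blue / total     # chromatic coordinates
        exg = 2.0 * g - r - b                                  # Excess Green Index
        ngrdi = (green - red) / (green + red + eps)            # Normalised Green-Red Difference Index
        ndvi = (nir - red) / (nir + red + eps)                 # Normalised Difference Vegetation Index
        return exg, ngrdi, ndvi

Thresholding any of these index maps would then separate vegetation (weeds and crop) from bare soil, with NDVI giving the clearest separation according to the results above.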
Theory on data processing and instrumentation. [remote sensing
NASA Technical Reports Server (NTRS)
Billingsley, F. C.
1978-01-01
A selection of NASA Earth observations programs is reviewed, emphasizing hardware capabilities. Sampling theory, noise and detection considerations, and image evaluation are discussed for remote sensor imagery. Vision and perception are considered, leading to numerical image processing. The use of multispectral scanners and of multispectral data processing systems, including digital image processing, is described. Multispectral sensing and analysis as applied to land use and geographical data systems are also covered.
Muldoon, Timothy J; Polydorides, Alexandros D; Maru, Dipen M; Harpaz, Noam; Harris, Michael T; Hofstettor, Wayne; Hiotis, Spiros P; Kim, Sanghyun A; Ky, Alex J; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca
2012-01-01
Background: Confocal endomicroscopy has revolutionized endoscopy by offering sub-cellular images of gastrointestinal epithelium; however, field-of-view is limited. There is a need for multi-scale endoscopy platforms that use widefield imaging to better direct placement of high-resolution probes. Design: Feasibility study. Objective: This study evaluates the feasibility of a single agent, proflavine hemisulfate, as a contrast medium during both widefield and high resolution imaging to characterize morphologic changes associated with a variety of gastrointestinal conditions. Setting: U.T. M.D. Anderson Cancer Center (Houston, TX) and Mount Sinai Medical Center (New York, NY). Patients, Interventions, and Main Outcome Measurements: Surgical specimens were obtained from 15 patients undergoing esophagectomy/colectomy. Proflavine, a vital fluorescent dye, was applied topically. Specimens were imaged with a widefield multispectral microscope and a high-resolution microendoscope. Images were compared to histopathology. Results: Widefield-fluorescence imaging enhanced visualization of morphology, including the presence and spatial distribution of glands, glandular distortion, atrophy and crowding. High-resolution imaging of widefield-abnormal areas revealed that neoplastic progression corresponded to glandular heterogeneity and nuclear crowding in dysplasia, with glandular effacement in carcinoma. These widefield and high-resolution image features correlated well with histopathology. Limitations: This imaging approach must be validated in vivo with a larger sample size. Conclusions: Multi-scale proflavine-enhanced fluorescence imaging can delineate epithelial changes in a variety of gastrointestinal conditions. Distorted glandular features seen with widefield imaging could serve as a critical ‘bridge’ to high-resolution probe placement. An endoscopic platform combining the two modalities with a single vital-dye may facilitate point-of-care decision-making by providing real-time, in vivo diagnoses. PMID:22301343
Vital-dye enhanced fluorescence imaging of GI mucosa: metaplasia, neoplasia, inflammation.
Thekkek, Nadhi; Muldoon, Timothy; Polydorides, Alexandros D; Maru, Dipen M; Harpaz, Noam; Harris, Michael T; Hofstettor, Wayne; Hiotis, Spiros P; Kim, Sanghyun A; Ky, Alex Jenny; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca
2012-04-01
Confocal endomicroscopy has revolutionized endoscopy by offering subcellular images of the GI epithelium; however, the field of view is limited. Multiscale endoscopy platforms that use widefield imaging are needed to better direct the placement of high-resolution probes. Design: Feasibility study. This study evaluated the feasibility of a single agent, proflavine hemisulfate, as a contrast medium during both widefield and high-resolution imaging to characterize the morphologic changes associated with a variety of GI conditions. Setting: The University of Texas MD Anderson Cancer Center, Houston, Texas, and Mount Sinai Medical Center, New York, New York. Patients, Interventions, and Main Outcome Measurements: Resected specimens were obtained from 15 patients undergoing EMR, esophagectomy, or colectomy. Proflavine hemisulfate, a vital fluorescent dye, was applied topically. The specimens were imaged with a widefield multispectral microscope and a high-resolution microendoscope. The images were compared with histopathologic examination. Widefield fluorescence imaging enhanced visualization of morphology, including the presence and spatial distribution of glands, glandular distortion, atrophy, and crowding. High-resolution imaging of widefield abnormal areas revealed that neoplastic progression corresponded to glandular heterogeneity and nuclear crowding in dysplasia, with glandular effacement in carcinoma. These widefield and high-resolution image features correlated well with the histopathologic features. This imaging approach must be validated in vivo with a larger sample size. Multiscale proflavine-enhanced fluorescence imaging can delineate epithelial changes in a variety of GI conditions. Distorted glandular features seen with widefield imaging could serve as a critical bridge to high-resolution probe placement. An endoscopic platform combining the two modalities with a single vital dye may facilitate point-of-care decision making by providing real-time, in vivo diagnoses. Copyright © 2012 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.
Multispectral Image Compression Based on DSC Combined with CCSDS-IDC
Li, Jin; Xing, Fei; Sun, Ting; You, Zheng
2014-01-01
A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually runs on a satellite where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (such as the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by a DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is tightly coupled with a Slepian-Wolf (SW) DSC strategy based on QC-LDPC codes to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS) algorithm has better compression performance than traditional compression approaches. PMID:25110741
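The per-band front end of the pipeline described above (a 2-D DWT followed by bit-plane encoding) can be sketched as follows; this is only a simplified illustration under assumed parameters, using the PyWavelets package, and it omits the Slepian-Wolf/QC-LDPC distributed-coding stage entirely:

    import numpy as np
    import pywt  # PyWavelets

    def band_to_bitplanes(band, levels=3, n_planes=8):
        """2-D DWT of one spectral band, then MSB-first bit planes of coarsely quantized coefficients."""
        coeffs = pywt.wavedec2(band.astype(float), wavelet='db4', level=levels)
        arr, _ = pywt.coeffs_to_array(coeffs)              # pack all subbands into a single array
        signs = arr < 0
        q = np.abs(arr).astype(np.uint16)                  # crude magnitude quantization for the sketch
        planes = [(q >> p) & 1 for p in range(n_planes - 1, -1, -1)]   # most significant plane first
        return signs, planes

In the actual algorithm the bit planes would feed the BPE and Slepian-Wolf stage so that inter-band redundancy is removed at the decoder side.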
Minimally invasive multimode optical fiber microendoscope for deep brain fluorescence imaging
Ohayon, Shay; Caravaca-Aguirre, Antonio; Piestun, Rafael; DiCarlo, James J.
2018-01-01
A major open challenge in neuroscience is the ability to measure and perturb neural activity in vivo from well-defined neural sub-populations at cellular resolution anywhere in the brain. However, limitations posed by scattering and absorption prohibit non-invasive multi-photon approaches for deep (>2 mm) structures, while gradient refractive index (GRIN) endoscopes are relatively thick and can cause significant damage upon insertion. Here, we present a novel micro-endoscope design to image neural activity at arbitrary depths via an ultra-thin multi-mode optical fiber (MMF) probe with a diameter 5-10X thinner than that of commercially available micro-endoscopes. We demonstrate micron-scale resolution, multi-spectral and volumetric imaging. In contrast to previous approaches, we show that this method has an improved acquisition speed that is sufficient to capture rapid neuronal dynamics in vivo in rodents expressing a genetically encoded calcium indicator (GCaMP). Our results emphasize the potential of this technology in neuroscience applications and open up possibilities for cellular resolution imaging in previously unreachable brain regions. PMID:29675297
Hu, J H; Wang, Y; Cahill, P T
1997-01-01
This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further encoded using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256×256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
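A stripped-down illustration of the forward-adaptive inter-band prediction idea (per-macroblock least-squares AR coefficients, with the residual left for later excitation coding) might look like the Python/NumPy sketch below; all names and the block layout are assumptions for illustration, and the analysis-by-synthesis excitation search and vector quantization stages are not shown:

    import numpy as np

    def forward_adaptive_prediction(block, order=2):
        """Predict each band of a 3-D macroblock (bands, H, W) from the preceding `order`
        bands using least-squares coefficients; returns the prediction residual."""
        bands, h, w = block.shape
        residual = np.zeros_like(block, dtype=float)
        residual[:order] = block[:order]                   # first bands kept as-is in this sketch
        for k in range(order, bands):
            X = block[k - order:k].reshape(order, -1).T    # predictors: previous bands, one row per pixel
            y = block[k].ravel()
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # forward-adaptive AR coefficients for this block
            residual[k] = (y - X @ coef).reshape(h, w)
        return residual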
Galileo multispectral imaging of Earth.
Geissler, P; Thompson, W R; Greenberg, R; Moersch, J; McEwen, A; Sagan, C
1995-08-25
Nearly 6000 multispectral images of Earth were acquired by the Galileo spacecraft during its two flybys. The Galileo images offer a unique perspective on our home planet through the spectral capability made possible by four narrowband near-infrared filters, intended for observations of methane in Jupiter's atmosphere, which are not incorporated in any of the currently operating Earth orbital remote sensing systems. Spectral variations due to mineralogy, vegetative cover, and condensed water are effectively mapped by the visible and near-infrared multispectral imagery, showing a wide variety of biological, meteorological, and geological phenomena. Global tectonic and volcanic processes are clearly illustrated by these images, providing a useful basis for comparative planetary geology. Differences between plant species are detected through the narrowband IR filters on Galileo, allowing regional measurements of variation in the "red edge" of chlorophyll and the depth of the 1-micrometer water band, which is diagnostic of leaf moisture content. Although evidence of life is widespread in the Galileo data set, only a single image (at approximately 2 km/pixel) shows geometrization plausibly attributable to our technical civilization. Water vapor can be uniquely imaged in the Galileo 0.73-micrometer band, permitting spectral discrimination of moist and dry clouds with otherwise similar albedo. Surface snow and ice can be readily distinguished from cloud cover by narrowband imaging within the sensitivity range of Galileo's silicon CCD camera. Ice grain size variations can be mapped using the weak H2O absorption at 1 micrometer, a technique which may find important applications in the exploration of the moons of Jupiter. The Galileo images have the potential to make unique contributions to Earth science in the areas of geological, meteorological and biological remote sensing, due to the inclusion of previously untried narrowband IR filters. The vast scale and near global coverage of the Galileo data set complements the higher-resolution data from Earth orbiting systems and may provide a valuable reference point for future studies of global change.
Skin condition measurement by using multispectral imaging system (Conference Presentation)
NASA Astrophysics Data System (ADS)
Jung, Geunho; Kim, Sungchul; Kim, Jae Gwan
2017-02-01
A number of low level light therapy (LLLT) devices are commercially available, and face whitening or wrinkle reduction is one of the targets of LLLT. Facial improvement can be assessed simply by visual observation of the face, but this provides neither quantitative data nor the ability to recognize subtle changes. Clinical diagnostic instruments such as a mexameter can provide quantitative data, but they are too costly for home users. Therefore, we designed a low-cost multi-spectral imaging device by adding additional LEDs (470 nm, 640 nm, white LED, 905 nm) to a commercial USB microscope which has two LEDs (395 nm, 940 nm) as light sources. Among the various LLLT skin treatments, we focused on obtaining melanin and wrinkle information. For melanin index measurements, multi-spectral images of nevi were acquired, and melanin index values from a color image (the conventional method) and from multi-spectral images were compared. The results showed that multi-spectral analysis of the melanin index can visualize nevi of different depth and concentration. The cross section of a wrinkle on skin resembles a wedge, which is a source of high-frequency components when the skin image is Fourier transformed into a spatial frequency domain map. In that case, the entropy of the spatial frequency map represents the frequency distribution, which is related to the amount and thickness of wrinkles. Entropy values from multi-spectral images can potentially separate the contribution of thin, shallow wrinkles from that of thick, deep wrinkles. From these results, we found that this low-cost multi-spectral imaging system could be beneficial for home users of LLLT by quantifying treatment efficacy.
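The wrinkle metric described above (entropy of the spatial-frequency map of a skin image) can be sketched in a few lines of Python/NumPy; the normalization and base-2 logarithm are assumptions of this illustration, since the abstract does not give the exact definition:

    import numpy as np

    def spectral_entropy(gray_image):
        """Shannon entropy of the normalized magnitude spectrum of a grayscale skin image.

        Wedge-like wrinkle cross sections add high-frequency energy, so a broader
        spectrum (higher entropy) serves here as a rough wrinkle indicator."""
        f = np.fft.fftshift(np.fft.fft2(gray_image.astype(float)))
        p = np.abs(f) ** 2
        p /= p.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())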
NASA Astrophysics Data System (ADS)
Zemp, Roger J.; Paproski, Robert J.
2017-03-01
For emerging tissue-engineering applications, transplants, and cell-based therapies it is important to assess cell viability and function in vivo in deep tissues. Bioluminescence and fluorescence methods are poorly suited to deep, high-resolution monitoring applications and require genetically engineered reporters, which are not always feasible. We report on a method for imaging cell viability using deep, high-resolution photoacoustic imaging. We use an exogenous dye, resazurin, which is weakly fluorescent until it is reduced from its blue form to a pink product with bright red fluorescence. Upon cell death, fluorescence is lost and an absorption shift is observed. The irreversible reduction of resazurin to resorufin is proportional to aerobic respiration. We detect colorimetric absorption shifts using multispectral photoacoustic imaging and quantify the fraction of viable cells. SKOV-3 cells with and without 80 °C heat treatment were imaged after resazurin treatment. High 575 nm:620 nm ratiometric absorption and photoacoustic signals were observed in viable cells, with a much lower ratio in low-viability populations.
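A minimal sketch of the ratiometric readout mentioned above follows (Python/NumPy); the threshold value is purely hypothetical and would have to be calibrated against the imaging system:

    import numpy as np

    def viability_ratio(pa_575, pa_620, eps=1e-9):
        """Per-pixel 575 nm : 620 nm photoacoustic signal ratio; higher values indicate
        the resorufin-like absorption of viable, respiring cells."""
        return np.asarray(pa_575, dtype=float) / (np.asarray(pa_620, dtype=float) + eps)

    def viable_fraction(ratio_map, threshold=1.5):
        """Fraction of pixels whose ratio exceeds a (hypothetical) viability threshold."""
        return float((np.asarray(ratio_map) > threshold).mean())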
a Comprehensive Review of Pansharpening Algorithms for GÖKTÜRK-2 Satellite Images
NASA Astrophysics Data System (ADS)
Kahraman, S.; Ertürk, A.
2017-11-01
In this paper, a comprehensive review and performance evaluation of pansharpening algorithms for GÖKTÜRK-2 images is presented. GÖKTÜRK-2 is the first high-resolution remote sensing satellite of Turkey, designed and built in Turkey jointly by the Ministry of Defence, TUBITAK-UZAY, and Turkish Aerospace Industry (TUSAŞ). GÖKTÜRK-2 was launched on 18 December 2012 from Jiuquan, China, and provides satellite images with 2.5-meter panchromatic (PAN) and 5-meter multispectral (MS) spatial resolution. In this study, a large number of pansharpening algorithms are implemented and their performance is evaluated on multiple GÖKTÜRK-2 satellite images. Quality assessments are conducted both qualitatively through visual results and quantitatively using the Root Mean Square Error (RMSE), Correlation Coefficient (CC), Spectral Angle Mapper (SAM), Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS), Peak Signal to Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Universal Image Quality Index (UIQI).
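Two of the quantitative measures listed above, SAM and ERGAS, are straightforward to compute once a reference multispectral image and the pansharpened result are co-registered. A minimal Python/NumPy sketch is given below; the (H, W, B) cube layout and the resolution-ratio argument (e.g. 2.5 m PAN / 5 m MS = 0.5 for GÖKTÜRK-2) are assumptions of this illustration:

    import numpy as np

    def sam(ref, fused, eps=1e-12):
        """Mean Spectral Angle Mapper, in radians, between two (H, W, B) reflectance cubes."""
        dot = (ref * fused).sum(axis=-1)
        denom = np.linalg.norm(ref, axis=-1) * np.linalg.norm(fused, axis=-1) + eps
        return float(np.mean(np.arccos(np.clip(dot / denom, -1.0, 1.0))))

    def ergas(ref, fused, ratio):
        """ERGAS for (H, W, B) cubes; `ratio` is the PAN/MS pixel-size ratio (0.5 for 2.5 m / 5 m)."""
        rmse = np.sqrt(((ref - fused) ** 2).mean(axis=(0, 1)))   # per-band RMSE
        means = ref.mean(axis=(0, 1))                            # per-band mean of the reference
        return float(100.0 * ratio * np.sqrt(np.mean((rmse / means) ** 2)))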
NASA Astrophysics Data System (ADS)
Cho, Byoung-Kwan; Kim, Moon S.; Chen, Yud-Ren
2005-11-01
Emerging concerns about safety and security in the current mass production of food products necessitate rapid and reliable inspection for contaminant-free products. Diluted fecal residues on poultry processing plant equipment surfaces, not easily discernible from water by the human eye, are contamination sources for poultry carcasses. Development of sensitive detection methods for fecal residues is essential to ensure safe production of poultry carcasses. Hyperspectral imaging techniques have shown good potential for detecting the presence of fecal and other biological substances on food and processing equipment surfaces. In this study, the use of high spatial resolution hyperspectral reflectance and fluorescence imaging (with UV-A excitation) is presented as a tool for selecting a few multispectral bands to detect diluted fecal and ingesta residues on materials used for manufacturing processing equipment. Reflectance and fluorescence imaging methods were compared for potential detection of a range of diluted fecal residues on the surfaces of processing plant equipment. Results showed that low concentrations of poultry feces and ingesta, diluted up to 1:100 by weight with double distilled water, could be detected using hyperspectral fluorescence images with an accuracy of 97.2%. The spectral bands determined in this study could be used for developing a real-time multispectral inspection device for detection of harmful organic residues on processing plant equipment.
Ultrahigh resolution photographic films for X-ray/EUV/FUV astronomy
NASA Technical Reports Server (NTRS)
Hoover, Richard B.; Walker, Arthur B. C., Jr.; Deforest, Craig E.; Watts, Richard; Tarrio, Charles
1993-01-01
The quest for ultrahigh resolution full-disk images of the sun at soft X-ray/EUV/FUV wavelengths has increased the demand for photographic films with broad spectral sensitivity, high spatial resolution, and wide dynamic range. These requirements were made more stringent by the recent development of multilayer telescopes and coronagraphs capable of operating at normal incidence at soft X-ray/EUV wavelengths. Photographic films are the only detectors now available with the information storage capacity and dynamic range such as is required for recording images of the solar disk and corona simultaneously with sub arc second spatial resolution. During the Stanford/MSFC/LLNL Rocket X-Ray Spectroheliograph and Multi-Spectral Solar Telescope Array (MSSTA) programs, we utilized photographic films to obtain high resolution full-disk images of the sun at selected soft X-ray/EUV/FUV wavelengths. In order to calibrate our instrumentation for quantitative analysis of our solar data and to select the best emulsions and processing conditions for the MSSTA reflight, we recently tested several photographic films. These studies were carried out at the NIST SURF II synchrotron and the Stanford Synchrotron Radiation Laboratory. In this paper, we provide the results of those investigations.
Spatial, Temporal and Spectral Satellite Image Fusion via Sparse Representation
NASA Astrophysics Data System (ADS)
Song, Huihui
Remote sensing provides good measurements for monitoring and further analyzing climate change, ecosystem dynamics, and human activities at global or regional scales. Over the past two decades, the number of launched satellite sensors has been increasing with the development of aerospace technologies and the growing requirements for remote sensing data in a vast number of application fields. However, a key technological challenge confronting these sensors is that they trade off spatial resolution against other properties, including temporal resolution, spectral resolution, swath width, etc., due to the limitations of hardware technology and budget constraints. To increase the spatial resolution of data while preserving these other good properties, one possible cost-effective solution is to explore data integration methods that can fuse multi-resolution data from multiple sensors, thereby enhancing the application capabilities of available remote sensing data. In this thesis, we propose to fuse spatial resolution with temporal resolution and spectral resolution, respectively, based on sparse representation theory. Taking the case of Landsat ETM+ (with a spatial resolution of 30 m and a temporal resolution of 16 days) and MODIS (with a spatial resolution of 250 m ~ 1 km and daily temporal resolution) reflectance, we propose two spatial-temporal fusion methods to combine the fine spatial information of the Landsat image and the daily temporal resolution of the MODIS image. Motivated by the fact that the images from these two sensors are comparable on corresponding bands, we propose to link their spatial information on an available Landsat-MODIS image pair (captured on a prior date) and then predict the Landsat image from the MODIS counterpart on the prediction date. To learn the spatial details from the prior images well, we use a redundant dictionary to extract the basic representation atoms for both Landsat and MODIS images based on sparse representation. Under the scenario of two prior Landsat-MODIS image pairs, we build the corresponding relationship between the difference images of MODIS and ETM+ by training a low- and high-resolution dictionary pair from the given prior image pairs. In the second scenario, i.e., only one Landsat-MODIS image pair being available, we directly correlate MODIS and ETM+ data through an image degradation model. The fusion stage is then achieved by super-resolving the MODIS image combined with high-pass modulation in a two-layer fusion framework. Remarkably, the proposed spatial-temporal fusion methods form a unified framework for blending remote sensing images with phenology change or land-cover-type change. Based on the proposed spatial-temporal fusion models, we propose to monitor land use/land cover changes in Shenzhen, China. As a fast-growing city, Shenzhen faces the problem of detecting rapid changes for both rational city planning and sustainable development. However, the cloudy and rainy weather in the region where Shenzhen is located makes the acquisition cycle of high-quality satellite images longer than the sensors' normal revisit periods. Spatial-temporal fusion methods can tackle this problem by improving the spatial resolution of images with coarse spatial resolution but frequent temporal coverage, thereby making the detection of rapid changes possible. On two Landsat-MODIS datasets with annual and monthly changes, respectively, we apply the proposed spatial-temporal fusion methods to the task of multiple change detection.
Afterward, we propose a novel spatial and spectral fusion method for satellite multispectral and hyperspectral (or high-spectral) images based on dictionary-pair learning and sparse non-negative matrix factorization. By combining the spectral information from hyperspectral image, which is characterized by low spatial resolution but high spectral resolution and abbreviated as LSHS, and the spatial information from multispectral image, which is featured by high spatial resolution but low spectral resolution and abbreviated as HSLS, this method aims to generate the fused data with both high spatial and high spectral resolutions. Motivated by the observation that each hyperspectral pixel can be represented by a linear combination of a few endmembers, this method first extracts the spectral bases of LSHS and HSLS images by making full use of the rich spectral information in LSHS data. The spectral bases of these two categories data then formulate a dictionary-pair due to their correspondence in representing each pixel spectra of LSHS data and HSLS data, respectively. Subsequently, the LSHS image is spatially unmixed by representing the HSLS image with respect to the corresponding learned dictionary to derive its representation coefficients. Combining the spectral bases of LSHS data and the representation coefficients of HSLS data, we finally derive the fused data characterized by the spectral resolution of LSHS data and the spatial resolution of HSLS data.
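A heavily simplified sketch of the dictionary-pair idea in the last paragraph is shown below in Python (scikit-learn NMF for the spectral bases, SciPy non-negative least squares for the abundances). The spectral response matrix `srf`, the number of endmembers, and the absence of sparsity and sum-to-one constraints are all simplifying assumptions of this illustration, not details of the thesis method:

    import numpy as np
    from sklearn.decomposition import NMF
    from scipy.optimize import nnls

    def fuse_lshs_hsls(lshs, hsls, srf, n_endmembers=8):
        """Fuse a low-spatial/high-spectral cube (lshs: h, w, B_h) with a high-spatial/low-spectral
        cube (hsls: H, W, B_m) using an assumed spectral response matrix srf of shape (B_m, B_h).
        Both cubes must be non-negative (e.g. reflectance)."""
        Bh = lshs.shape[-1]
        H, W, Bm = hsls.shape
        # 1) spectral bases (endmembers) learned from the hyperspectral pixels
        nmf = NMF(n_components=n_endmembers, init='nndsvda', max_iter=500)
        nmf.fit(lshs.reshape(-1, Bh))
        E_h = nmf.components_                      # (n_endmembers, B_h) hyperspectral dictionary
        E_m = E_h @ srf.T                          # (n_endmembers, B_m) multispectral counterpart
        # 2) per-pixel non-negative abundances estimated from the multispectral image
        A = np.stack([nnls(E_m.T, p)[0] for p in hsls.reshape(-1, Bm)])
        # 3) fused cube: high-spatial abundances combined with the hyperspectral bases
        return (A @ E_h).reshape(H, W, Bh)

The per-pixel nnls loop is slow for large scenes and is used here only to keep the sketch short.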
Use of high resolution satellite images for monitoring of earthquakes and volcano activity.
NASA Astrophysics Data System (ADS)
Arellano-Baeza, Alonso A.
Our studies have shown that the strain energy accumulation deep in the Earth's crust that precedes a strong earthquake can be detected by applying a lineament extraction technique to high-resolution multispectral satellite images. A lineament is a straight or somewhat curved feature in a satellite image that can be detected by special image processing based on directional filtering and/or the Hough transform. We analyzed tens of earthquakes of Richter magnitude ~4.5 that occurred along the Pacific coast of South America, using ASTER/TERRA multispectral satellite images to detect and analyze changes in the system of lineaments prior to a strong earthquake. All events were located in regions with small seasonal variations and limited vegetation, to facilitate tracking of features associated with seismic activity only. It was found that the number and orientation of lineaments changed significantly about one month before an earthquake, and that a few months later the system returned to its initial state. This effect increases with earthquake magnitude. It was also shown that the behavior of lineaments associated with volcanic seismic activity is opposite to that obtained previously for earthquakes. This discrepancy can be explained by assuming that for earthquakes the main cause is compression and accumulation of stress in the Earth's crust due to subduction of tectonic plates, whereas for volcanoes we deal with inflation of the volcanic edifice due to increased pressure and magma intrusion. The results obtained made it possible to include this research as part of the scientific program of the Chilean Remote Sensing Satellite mission, scheduled for launch in 2010.
NASA Astrophysics Data System (ADS)
Ren, Ruizhi; Gu, Lingjia; Fu, Haoyang; Sun, Chenglin
2017-04-01
An effective super-resolution (SR) algorithm is proposed for actual spectral remote sensing images based on sparse representation and wavelet preprocessing. The proposed SR algorithm mainly consists of dictionary training and image reconstruction. Wavelet preprocessing is used to establish four subbands, i.e., low frequency, horizontal, vertical, and diagonal high frequency, for an input image. As compared to the traditional approaches involving the direct training of image patches, the proposed approach focuses on the training of features derived from these four subbands. The proposed algorithm is verified using different spectral remote sensing images, e.g., moderate-resolution imaging spectroradiometer (MODIS) images with different bands, and the latest Chinese Jilin-1 satellite images with high spatial resolution. According to the visual experimental results obtained from the MODIS remote sensing data, the SR images using the proposed SR algorithm are superior to those using a conventional bicubic interpolation algorithm or traditional SR algorithms without preprocessing. Fusion algorithms, e.g., standard intensity-hue-saturation, principal component analysis, wavelet transform, and the proposed SR algorithms are utilized to merge the multispectral and panchromatic images acquired by the Jilin-1 satellite. The effectiveness of the proposed SR algorithm is assessed by parameters such as peak signal-to-noise ratio, structural similarity index, correlation coefficient, root-mean-square error, relative dimensionless global error in synthesis, relative average spectral error, spectral angle mapper, and the quality index Q4, and its performance is better than that of the standard image fusion algorithms.
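The wavelet preprocessing step described above, which splits an input image into a low-frequency subband and horizontal, vertical, and diagonal high-frequency subbands, can be reproduced with PyWavelets in a few lines; the choice of the 'haar' wavelet here is an arbitrary assumption for illustration:

    import numpy as np
    import pywt

    def wavelet_subbands(image, wavelet='haar'):
        """One-level 2-D DWT returning the four subbands used as training features."""
        ca, (ch, cv, cd) = pywt.dwt2(np.asarray(image, dtype=float), wavelet)
        return {'low': ca, 'horizontal': ch, 'vertical': cv, 'diagonal': cd}

Features extracted from these four subbands, rather than raw image patches, would then feed the dictionary training stage.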
NASA Astrophysics Data System (ADS)
Clark, M. L.
2016-12-01
The goal of this study was to assess multi-temporal, Hyperspectral Infrared Imager (HyspIRI) satellite imagery for improved forest class mapping relative to multispectral satellites. The study area was the western San Francisco Bay Area, California and forest alliances (e.g., forest communities defined by dominant or co-dominant trees) were defined using the U.S. National Vegetation Classification System. Simulated 30-m HyspIRI, Landsat 8 and Sentinel-2 imagery were processed from image data acquired by NASA's AVIRIS airborne sensor in year 2015, with summer and multi-temporal (spring, summer, fall) data analyzed separately. HyspIRI reflectance was used to generate a suite of hyperspectral metrics that targeted key spectral features related to chemical and structural properties. The Random Forests classifier was applied to the simulated images and overall accuracies (OA) were compared to those from real Landsat 8 images. For each image group, broad land cover (e.g., Needle-leaf Trees, Broad-leaf Trees, Annual agriculture, Herbaceous, Built-up) was classified first, followed by a finer-detail forest alliance classification for pixels mapped as closed-canopy forest. There were 5 needle-leaf tree alliances and 16 broad-leaf tree alliances, including 7 Quercus (oak) alliance types. No forest alliance classification exceeded 50% OA, indicating that there was broad spectral similarity among alliances, most of which were not spectrally pure but rather a mix of tree species. In general, needle-leaf (Pine, Redwood, Douglas Fir) alliances had better class accuracies than broad-leaf alliances (Oaks, Madrone, Bay Laurel, Buckeye, etc). Multi-temporal data classifications all had 5-6% greater OA than with comparable summer data. For simulated data, HyspIRI metrics had 4-5% greater OA than Landsat 8 and Sentinel-2 multispectral imagery and 3-4% greater OA than HyspIRI reflectance. Finally, HyspIRI metrics had 8% greater OA than real Landsat 8 imagery. In conclusion, forest alliance classification was found to be a difficult remote sensing application with moderate resolution (30 m) satellite imagery; however, of the data tested, HyspIRI spectral metrics had the best performance relative to multispectral satellites.
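The classification stage described above (a Random Forests classifier applied to per-pixel spectral metrics) follows a standard pattern; the sketch below uses scikit-learn with entirely synthetic data in place of the HyspIRI metrics and alliance labels, which are not reproduced here:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    X = rng.random((2000, 30))              # placeholder: per-pixel spectral metrics
    y = rng.integers(0, 5, size=2000)       # placeholder: forest alliance labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    clf.fit(X_tr, y_tr)
    print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))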
NASA Astrophysics Data System (ADS)
Zhou, Tingting; Gu, Lingjia; Ren, Ruizhi; Cao, Qiong
2016-09-01
With the rapid development of remote sensing technology, the spatial and temporal resolution of satellite imagery has increased greatly. Meanwhile, high-spatial-resolution images are becoming increasingly popular for commercial applications, and remote sensing imagery has broad application prospects in intelligent traffic. Compared with traditional traffic information collection methods, vehicle information extraction from high-resolution remote sensing images offers the advantages of high resolution and wide coverage, which is valuable for urban planning, transportation management, travel route choice, and so on. First, the acquired high-resolution multi-spectral and panchromatic remote sensing images were preprocessed. Then, on the one hand, histogram equalization and linear enhancement were applied to the preprocessed results to obtain the optimal threshold for image segmentation. On the other hand, considering the distribution characteristics of roads, the normalized difference vegetation index (NDVI) and normalized difference water index (NDWI) were used to suppress water and vegetation information in the preprocessed results. The two processing results were then combined, and geometric characteristics were finally used to complete the road information extraction. The extracted road vector was used to limit the candidate vehicle area. Target vehicle extraction was divided into bright-vehicle extraction and dark-vehicle extraction, and the results for the two kinds of vehicles were combined to obtain the final result. The experimental results demonstrated that the proposed algorithm extracts vehicle information with high precision from different high-resolution remote sensing images: the average false detection rate was about 5.36%, the average residual rate was about 13.60%, and the average accuracy was approximately 91.26%.
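The NDVI/NDWI suppression step described above reduces to two band ratios and a joint threshold. A minimal Python/NumPy sketch is shown below; the threshold values are hypothetical and would need tuning per scene:

    import numpy as np

    def non_vegetation_non_water_mask(nir, red, green, ndvi_thresh=0.3, ndwi_thresh=0.3, eps=1e-6):
        """True where a pixel is neither vegetation (high NDVI) nor water (high NDWI),
        leaving candidate road and vehicle areas for the later extraction steps."""
        nir, red, green = (np.asarray(b, dtype=float) for b in (nir, red, green))
        ndvi = (nir - red) / (nir + red + eps)
        ndwi = (green - nir) / (green + nir + eps)     # McFeeters-style water index
        return (ndvi < ndvi_thresh) & (ndwi < ndwi_thresh)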
NASA Technical Reports Server (NTRS)
Ripple, W. J.; Wang, S.; Isaacson, D. L.; Paine, D. P.
1991-01-01
Digital Landsat Thematic Mapper (TM) and SPOT high-resolution visible (HRV) images of coniferous forest canopies were compared in their relationship to forest wood volume using correlation and regression analyses. Significant inverse relationships were found between softwood volume and the spectral bands from both sensors (P less than 0.01). The highest correlations were between the log of softwood volume and the near-infrared bands.
Comparison of satellite reflectance algorithms for estimating ...
We analyzed 10 established and 4 new satellite reflectance algorithms for estimating chlorophyll-a (Chl-a) in a temperate reservoir in southwest Ohio using coincident hyperspectral aircraft imagery and dense water truth collected within one hour of image acquisition to develop simple proxies for algal blooms and to facilitate portability between multispectral satellite imagers for regional algal bloom monitoring. Narrow band hyperspectral aircraft images were upscaled spectrally and spatially to simulate 5 current and near future satellite imaging systems. Established and new Chl-a algorithms were then applied to the synthetic satellite images and then compared to calibrated Chl-a water truth measurements collected from 44 sites within one hour of aircraft acquisition of the imagery. Masks based on the spatial resolution of the synthetic satellite imagery were then applied to eliminate mixed pixels including vegetated shorelines. Medium-resolution Landsat and finer resolution data were evaluated against 29 coincident water truth sites. Coarse-resolution MODIS and MERIS-like data were evaluated against 9 coincident water truth sites. Each synthetic satellite data set was then evaluated for the performance of a variety of spectrally appropriate algorithms with regard to the estimation of Chl-a concentrations against the water truth data set. The goal is to inform water resource decisions on the appropriate satellite data acquisition and processing for the es
NASA Astrophysics Data System (ADS)
Navarro, Gabriel; Vicent, Jorge; Caballero, Isabel; Gómez-Enri, Jesús; Morris, Edward P.; Sabater, Neus; Macías, Diego; Bolado-Penagos, Marina; Gomiz, Juan Jesús; Bruno, Miguel; Caldeira, Rui; Vázquez, Águeda
2018-05-01
High Amplitude Internal Waves (HAIWs) are physical processes observed in the Strait of Gibraltar (the narrow channel between the Atlantic Ocean and the Mediterranean Sea). These internal waves are generated over the Camarinal Sill (western side of the strait) during the tidal outflow (toward the Atlantic Ocean) when critical hydraulic conditions are established. HAIWs remain over the sill for up to 4 h until the outflow slackens, and are then released (mostly) towards the Mediterranean Sea. These waves have previously been observed using Synthetic Aperture Radar (SAR), which captures variations in surface water roughness. However, in this work we use high resolution optical remote sensing, with the aim of examining the influence of HAIWs on biogeochemical processes. We used hyperspectral images from the Hyperspectral Imager for the Coastal Ocean (HICO) and high spatial resolution (10 m) images from the MultiSpectral Instrument (MSI) onboard the Sentinel-2A satellite. This work represents the first attempt to examine the relation between internal wave generation and the water constituents of the Camarinal Sill using hyperspectral and high spatial resolution remote sensing images. This enhanced spatial and spectral resolution revealed the detailed biogeochemical patterns associated with the internal waves and suggests local enhancements of productivity associated with internal wave trains.
Aircraft MSS data registration and vegetation classification of wetland change detection
Christensen, E.J.; Jensen, J.R.; Ramsey, Elijah W.; Mackey, H.E.
1988-01-01
Portions of the Savannah River floodplain swamp were evaluated for vegetation change using high resolution (5-6 m) aircraft multispectral scanner (MSS) data. Image distortion from aircraft movement prevented precise image-to-image registration in some areas. However, when small scenes were used (200-250 ha), a first-order linear transformation provided registration accuracies of less than or equal to one pixel. A larger area was registered using a piecewise linear method. Five major wetland classes were identified and evaluated for change. Phenological differences and the variable distribution of vegetation limited wetland type discrimination. Using unsupervised methods and ground-collected vegetation data, overall classification accuracies ranged from 84 per cent to 87 per cent for each scene. Results suggest that high-resolution aircraft MSS data can be precisely registered, if small areas are used, and that wetland vegetation change can be accurately detected and monitored.
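The first-order linear (affine) transformation used above for small-scene registration can be fitted by least squares from matched control points; the sketch below (Python/NumPy, with illustrative names) shows the idea:

    import numpy as np

    def fit_first_order_transform(src_xy, dst_xy):
        """Least-squares first-order (affine) transform mapping source image coordinates
        to reference image coordinates, from matched control points of shape (N, 2)."""
        src = np.asarray(src_xy, dtype=float)
        dst = np.asarray(dst_xy, dtype=float)
        A = np.hstack([src, np.ones((len(src), 1))])      # design matrix [x, y, 1]
        coef, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3, 2) affine coefficients
        return coef

    def apply_transform(coef, xy):
        xy = np.asarray(xy, dtype=float)
        return np.hstack([xy, np.ones((len(xy), 1))]) @ coef

A piecewise version, as used for the larger area, would simply fit one such transform per sub-region.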
Deliolanis, Nikolaos C; Ale, Angelique; Morscher, Stefan; Burton, Neal C; Schaefer, Karin; Radrich, Karin; Razansky, Daniel; Ntziachristos, Vasilis
2014-10-01
A primary enabling feature of near-infrared fluorescent proteins (FPs) and fluorescent probes is the ability to visualize deeper in tissues than in the visible range. The purpose of this work is to find the optimal visualization method that can exploit the advantages of this novel class of FPs in full-scale pre-clinical molecular imaging studies. Nude mice were stereotactically implanted with near-infrared FP-expressing glioma cells to form brain tumors. The feasibility and performance metrics of FPs were compared between planar epi-illumination and trans-illumination fluorescence imaging, as well as a hybrid Fluorescence Molecular Tomography (FMT) system combined with X-ray CT, and Multispectral Optoacoustic (or Photoacoustic) Tomography (MSOT). It is shown that deep-seated glioma brain tumors can be visualized with both fluorescence and optoacoustic imaging. Fluorescence imaging is straightforward and has good sensitivity; however, it lacks resolution. FMT-XCT can provide an improved, though still coarse, resolution of ∼1 mm in deep tissue, while MSOT achieves 0.1 mm resolution in deep tissue and has comparable sensitivity. We show imaging capacity that can shift the visualization paradigm in biological discovery. The results are relevant not only to reporter gene imaging but also stand as a cross-platform comparison for all methods imaging near-infrared fluorescent contrast agents.
NASA Astrophysics Data System (ADS)
Arellano-Baeza, A. A.; Garcia, R. V.; Trejo-Soto, M.; Molina-Sauceda, E.
Mexico is one of the most volcanically active regions in North America. Volcanic activity in central Mexico is associated with the subduction of the Cocos and Rivera plates beneath the North American plate. Periods of enhanced microseismic activity associated with the volcanic activity of the Colima and Popocatépetl volcanoes are compared to periods of low microseismic activity. We detected changes in the number and orientation of lineaments associated with the microseismic activity through lineament analysis of a temporal sequence of high resolution satellite images of both volcanoes. 15 m resolution multispectral images provided by the ASTER VNIR instrument were used. The Lineament Extraction and Stripes Statistic Analysis (LESSA) software package was employed for the lineament extraction.
Multi-spectral endogenous fluorescence imaging for bacterial differentiation
NASA Astrophysics Data System (ADS)
Chernomyrdin, Nikita V.; Babayants, Margarita V.; Korotkov, Oleg V.; Kudrin, Konstantin G.; Rimskaya, Elena N.; Shikunova, Irina A.; Kurlov, Vladimir N.; Cherkasova, Olga P.; Komandin, Gennady A.; Reshetov, Igor V.; Zaytsev, Kirill I.
2017-07-01
In this paper, multi-spectral endogenous fluorescence imaging was implemented for bacterial differentiation. Fluorescence imaging was performed using a digital camera equipped with a set of visible bandpass filters. Narrowband 365 nm ultraviolet radiation passed through a beam homogenizer was used to excite the sample fluorescence. In order to increase the signal-to-noise ratio and suppress the non-fluorescent background in the images, the intensity of the UV excitation was modulated using a mechanical chopper. Principal components were then used to differentiate the bacterial samples based on their multi-spectral endogenous fluorescence images.
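The principal-component step mentioned above amounts to treating each pixel's set of band-pass fluorescence intensities as a feature vector and projecting it onto a few components; a minimal scikit-learn sketch is given below (the stack layout and component count are assumptions):

    import numpy as np
    from sklearn.decomposition import PCA

    def pca_score_maps(multispectral_stack, n_components=3):
        """Per-pixel principal-component scores for a stack of band-pass fluorescence
        images with shape (n_bands, H, W); the score maps are what would be compared
        between bacterial samples."""
        n_bands, h, w = multispectral_stack.shape
        X = multispectral_stack.reshape(n_bands, -1).T      # pixels as samples, bands as features
        scores = PCA(n_components=n_components).fit_transform(X)
        return scores.reshape(h, w, n_components)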
NASA Astrophysics Data System (ADS)
Murray, R.; Neale, C.; Nagler, P. L.; Glenn, E. P.
2008-12-01
Heat-balance sap flow sensors provide direct estimates of water movement through plant stems and can be used to accurately measure leaf-level transpiration (EL) and stomatal conductance (GS) over time scales ranging from 20-minutes to a month or longer in natural stands of plants. However, their use is limited to relatively small branches on shrubs or trees, as the gauged stem section needs to be uniformly heated by the heating coil to produce valid measurements. This presents a scaling problem in applying the results to whole plants, stands of plants, and larger landscape areas. We used high-resolution aerial multispectral digital imaging with green, red and NIR bands as a bridge between ground measurements of EL and GS, and MODIS satellite imagery of a flood plain on the Lower Colorado River dominated by saltcedar (Tamarix ramosissima). Saltcedar is considered to be a high-water-use plant, and saltcedar removal programs have been proposed to salvage water. Hence, knowledge of actual saltcedar ET rates is needed on western U.S. rivers. Scaling EL and GS to large landscape units requires knowledge of leaf area index (LAI) over large areas. We used a LAI model developed for riparian habitats on Bosque del Apache, New Mexico, to estimate LAI at our study site on the Colorado River. We compared the model estimates to ground measurements of LAI, determined with a Li-Cor LAI-2000 Plant Canopy Analyzer calibrated by leaf harvesting to determine Specific Leaf Area (SLA) (m2 leaf area per g dry weight leaves) of the different species on the floodplain. LAI could be adequately predicted from NDVI from aerial multispectral imagery and could be cross-calibrated with MODIS NDVI and EVI. Hence, we were able to project point measurements of sap flow and LAI over multiple years and over large areas of floodplain using aerial multispectral imagery as a bridge between ground and satellite data. The methods are applicable to riparian corridors throughout the western U.S.
Gustafson, J. Olaf; Bell, James F.; Gaddis, Lisa R.R.; Hawke, B. Ray Ray; Giguere, Thomas A.
2012-01-01
We used a Lunar Reconnaissance Orbiter Camera (LROC) global monochrome Wide-angle Camera (WAC) mosaic to conduct a survey of the Moon to search for previously unidentified pyroclastic deposits. Promising locations were examined in detail using LROC multispectral WAC mosaics, high-resolution LROC Narrow Angle Camera (NAC) images, and Clementine multispectral (ultraviolet-visible or UVVIS) data. Out of 47 potential deposits chosen for closer examination, 12 were selected as probable newly identified pyroclastic deposits. Potential pyroclastic deposits were generally found in settings similar to previously identified deposits, including areas within or near mare deposits adjacent to highlands, within floor-fractured craters, and along fissures in mare deposits. However, a significant new finding is the discovery of localized pyroclastic deposits within floor-fractured craters Anderson E and F on the lunar farside, isolated from other known similar deposits. Our search confirms that most major regional and localized low-albedo pyroclastic deposits have been identified on the Moon down to ~100 m/pix resolution, and that additional newly identified deposits are likely to be either isolated small deposits or additional portions of discontinuous, patchy deposits.
Multispectral THz-VIS passive imaging system for hidden threats visualization
NASA Astrophysics Data System (ADS)
Kowalski, Marcin; Palka, Norbert; Szustakowski, Mieczyslaw
2013-10-01
Terahertz imaging is the latest entry into the crowded field of imaging technologies, and many applications are emerging for this relatively new technology. THz radiation penetrates deep into nonpolar and nonmetallic materials such as paper, plastic, clothes, wood, and ceramics that are usually opaque at optical wavelengths. T-rays have large potential in the field of hidden-object detection because they are not harmful to humans. The main difficulty in THz imaging systems is low image quality; it is therefore justified to combine THz images with high-resolution images from a visible camera. An imaging system is usually composed of various subsystems, and many imaging systems use imaging devices working in various spectral ranges. Our goal is to build a system, harmless to humans, for screening and detection of hidden objects using THz and VIS cameras.
Mars Exploration Rover Athena Panoramic Camera (Pancam) investigation
Bell, J.F.; Squyres, S. W.; Herkenhoff, K. E.; Maki, J.N.; Arneson, H.M.; Brown, D.; Collins, S.A.; Dingizian, A.; Elliot, S.T.; Hagerott, E.C.; Hayes, A.G.; Johnson, M.J.; Johnson, J. R.; Joseph, J.; Kinch, K.; Lemmon, M.T.; Morris, R.V.; Scherr, L.; Schwochert, M.; Shepard, M.K.; Smith, G.H.; Sohl-Dickstein, J. N.; Sullivan, R.J.; Sullivan, W.T.; Wadsworth, M.
2003-01-01
The Panoramic Camera (Pancam) investigation is part of the Athena science payload launched to Mars in 2003 on NASA's twin Mars Exploration Rover (MER) missions. The scientific goals of the Pancam investigation are to assess the high-resolution morphology, topography, and geologic context of each MER landing site, to obtain color images to constrain the mineralogic, photometric, and physical properties of surface materials, and to determine dust and aerosol opacity and physical properties from direct imaging of the Sun and sky. Pancam also provides mission support measurements for the rovers, including Sun-finding for rover navigation, hazard identification and digital terrain modeling to help guide long-term rover traverse decisions, high-resolution imaging to help guide the selection of in situ sampling targets, and acquisition of education and public outreach products. The Pancam optical, mechanical, and electronics design was optimized to achieve these science and mission support goals. Pancam is a multispectral, stereoscopic, panoramic imaging system consisting of two digital cameras mounted on a mast 1.5 m above the Martian surface. The mast allows Pancam to image the full 360° in azimuth and ±90° in elevation. Each Pancam camera utilizes a 1024 × 1024 active imaging area frame transfer CCD detector array. The Pancam optics have an effective focal length of 43 mm and a focal ratio of f/20, yielding an instantaneous field of view of 0.27 mrad/pixel and a field of view of 16° × 16°. Each rover's two Pancam "eyes" are separated by 30 cm and have a 1° toe-in to provide adequate stereo parallax. Each eye also includes a small eight-position filter wheel to allow surface mineralogic studies, multispectral sky imaging, and direct Sun imaging in the 400-1100 nm wavelength region. Pancam was designed and calibrated to operate within specifications on Mars at temperatures from -55° to +5°C. An onboard calibration target and fiducial marks provide the capability to validate the radiometric and geometric calibration on Mars. Copyright 2003 by the American Geophysical Union.
Mineral mapping and applications of imaging spectroscopy
Clark, R.N.; Boardman, J.; Mustard, J.; Kruse, F.; Ong, C.; Pieters, C.; Swayze, G.A.
2006-01-01
Spectroscopy is a tool that has been used for decades to identify, understand, and quantify solid, liquid, or gaseous materials, especially in the laboratory. In disciplines ranging from astronomy to chemistry, spectroscopic measurements are used to detect absorption and emission features due to specific chemical bonds, and detailed analyses are used to determine the abundance and physical state of the detected absorbing/emitting species. Spectroscopic measurements have a long history in the study of the Earth and planets. Up to the 1990s, remote spectroscopic measurements of the Earth and planets were dominated by multispectral imaging experiments that collected high-quality images in a few, usually broad, spectral bands, or by point spectrometers that obtained good spectral resolution but at only a few spatial positions. However, a new generation of sensors is now available that combines imaging with spectroscopy to create the new discipline of imaging spectroscopy. Imaging spectrometers acquire data with enough spectral range, resolution, and sampling at every pixel in a raster image so that individual absorption features can be identified and spatially mapped (Goetz et al., 1985).
NASA Astrophysics Data System (ADS)
Klaessens, John H. G. M.; Nelisse, Martin; Verdaasdonk, Rudolf M.; Noordmans, Herke Jan
2013-03-01
Clinical interventions can cause changes in tissue perfusion, oxygenation, or temperature. Real-time imaging of these phenomena could be useful for surgical strategy or for understanding physiological regulation mechanisms. Two noncontact imaging techniques were applied for imaging of large tissue areas: LED-based multispectral imaging (MSI, 17 different wavelengths, 370 nm-880 nm) and thermal imaging (7.5 to 13.5 μm). Oxygenation concentration changes were calculated using different analysis methods, and the advantages of these methods are presented for stationary and dynamic applications. Concentration calculations of chromophores in tissue require the right choice of wavelengths. The effects of different wavelength choices on hemoglobin concentration calculations were studied under laboratory conditions and subsequently applied in clinical studies. Corrections for interferences during the clinical registrations (ambient light fluctuations, tissue movements) were performed. The wavelength dependency of the algorithms was studied, and the wavelength sets with the best results are presented. The multispectral and thermal imaging systems were applied during clinical intervention studies: reperfusion of tissue flap transplantation (ENT), assessment of the effectiveness of local anesthetic blocks, and open brain surgery in patients with epileptic seizures. The LED multispectral imaging system successfully imaged the perfusion and oxygenation changes during clinical interventions. The thermal images show local heat distributions over tissue areas as a result of changes in tissue perfusion. Multispectral imaging and thermal imaging provide complementary information and are promising techniques for real-time diagnostics of physiological processes in medicine.
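Since the abstract hinges on computing chromophore concentration changes from absorbance at several wavelengths, a minimal sketch of the usual modified Beer-Lambert least-squares step is given below. The wavelengths, extinction coefficients, and absorbance values are illustrative placeholders, not the calibration used by the authors.

```python
import numpy as np

# Hypothetical example: solve the modified Beer-Lambert system
#   dA(lambda) = eps_HbO2(lambda) * dC_HbO2 + eps_Hb(lambda) * dC_Hb
# for concentration changes, given absorbance changes at a few wavelengths.
# The extinction coefficients below are illustrative placeholders only.
wavelengths_nm = np.array([660.0, 735.0, 810.0, 880.0])
eps = np.array([              # columns: [HbO2, Hb], arbitrary relative units
    [0.32, 3.20],
    [0.40, 1.10],
    [0.86, 0.80],
    [1.10, 0.70],
])

delta_A = np.array([0.012, 0.008, 0.009, 0.010])   # measured absorbance changes (example values)

# Least-squares estimate of concentration changes (vectorised per pixel in practice)
delta_C, *_ = np.linalg.lstsq(eps, delta_A, rcond=None)
print(dict(zip(["dHbO2", "dHb"], delta_C)))
```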
Survey of the Pompeii (IT) archaeological Regions with the multispectral thermal airborne TASI data
NASA Astrophysics Data System (ADS)
Pignatti, Stefano; Palombo, Angelo; Pascucci, Simone; Santini, Federico; Laneve, Giovanni
2017-04-01
Thermal remote sensing, as a tool for analyzing environmental variables with regard to archaeological prospecting, has been growing, mainly because airborne surveys can provide archaeologists with images at the meter scale. The importance of this study lies in the evaluation of TIR imagery in view of the use of unmanned aerial vehicle (UAV) imagery for the conservation of cultural heritage, which should provide very high spatial resolution thermal imaging at low cost. The research aims at analyzing the potential of thermal imaging [1] on selected areas of the Pompeii archaeological park. To this purpose, on 7 December 2015 the TASI-600 [2], an airborne multispectral thermal imager (32 channels from 8 to 11.5 μm with a spectral resolution of 100 nm and a spatial resolution of 1 m/pixel), surveyed the archaeological Pompeii Regions. The thermal images were corrected and calibrated to obtain land surface temperature (LST) and emissivity data sets for further analysis. The thermal data pre-processing included: i) radiometric calibration of the raw data and correction of blinking pixels; ii) atmospheric correction using MODTRAN; and iii) Temperature Emissivity Separation (TES) to obtain emissivity and LST maps [3]. Our objective is to show the major results of the IR survey and the pre-processing of the multispectral thermal imagery. LST and emissivity maps have been analysed to describe the thermal/emissivity pattern of the different Regions as a function of the presence of archaeological features in the shallow subsurface. The preliminary results are encouraging, even though the vegetation cover over the different Pompeii Regions is one of the major issues affecting the usefulness of TIR sensing. LST anomalies and emissivity maps still need to be integrated with classical geophysical investigation techniques for a complete validation and to better evaluate the usefulness of IR sensing. References: [1] Pascucci S., Cavalli R.M., Palombo A. & Pignatti S. (2010), Suitability of CASI and ATM airborne remote sensing data for archaeological subsurface structure detection under different land cover: the Arpi case study (Italy), Journal of Geophysics and Engineering, Vol. 7 (2), pp. 183-189. [2] Pignatti S., Lapenna V., Palombo A., Pascucci S., Pergola N. & Cuomo V. (2011), An advanced tool of the CNR IMAA EO facilities: Overview of the TASI-600 hyperspectral thermal spectrometer, 3rd Hyperspectral Image and Signal Processing: Evolution in Remote Sensing Conference (WHISPERS), 2011, DOI 10.1109/WHISPERS.2011.6080890. [3] Li Z.L., Becker F., Stoll M.P. & Wan Z. (1999), Evaluation of six methods for extracting relative emissivity spectra from thermal infrared images, Remote Sensing of Environment, Vol. 69, pp. 197-214.
The Multispectral Imaging Science Working Group. Volume 3: Appendices
NASA Technical Reports Server (NTRS)
Cox, S. C. (Editor)
1982-01-01
The status and technology requirements for using multispectral sensor imagery in geographic, hydrologic, and geologic applications are examined. Critical issues in image and information science are identified.
Sandison, David R.; Platzbecker, Mark R.; Descour, Michael R.; Armour, David L.; Craig, Marcus J.; Richards-Kortum, Rebecca
1999-01-01
A multispectral imaging probe delivers a range of wavelengths of excitation light to a target and collects a range of expressed light wavelengths. The multispectral imaging probe is adapted for mobile use and use in confined spaces, and is sealed against the effects of hostile environments. The multispectral imaging probe comprises a housing that defines a sealed volume that is substantially sealed from the surrounding environment. A beam splitting device mounts within the sealed volume. Excitation light is directed to the beam splitting device, which directs the excitation light to a target. Expressed light from the target reaches the beam splitting device along a path coaxial with the path traveled by the excitation light from the beam splitting device to the target. The beam splitting device directs expressed light to a collection subsystem for delivery to a detector.
Bhateja, Vikrant; Moin, Aisha; Srivastava, Anuja; Bao, Le Nguyen; Lay-Ekuakille, Aimé; Le, Dac-Nhuong
2016-07-01
Computer-based diagnosis of Alzheimer's disease can be performed through analysis of the functional and structural changes in the brain. Multispectral image fusion combines the complementary information from the source images while discarding redundant information, producing a single image that contains both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer's disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT, followed by dimensionality reduction using a modified Principal Component Analysis algorithm on the low-frequency coefficients. The high-frequency coefficients are then enhanced using a non-linear enhancement function. Two different fusion rules are applied to the low-pass and high-pass sub-bands: phase congruency is applied to the low-frequency coefficients, and a combination of directive contrast and normalized Shannon entropy is applied to the high-frequency coefficients. The superiority of the fusion response is demonstrated by comparisons with other state-of-the-art fusion approaches in terms of various fusion metrics.
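The NSCT pipeline above is involved; the sketch below illustrates only the general multiresolution fusion idea with a plain discrete wavelet transform (PyWavelets) and deliberately simplified rules (averaging the low-pass band, max-absolute selection for the high-pass bands). It is a stand-in for, not an implementation of, the phase-congruency and directive-contrast rules described in the paper.

```python
import numpy as np
import pywt

def fuse_2d(img_a, img_b, wavelet="db2", level=2):
    """Fuse two grayscale images with a simple wavelet rule:
    average the approximation band, keep the larger-magnitude detail coefficients."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                        # approximation band: average
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)

# Toy usage with two random single-channel slices standing in for co-registered modalities
a = np.random.rand(128, 128)
b = np.random.rand(128, 128)
print(fuse_2d(a, b).shape)
```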
Multispectral image fusion for illumination-invariant palmprint recognition.
Lu, Longbin; Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng
2017-01-01
Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is completed based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is to construct the fusion coefficients at the decomposition level, making the images be separated correctly in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfying recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For the testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% when the lighting condition is unsatisfied.
NASA Astrophysics Data System (ADS)
Krauß, T.
2014-11-01
The focal plane assembly of most pushbroom scanner satellites is built up in such a way that the different multispectral bands, or the multispectral and panchromatic bands, are not all acquired at exactly the same time. This is due to offsets of a few millimeters between the CCD lines in the focal plane. Exploiting this special configuration allows the detection of objects moving during this small time span. In this paper we present a method for the automatic detection and extraction of moving objects - mainly traffic - from single very high resolution optical satellite images of different sensors. The sensors investigated are WorldView-2, RapidEye, Pléiades, and also the new SkyBox satellites. Different sensors require different approaches for detecting moving objects. Since moving objects are mapped to different positions only in different spectral bands, the change of their spectral properties also has to be taken into account. For cases where the main offset in the focal plane lies between the multispectral and the panchromatic CCD lines, as for Pléiades, an approach based on weighted integration to obtain nearly identical images is investigated. Other approaches for RapidEye and WorldView-2 are also shown. From these intermediate bands, difference images are calculated, and a method for detecting the moving objects from these difference images is proposed. Based on the presented methods, images from different sensors are processed and the results are assessed for detection quality (how many moving objects are detected and how many are missed) and accuracy (how accurately the speed and size of the objects are derived). Finally, the results are discussed and an outlook on possible improvements towards operational processing is presented.
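A heavily simplified sketch of the core idea (moving objects appear displaced between spectral bands acquired a fraction of a second apart) is given below. It assumes two already co-registered band images, a known acquisition time offset, and a known ground sampling distance; the paper's band weighting, sensor-specific handling, and detection assessment are not reproduced.

```python
import numpy as np

def estimate_speed(band_a, band_b, dt_s, gsd_m, diff_threshold=30.0):
    """Crude moving-object cue from two co-registered band images acquired dt_s
    seconds apart: threshold the difference image, then convert the centroid
    displacement between 'appearing' and 'disappearing' blobs into a speed."""
    diff = band_b.astype(float) - band_a.astype(float)
    appear = diff > diff_threshold       # object position at the later acquisition
    vanish = diff < -diff_threshold      # object position at the earlier acquisition
    if appear.sum() == 0 or vanish.sum() == 0:
        return None
    c_late = np.array(np.nonzero(appear)).mean(axis=1)
    c_early = np.array(np.nonzero(vanish)).mean(axis=1)
    displacement_px = np.linalg.norm(c_late - c_early)
    return displacement_px * gsd_m / dt_s   # metres per second

# Toy example: a bright "vehicle" shifted by 4 pixels between bands 0.2 s apart at 2 m GSD
a = np.zeros((100, 100)); a[50:53, 40:44] = 200.0
b = np.zeros((100, 100)); b[50:53, 44:48] = 200.0
print(estimate_speed(a, b, dt_s=0.2, gsd_m=2.0))   # ~40 m/s
```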
Galileo SSI lunar observations: Copernican craters and soils
NASA Technical Reports Server (NTRS)
Mcewen, A. S.; Greeley, R.; Head, James W.; Pieters, C. M.; Fischer, E. M.; Johnson, T. V.; Neukum, G.
1993-01-01
The Galileo spacecraft completed its first Earth-Moon flyby (EMI) in December 1990 and its second flyby (EM2) in December 1992. Copernican-age craters are among the most prominent features seen in the SSI (Solid-State Imaging) multispectral images of the Moon. The interiors, rays, and continuous ejecta deposits of these youngest craters stand out as the brightest features in images of albedo and visible/1-micron color ratios (except where impact melts are abundant). Crater colors and albedos (away from impact melts) are correlated with their geologic emplacement ages as determined from counts of superposed craters; these age-color relations can be used to estimate the emplacement age (time since impact event) for many Copernican-age craters on the near and far sides of the Moon. The spectral reflectivities of lunar soils are controlled primarily by (1) soil maturity, resulting from the soil's cumulative age of exposure to the space environment; (2) steady-state horizontal and vertical mixing of fresh crystalline materials ; and (3) the mineralogy of the underlying bedrock or megaregolith. Improved understanding of items (1) and (2) above will improve our ability to interpret item (3), especially for the use of crater compositions as probes of crustal stratigraphy. We have examined the multispectral and superposed crater frequencies of large isolated craters, mostly of Eratosthenian and Copernican ages, to avoid complications due to (1) secondaries (as they affect superposed crater counts) and (2) spatially and temporally nonuniform regolith mixing from younger, large, and nearby impacts. Crater counts are available for 11 mare craters and 9 highlands craters within the region of the Moon imaged during EM1. The EM2 coverage provides multispectral data for 10 additional craters with superposed crater counts. Also, the EM2 data provide improved spatial resolution and signal-to-noise ratios over the western nearside.
Image Classification Workflow Using Machine Learning Methods
NASA Astrophysics Data System (ADS)
Christoffersen, M. S.; Roser, M.; Valadez-Vergara, R.; Fernández-Vega, J. A.; Pierce, S. A.; Arora, R.
2016-12-01
Recent increases in the availability and quality of remote sensing datasets have fueled an increasing number of scientifically significant discoveries based on land use classification and land use change analysis. However, much of the software made to work with remote sensing data products, specifically multispectral images, is commercial and often prohibitively expensive. The free-to-use solutions that are currently available come bundled as small parts of much larger programs that are susceptible to bugs and difficult to install and configure. What is needed is a compact, easy-to-use set of tools to perform land use analysis on multispectral images. To address this need, we have developed software using the Python programming language with the sole function of land use classification and land use change analysis. We chose Python because it is relatively readable, has a large body of relevant third-party libraries such as GDAL and Spectral Python, and is free to install and use on Windows, Linux, and Macintosh operating systems. To test our classification software, we performed a K-means unsupervised classification, a Gaussian Maximum Likelihood supervised classification, and a Mahalanobis Distance based supervised classification. The images used for testing were three Landsat rasters of Austin, Texas, with a spatial resolution of 60 meters for 1984 and 1999, and 30 meters for 2015. The testing dataset was easily downloaded using the Earth Explorer application produced by the USGS. The software should be able to perform classification on any set of multispectral rasters with little to no modification. Our software offers the ease of land use classification found in commercial packages without the expensive license.
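A minimal sketch of the kind of unsupervised step described above, using GDAL and scikit-learn, is shown below; the raster path and the number of classes are placeholders, and this is not the authors' released tool.

```python
import numpy as np
from osgeo import gdal
from sklearn.cluster import KMeans

def kmeans_classify(raster_path, n_classes=6, random_state=0):
    """Unsupervised land-cover classification of a multiband raster.
    Returns a 2-D array of class labels with the same rows/cols as the input."""
    ds = gdal.Open(raster_path)
    bands = ds.ReadAsArray().astype(np.float32)     # assumes a multiband file: (n_bands, rows, cols)
    n_bands, rows, cols = bands.shape
    samples = bands.reshape(n_bands, -1).T          # one row per pixel
    labels = KMeans(n_clusters=n_classes, random_state=random_state).fit_predict(samples)
    return labels.reshape(rows, cols)

# classes = kmeans_classify("landsat_austin_2015.tif")   # path is a placeholder
```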
Multispectral imaging method and apparatus
Sandison, David R.; Platzbecker, Mark R.; Vargo, Timothy D.; Lockhart, Randal R.; Descour, Michael R.; Richards-Kortum, Rebecca
1999-01-01
A multispectral imaging method and apparatus are described that are adapted for use in determining material properties, especially properties characteristic of abnormal non-dermal cells. A target is illuminated with a narrow band light beam. The target expresses light in response to the excitation. The expressed light is collected and the target's response at specific response wavelengths to specific excitation wavelengths is measured. From the measured multispectral response the target's properties can be determined. A sealed, remote probe and robust components can be used for cervical imaging.
Experimental Demonstration of Adaptive Infrared Multispectral Imaging using Plasmonic Filter Array.
Jang, Woo-Yong; Ku, Zahyun; Jeon, Jiyeon; Kim, Jun Oh; Lee, Sang Jun; Park, James; Noyola, Michael J; Urbas, Augustine
2016-10-10
In our previous theoretical study, we performed target detection using a plasmonic sensor array incorporating the data-processing technique termed "algorithmic spectrometry". We achieved the reconstruction of a target spectrum by extracting intensity at multiple wavelengths with high resolution from the image data obtained from the plasmonic array. The ultimate goal is to develop a full-scale focal plane array with a plasmonic opto-coupler in order to move towards the next generation of versatile infrared cameras. To this end, and as an intermediate step, this paper reports the experimental demonstration of adaptive multispectral imaging using fabricated plasmonic spectral filter arrays and proposed target detection scenarios. Each plasmonic filter was designed using periodic circular holes perforated through a gold layer, and an enhanced target detection strategy was proposed to refine the original spectrometry concept for spatial and spectral computation of the data measured from the plasmonic array. Both the spectrum of blackbody radiation and images of a metal ring object at multiple wavelengths were successfully reconstructed using the weighted superposition of plasmonic output images specified in the proposed detection strategy. In addition, the plasmonic filter arrays were theoretically tested on a target at extremely high temperature as a challenging scenario for the detection scheme.
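To illustrate the general idea of reconstructing a spectrum as a weighted superposition of filter-array outputs, the sketch below solves a regularized least-squares problem over simulated Gaussian filter responses. The filter shapes, wavelength range, and regularization are assumptions for illustration, not the fabricated plasmonic responses or the detection strategy of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(3.0, 5.0, 200)                       # wavelength grid in micrometres (illustrative)

# Simulated transmission curves of 8 filters (Gaussian stand-ins for plasmonic responses)
centers = np.linspace(3.2, 4.8, 8)
R = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / 0.15) ** 2)   # shape: (filters, wavelengths)

true_spectrum = np.exp(-0.5 * ((wl - 4.1) / 0.1) ** 2)              # unknown target spectrum
y = R @ true_spectrum + 0.01 * rng.standard_normal(len(centers))    # filter-array measurements

# Spectrum estimate as a weighted superposition of the filter responses
# (ridge-regularised least squares in the measurement domain)
lam = 1e-2
weights = np.linalg.solve(R @ R.T + lam * np.eye(len(centers)), y)
s_hat = R.T @ weights
print(float(wl[np.argmax(s_hat)]))   # should land close to the 4.1 um peak
```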
Ma, Dinglong; Bec, Julien; Yankelevich, Diego R.; Gorpas, Dimitris; Fatakdawala, Hussain; Marcu, Laura
2014-01-01
We report the development and validation of a hybrid intravascular diagnostic system combining multispectral fluorescence lifetime imaging (FLIm) and intravascular ultrasound (IVUS) for cardiovascular imaging applications. A prototype FLIm system based on a fluorescence pulse sampling technique, providing information on artery biochemical composition, was integrated with a commercial IVUS system providing information on artery morphology. A customized 3-Fr bimodal catheter combining a rotational side-view fiberoptic and a 40-MHz IVUS transducer was constructed for sequential helical scanning (rotation and pullback) of tubular structures. Validation of this bimodal approach was conducted in pig heart coronary arteries. Spatial resolution, fluorescence detection efficiency, pulse broadening effect, and lifetime measurement variability of the FLIm system were systematically evaluated. Current results show that this system is capable of temporally resolving the fluorescence emission simultaneously in multiple spectral channels in a single pullback sequence. Accurate measurements of fluorescence decay characteristics from arterial segments can be obtained rapidly (e.g., 20 mm in 5 s), and accurate co-registration of fluorescence and ultrasound features can be achieved. The current findings demonstrate the compatibility of FLIm instrumentation with in vivo clinical investigations and its potential to complement conventional IVUS during catheterization procedures. PMID:24898604
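As a minimal illustration of extracting a fluorescence lifetime from a sampled decay (the quantity the FLIm channel measures), the sketch below fits a single-exponential model to synthetic data with SciPy; the instrument's pulse-sampling deconvolution and multi-channel processing are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, a, tau, c):
    # Single-exponential decay with amplitude a, lifetime tau, and offset c
    return a * np.exp(-t / tau) + c

# Synthetic sampled fluorescence decay (ns): amplitude 1.0, lifetime 4 ns, small offset plus noise
rng = np.random.default_rng(1)
t = np.linspace(0.0, 25.0, 250)
decay = mono_exp(t, 1.0, 4.0, 0.02) + 0.01 * rng.standard_normal(t.size)

(p_a, p_tau, p_c), _ = curve_fit(mono_exp, t, decay, p0=(1.0, 2.0, 0.0))
print(f"estimated lifetime ~ {p_tau:.2f} ns")
```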
Determining fast orientation changes of multi-spectral line cameras from the primary images
NASA Astrophysics Data System (ADS)
Wohlfeil, Jürgen
2012-01-01
Fast orientation changes of airborne and spaceborne line cameras cannot always be avoided. In such cases it is essential to measure them with high accuracy to ensure a good quality of the resulting imagery products. Several approaches exist to support the orientation measurement by using optical information received through the main objective/telescope. In this article an approach is proposed that allows the determination of non-systematic orientation changes between every captured line. It requires no additional camera hardware or onboard processing capabilities, only the payload images and a rough estimate of the camera's trajectory. The approach takes advantage of the typical geometry of multi-spectral line cameras, which have a set of linear sensor arrays for different spectral bands on the focal plane. First, homologous points are detected within the heavily distorted images of the different spectral bands. With their help, a connected network of geometrical correspondences is built up. This network is used to calculate the camera's orientation changes at its full temporal and angular resolution. The approach was tested with an extensive set of aerial surveys covering a wide range of conditions and achieved precise and reliable results.
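A minimal sketch of the first step, detecting homologous points between two spectral-band images, is given below using OpenCV ORB features and brute-force matching; building the full correspondence network and solving for per-line orientation changes, as done in the paper, is not shown.

```python
import cv2
import numpy as np

def homologous_points(band_a, band_b, max_matches=200):
    """Detect and match feature points between two 8-bit grayscale band images."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(band_a, None)
    kp_b, des_b = orb.detectAndCompute(band_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:max_matches]
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    return pts_a, pts_b   # the along-track offsets between pts_a and pts_b carry the attitude signal

# Usage (file names are placeholders):
# a = cv2.imread("band_red.png", cv2.IMREAD_GRAYSCALE)
# b = cv2.imread("band_nir.png", cv2.IMREAD_GRAYSCALE)
# pa, pb = homologous_points(a, b)
```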
Hyperspectral remote sensing of wild oyster reefs
NASA Astrophysics Data System (ADS)
Le Bris, Anthony; Rosa, Philippe; Lerouxel, Astrid; Cognie, Bruno; Gernez, Pierre; Launeau, Patrick; Robin, Marc; Barillé, Laurent
2016-04-01
The invasion of the wild oyster Crassostrea gigas along the western European Atlantic coast has generated changes in the structure and functioning of intertidal ecosystems. Considered as an invasive species and a trophic competitor of the cultivated conspecific oyster, it is now seen as a resource by oyster farmers following recurrent mass summer mortalities of oyster spat since 2008. Spatial distribution maps of wild oyster reefs are required by local authorities to help define management strategies. In this work, visible-near infrared (VNIR) hyperspectral and multispectral remote sensing was investigated to map two contrasted intertidal reef structures: clusters of vertical oysters building three-dimensional dense reefs in muddy areas and oysters growing horizontally creating large flat reefs in rocky areas. A spectral library, collected in situ for various conditions with an ASD spectroradiometer, was used to run Spectral Angle Mapper classifications on airborne data obtained with an HySpex sensor (160 spectral bands) and SPOT satellite HRG multispectral data (3 spectral bands). With HySpex spectral/spatial resolution, horizontal oysters in the rocky area were correctly classified but the detection was less efficient for vertical oysters in muddy areas. Poor results were obtained with the multispectral image and from spatially or spectrally degraded HySpex data, it was clear that the spectral resolution was more important than the spatial resolution. In fact, there was a systematic mud deposition on shells of vertical oyster reefs explaining the misclassification of 30% of pixels recognized as mud or microphytobenthos. Spatial distribution maps of oyster reefs were coupled with in situ biomass measurements to illustrate the interest of a remote sensing product to provide stock estimations of wild oyster reefs to be exploited by oyster producers. This work highlights the interest of developing remote sensing techniques for aquaculture applications in coastal areas.
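The Spectral Angle Mapper rule used for these classifications is compact enough to sketch directly in NumPy; the endmember spectra and any angle thresholds from the study are not reproduced here.

```python
import numpy as np

def spectral_angle_mapper(image, endmembers):
    """image: (rows, cols, bands); endmembers: (n_classes, bands).
    Returns the per-pixel class index with the smallest spectral angle, and that angle."""
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    pix_norm = np.linalg.norm(pixels, axis=1, keepdims=True)
    end_norm = np.linalg.norm(endmembers, axis=1, keepdims=True)
    cosang = np.clip((pixels @ endmembers.T) / (pix_norm * end_norm.T + 1e-12), -1.0, 1.0)
    angles = np.arccos(cosang)                        # shape: (n_pixels, n_classes)
    labels = angles.argmin(axis=1).reshape(image.shape[:2])
    min_angle = angles.min(axis=1).reshape(image.shape[:2])
    return labels, min_angle

# Toy usage: three illustrative endmembers over a random 160-band cube
cube = np.random.rand(50, 50, 160)
ends = np.random.rand(3, 160)
labels, angle = spectral_angle_mapper(cube, ends)
```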
Retrieval of cloud cover parameters from multispectral satellite images
NASA Technical Reports Server (NTRS)
Arking, A.; Childs, J. D.
1985-01-01
A technique is described for extracting cloud cover parameters from multispectral satellite radiometric measurements. Utilizing three channels from the AVHRR (Advanced Very High Resolution Radiometer) on NOAA polar orbiting satellites, it is shown that one can retrieve four parameters for each pixel: cloud fraction within the FOV, optical thickness, cloud-top temperature and a microphysical model parameter. The last parameter is an index representing the properties of the cloud particle and is determined primarily by the radiance at 3.7 microns. The other three parameters are extracted from the visible and 11 micron infrared radiances, utilizing the information contained in the two-dimensional scatter plot of the measured radiances. The solution is essentially one in which the distributions of optical thickness and cloud-top temperature are maximally clustered for each region, with cloud fraction for each pixel adjusted to achieve maximal clustering.
Global Visualization (GloVis) Viewer
2005-01-01
GloVis (http://glovis.usgs.gov) is a browse image-based search and order tool that can be used to quickly review the land remote sensing data inventories held at the U.S. Geological Survey (USGS) Center for Earth Resources Observation and Science (EROS). GloVis was funded by the AmericaView project to reduce the difficulty of identifying and acquiring data for user-defined study areas. Updated daily with the most recent satellite acquisitions, GloVis displays data in a mosaic, allowing users to select any area of interest worldwide and immediately view all available browse images for the following Landsat data sets: Multispectral Scanner (MSS), Multi-Resolution Land Characteristics (MRLC), Orthorectified, Thematic Mapper (TM), Enhanced Thematic Mapper Plus (ETM+), and ETM+ Scan Line Corrector-off (SLC-off). Other data sets include Terra Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Moderate Resolution Imaging Spectroradiometer (MODIS), Aqua MODIS, and the Earth Observing-1 (EO-1) Advanced Land Imager (ALI) and Hyperion data.
NASA Astrophysics Data System (ADS)
Tatar, N.; Saadatseresht, M.; Arefi, H.
2017-09-01
Semi Global Matching (SGM) is known as a high-performance and reliable stereo matching algorithm in the photogrammetry community. However, there are some challenges in using this algorithm, especially for high resolution satellite stereo images over urban areas and images with shadowed regions. Unfortunately, the SGM algorithm computes highly noisy disparity values in shadow areas around tall buildings, due to mismatching in these low-entropy areas. In this paper, a new method is developed to refine the disparity map in shadow areas. The method is based on integrating panchromatic and multispectral image data to detect shadow areas at the object level. In addition, RANSAC plane fitting and morphological filtering are employed to refine the disparity map. The results on a GeoEye-1 stereo pair captured over Qom city in Iran show a significant increase in the rate of matched pixels compared to the standard SGM algorithm.
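A simplified sketch of the shadow-masking step is shown below: pixels dark in both the panchromatic intensity and an NIR band are flagged and then cleaned with morphological filtering. The thresholds are illustrative, and the RANSAC plane fitting used to refine the disparities is not included.

```python
import numpy as np
from scipy import ndimage

def shadow_mask(pan, nir, pan_thresh=0.25, nir_thresh=0.20, min_pixels=50):
    """Rough shadow detection: dark in the panchromatic band AND dark in NIR.
    Inputs are assumed to be radiometrically normalised to [0, 1]."""
    mask = (pan < pan_thresh) & (nir < nir_thresh)
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))   # remove isolated pixels
    mask = ndimage.binary_closing(mask, structure=np.ones((5, 5)))   # fill small holes
    # Object-level filtering: drop tiny connected components
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.flatnonzero(sizes >= min_pixels) + 1)
    return keep

pan = np.random.rand(200, 200)
nir = np.random.rand(200, 200)
print(shadow_mask(pan, nir).sum())
```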
An orbiting multispectral scanner for overland and oceanographic applications.
NASA Technical Reports Server (NTRS)
Peacock, K.; Withrington, R. J.
1971-01-01
Description of the major features of a multispectral scanner designed to perform overland and oceanographic surveys from space. The instrument uses an image plane conical scanner and contains independent spectrometers for land and ocean applications. The overland spectrometer has a spatial resolution of 200 ft and has six spectral bands in the atmospheric windows between 0.5 and 2.4 microns. The oceanographic spectrometer has a spatial resolution of 1200 ft and possesses 24 spectral bands equally spaced and in registration over the wavelength range from 0.4 to 0.8 micron. A thermal band of 600-ft resolution is used with a spectral range from 10.5 to 12.6 microns. The swath width of the scan is 100 nautical miles from an altitude of 500 nautical miles. The system has two modes of operation which are selectable by ground command. The six bands of overland data plus the thermal band data can be transmitted, or the 24 bands of oceanographic data plus data from two of the overland bands and the thermal band can be transmitted. The performance is described by the minimum detectable reflectance difference and the effects of sun angle and target reflectivity variations are discussed. The sensitivity is related to the variation of the ocean reflectivity in the presence of chlorophyll and to typical agricultural targets.
Multispectral mapping of the lunar surface using groundbased telescopes
NASA Technical Reports Server (NTRS)
Mccord, T. B.; Pieters, C.; Feirberg, M. A.
1976-01-01
Images of the lunar surface were obtained at several wavelengths using a silicon vidicon imaging system and groundbased telescopes. These images were recorded and processed in digital form so that quantitative information is preserved. The photometric precision of the images is shown to be better than 1 percent. Ratio images calculated by dividing images obtained at two wavelengths (0.40/0.56 micrometer) and 0.95/0.56 micrometer are presented for about 50 percent of the lunar frontside. Spatial resolution is about 2 km at the sub-earth point. A complex of distinct units is evident in the images. Earlier work with the reflectance spectrum of lunar materials indicates that for the most part these units are compositionally distinct. Digital images of this precision are extremely useful to lunar geologists in disentangling the history of the lunar surface.
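The ratio-image computation itself is straightforward; a minimal version with a percentile stretch for display is sketched below (the band arrays and stretch limits are illustrative).

```python
import numpy as np

def ratio_image(band_num, band_den, eps=1e-6):
    """Band-ratio image, e.g. 0.40/0.56 um or 0.95/0.56 um mosaics."""
    r = band_num.astype(float) / (band_den.astype(float) + eps)
    # Simple 2nd-98th percentile stretch for display
    lo, hi = np.percentile(r, (2, 98))
    return np.clip((r - lo) / (hi - lo + eps), 0.0, 1.0)

uv = np.random.rand(512, 512)
vis = np.random.rand(512, 512) + 0.1
display = ratio_image(uv, vis)
```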
2014-08-15
Orthoimages are used to produce image-map products for navigation and planning, and serve as source data for advanced ... The resulting mosaic covers a wider area and contains fewer visible seams, which makes the map easier to understand. RPCs replace the actual sensor model while ...
Multispectral photoacoustic microscopy of lipids using a pulsed supercontinuum laser.
Buma, Takashi; Conley, Nicole C; Choi, Sang Won
2018-01-01
We demonstrate optical resolution photoacoustic microscopy (OR-PAM) of lipid-rich tissue between 1050 and 1714 nm using a pulsed supercontinuum laser based on a large-mode-area photonic crystal fiber. OR-PAM experiments on lipid-rich samples show the expected optical absorption peaks near 1210 and 1720 nm. These results show that pulsed supercontinuum lasers are promising for OR-PAM applications such as label-free histology of lipid-rich tissue and imaging small animal models of disease.
Intelligent multi-spectral IR image segmentation
NASA Astrophysics Data System (ADS)
Lu, Thomas; Luong, Andrew; Heim, Stephen; Patel, Maharshi; Chen, Kang; Chao, Tien-Hsin; Chow, Edward; Torres, Gilbert
2017-05-01
This article presents a neural network based multi-spectral image segmentation method. A neural network is trained on selected features of both the objects and the background in longwave (LW) infrared (IR) images. Multiple iterations of training are performed until the segmentation accuracy reaches a satisfactory level. The segmentation boundary of the LW image is then used to segment the midwave (MW) and shortwave (SW) IR images. A second neural network detects local discontinuities and refines the accuracy of the local boundaries. This article compares the neural network based segmentation method to the Wavelet-threshold and Grab-Cut methods. Test results show increased accuracy and robustness of this segmentation scheme for multi-spectral IR images.
Pre-flight and On-orbit Geometric Calibration of the Lunar Reconnaissance Orbiter Camera
NASA Astrophysics Data System (ADS)
Speyerer, E. J.; Wagner, R. V.; Robinson, M. S.; Licht, A.; Thomas, P. C.; Becker, K.; Anderson, J.; Brylow, S. M.; Humm, D. C.; Tschimmel, M.
2016-04-01
The Lunar Reconnaissance Orbiter Camera (LROC) consists of two imaging systems that provide multispectral and high resolution imaging of the lunar surface. The Wide Angle Camera (WAC) is a seven color push-frame imager with a 90∘ field of view in monochrome mode and 60∘ field of view in color mode. From the nominal 50 km polar orbit, the WAC acquires images with a nadir ground sampling distance of 75 m for each of the five visible bands and 384 m for the two ultraviolet bands. The Narrow Angle Camera (NAC) consists of two identical cameras capable of acquiring images with a ground sampling distance of 0.5 m from an altitude of 50 km. The LROC team geometrically calibrated each camera before launch at Malin Space Science Systems in San Diego, California and the resulting measurements enabled the generation of a detailed camera model for all three cameras. The cameras were mounted and subsequently launched on the Lunar Reconnaissance Orbiter (LRO) on 18 June 2009. Using a subset of the over 793000 NAC and 207000 WAC images of illuminated terrain collected between 30 June 2009 and 15 December 2013, we improved the interior and exterior orientation parameters for each camera, including the addition of a wavelength dependent radial distortion model for the multispectral WAC. These geometric refinements, along with refined ephemeris, enable seamless projections of NAC image pairs with a geodetic accuracy better than 20 meters and sub-pixel precision and accuracy when orthorectifying WAC images.
NASA Astrophysics Data System (ADS)
Deán-Ben, Xosé Luís.; Ermolayev, Vladimir; Mandal, Subhamoy; Ntziachristos, Vasilis; Razansky, Daniel
2016-03-01
Imaging plays an increasingly important role in clinical management and preclinical studies of cancer. Application of optical molecular imaging technologies, in combination with highly specific contrast agent approaches, eminently contributed to understanding of functional and histological properties of tumors and anticancer therapies. Yet, optical imaging exhibits deterioration in spatial resolution and other performance metrics due to light scattering in deep living tissues. High resolution molecular imaging at the whole-organ or whole-body scale may therefore bring additional understanding of vascular networks, blood perfusion and microenvironment gradients of malignancies. In this work, we constructed a volumetric multispectral optoacoustic tomography (vMSOT) scanner for cancer imaging in preclinical models and explored its capacity for real-time 3D intravital imaging of whole breast cancer allografts in mice. Intrinsic tissue properties, such as blood oxygenation gradients, along with the distribution of externally administered liposomes carrying clinically-approved indocyanine green dye (lipo-ICG) were visualized in order to study vascularization, probe penetration and extravasation kinetics in different regions of interest within solid tumors. The use of v-MSOT along with the application of volumetric image analysis and perfusion tracking tools for studies of pathophysiological processes within microenvironment gradients of solid tumors demonstrated superior volumetric imaging system performance with sustained competitive resolution and imaging depth suitable for investigations in preclinical cancer models.
Cartographic potential of SPOT image data
NASA Technical Reports Server (NTRS)
Welch, R.
1985-01-01
In late 1985, the SPOT (Systeme Probatoire d'Observation de la Terre) satellite is to be launched by the Ariane rocket from French Guiana. This satellite will have two High Resolution Visible (HRV) line array sensor systems which are capable of providing monoscopic and stereoscopic coverage of the earth. Cartographic applications are related to the recording of stereo image data and the acquisition of 20-m data in a multispectral mode. One of the objectives of this study involves a comparison of the suitability of SPOT and TM image data for mapping urban land use/cover. Another objective is concerned with a preliminary assessment of the potential of SPOT image data for map revision when merged with conventional map sheets converted to raster formats.
Generalization of the Lyot filter and its application to snapshot spectral imaging.
Gorman, Alistair; Fletcher-Holmes, David William; Harvey, Andrew Robert
2010-03-15
A snapshot multi-spectral imaging technique is described which employs multiple cascaded birefringent interferometers to simultaneously spectrally filter and demultiplex multiple spectral images onto a single detector array. Spectral images are recorded directly without the need for inversion and without rejection of light and so the technique offers the potential for high signal-to-noise ratio. An example of an eight-band multi-spectral movie sequence is presented; we believe this is the first such demonstration of a technique able to record multi-spectral movie sequences without the need for computer reconstruction.
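For reference, the classic Lyot filter that this work generalizes has a transmission given by a product of cos² terms, one per birefringent stage, with stage retardances typically doubling. The sketch below evaluates that textbook model; the birefringence and thickness values are illustrative, not the parameters of the instrument described.

```python
import numpy as np

def lyot_transmission(wavelength_nm, delta_n=0.0092, d0_um=50.0, stages=4):
    """Ideal Lyot filter transmission: product over stages of cos^2(pi * dn * d_k / lambda),
    with stage thicknesses doubling (d_k = d0 * 2^k). Illustrative textbook model only."""
    lam_um = np.asarray(wavelength_nm, dtype=float) / 1000.0
    t = np.ones_like(lam_um)
    for k in range(stages):
        d_k = d0_um * 2 ** k
        t *= np.cos(np.pi * delta_n * d_k / lam_um) ** 2
    return t

wl = np.linspace(450.0, 750.0, 2000)
T = lyot_transmission(wl)
print(float(wl[np.argmax(T)]))   # wavelength of one of the pass-bands
```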
A Multispectral Analysis of the Flamsteed Region of Oceanus Procellarum
NASA Astrophysics Data System (ADS)
Heather, D. J.; Dunkin, S. K.; Spudis, P. D.; Bussey, D. B. J.
1999-01-01
The Flamsteed area of Oceanus Procellarum is representative of basalts that have yet to be sampled. The area has previously been studied in detail using telescopic data, identifying seven distinct mare flows. This diversity makes the Flamsteed region an ideal candidate for Clementine multispectral studies. The region studied here is far smaller than that covered by the telescopic study, but the higher spatial resolution of the Clementine data will allow us to make a fresh interpretation of the nature of our restricted area before expanding to encompass the surrounding regions. The primary aim of this work is to use Clementine UV-VIS data to analyze flows on a smaller scale and determine the stratigraphy of the mare, using impact craters as probes to measure the thickness of mare lavas wherever possible. We used the Clementine UV-VIS data to produce a multispectral image of the Flamsteed area from 0.60N to 16.06S and 308.34E to 317.12E. The data were processed at a resolution of 200 m/pixel using the ISIS software program (available through the USGS), and the photometric coefficients tabulated. In addition to the multispectral image, a "true color" image, an FeO map, and a TiO2 map were generated using published algorithms. In conjunction with a 750-nm Clementine mosaic and Lunar Orbiter photographs, these images formed the dataset used for this analysis. For more details on the data-reduction procedure used, please contact the authors. The area studied here lies in the southeastern portion of Oceanus Procellarum, and covers approximately 134,500 square km, extending from the mare-highland boundary (to the south) up to and including the Flamsteed P ring. The number of spectrally distinct flows in the area is striking in the Clementine mosaics, ranging from high-Ti flows in and around Flamsteed P to low-Ti flows at the edges of the mare-highland boundaries. From a preliminary analysis, we have identified at least five flows in the multispectral image alone. Sunshine and Pieters found three distinct flows within the Flamsteed P ring using high resolution CCD images from a groundbased telescope. We find evidence for only two: a younger high-Ti flow overlying an older lower-Ti flow. However, we have not yet reduced the data for the most eastern part of the ring, and it is possible that further flow(s) could be found in our missing section. The low-Ti flows at some of the mare-highland contacts to the south are exceptionally bright in the 750/415 nm channel of the multispectral image. These areas correlate with intermediate FeO and TiO2 content, and seem to be the oldest flows visible on the surface, probably extending over a large area beneath the later flows. Boundaries were defined according to multispectral and albedo properties. Detailed studies of the TiO2 map and maturity data (taken through observations of crater densities from the Orbiter frames and an optical maturity image produced using the algorithm of Lucey et al.) will improve this map. Work is continuing in an attempt to delineate clearer flow boundaries. A key objective is to determine the thickness of the mare flows as one moves out from the highland boundary into Oceanus Procellarum. First-order indications of thickness can be obtained by searching for highland outcrops within the maria. The Flamsteed area shows many such outcrops, and the lavas must be quite thin close to these. A more absolute idea of basalt thickness can be obtained by calculating the depths of craters that have dug through the lavas to expose highland material below.
These craters can be identified from multispectral images and 5-point spectra. Previous work has suggested that a cyan color in the multispectral frame represents highland material, and that yellows and greens are freshly excavated basalts. However, we have recently found that a cyan color can also result from a freshly excavated high-Ti basalt. In order to differentiate between the high-Ti and highland signatures, it is necessary to look at the FeO and TiO2 frames and plot 5-point spectra to look for the absorption at 0.95 μm that is characteristic of pyroxenes in the basalts. These observations have shown there to be candidate craters in the Flamsteed region which have excavated highland material. One example crater displays a basaltic signature with a clear 0.95 μm absorption in its south wall and ejecta, while the absorption in the north wall and ejecta is far weaker. The northern deposits are also relatively low in TiO2 and FeO, and probably represent a mix of basaltic and highland material. The crater is 8 km in diameter, so it will have excavated to a depth of about 800 m (using the depth:diameter ratio of 1:10 given by Croft); this is therefore an upper limit to the thickness of the basalts at the crater's northern edge. In addition, there are several areas where craters close together excavate spectrally distinct materials. These may indicate boundaries of subsurface mare flows, and will allow for a more detailed stratigraphic picture to be constructed. We intend to map the lava flow and crater distribution across the Flamsteed region, using craters to deduce depths to the highland-mare contact where possible. Flamsteed will then be combined with adjoining areas of Oceanus Procellarum, gradually developing a complete picture of the stratigraphy and basalt thickness across the basin. This work will form part of a continuing project in which we aim to study maria across the whole Moon, providing a global perspective of lunar volcanic history. Additional information is contained in the original.
2017-01-20
This new, detailed global mosaic color map of Pluto is based on a series of three color filter images obtained by the Ralph/Multispectral Visual Imaging Camera aboard New Horizons during the NASA spacecraft's close flyby of Pluto in July 2015. The mosaic shows how Pluto's large-scale color patterns extend beyond the hemisphere facing New Horizons at closest approach, which was imaged at the highest resolution. North is up; Pluto's equator roughly bisects the band of dark red terrains running across the lower third of the map. Pluto's giant, informally named Sputnik Planitia glacier, the left half of Pluto's signature "heart" feature, is at the center of this map. http://photojournal.jpl.nasa.gov/catalog/PIA11707
Jung, Jae-Hwang; Jang, Jaeduck; Park, Yongkeun
2013-11-05
We present a novel spectroscopic quantitative phase imaging technique with a wavelength swept-source, referred to as swept-source diffraction phase microscopy (ssDPM), for quantifying the optical dispersion of microscopic individual samples. Employing the swept-source and the principle of common-path interferometry, ssDPM measures the multispectral full-field quantitative phase imaging and spectroscopic microrefractometry of transparent microscopic samples in the visible spectrum with a wavelength range of 450-750 nm and a spectral resolution of less than 8 nm. With unprecedented precision and sensitivity, we demonstrate the quantitative spectroscopic microrefractometry of individual polystyrene beads, 30% bovine serum albumin solution, and healthy human red blood cells.
Development of a Portable 3CCD Camera System for Multispectral Imaging of Biological Samples
Lee, Hoyoung; Park, Soo Hyun; Noh, Sang Ha; Lim, Jongguk; Kim, Moon S.
2014-01-01
Recent studies have suggested the need for imaging devices capable of multispectral imaging beyond the visible region, to allow for quality and safety evaluations of agricultural commodities. Conventional multispectral imaging devices lack flexibility in spectral waveband selectivity for such applications. In this paper, a recently developed portable 3CCD camera with significant improvements over existing imaging devices is presented. A beam-splitter prism assembly for 3CCD was designed to accommodate three interference filters that can be easily changed for application-specific multispectral waveband selection in the 400 to 1000 nm region. We also designed and integrated electronic components on printed circuit boards with firmware programming, enabling parallel processing, synchronization, and independent control of the three CCD sensors, to ensure the transfer of data without significant delay or data loss due to buffering. The system can stream 30 frames (3-waveband images in each frame) per second. The potential utility of the 3CCD camera system was demonstrated in the laboratory for detecting defect spots on apples. PMID:25350510
3D tensor-based blind multispectral image decomposition for tumor demarcation
NASA Astrophysics Data System (ADS)
Kopriva, Ivica; Peršin, Antun
2010-03-01
Blind decomposition of a multi-spectral fluorescent image for tumor demarcation is formulated by exploiting the tensorial structure of the image. The first contribution of the paper is the identification of the matrix of spectral responses and the 3D tensor of spatial distributions of the materials present in the image from Tucker3 or PARAFAC models of the 3D image tensor. The second contribution is a clustering-based estimation of the number of materials present in the image as well as of the matrix of their spectral profiles. The 3D tensor of the spatial distributions of the materials is recovered through 3-mode multiplication of the multi-spectral image tensor and the inverse of the matrix of spectral profiles. The tensor representation of the multi-spectral image preserves its local spatial structure, which is lost, due to the vectorization process, when matrix factorization-based decomposition methods (such as non-negative matrix factorization and independent component analysis) are used. Superior performance of the tensor-based image decomposition over matrix factorization-based decompositions is demonstrated on an experimental red-green-blue (RGB) image with known ground truth as well as on RGB fluorescent images of skin tumors (basal cell carcinoma).
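A minimal sketch of the tensor-factorization step (a non-negative PARAFAC/CP decomposition of an image treated as a rows × cols × bands tensor) is given below using the TensorLy library; the clustering-based estimation of the number of materials and the 3-mode inversion against the spectral-profile matrix described in the paper are not reproduced, and a reasonably recent TensorLy version is assumed.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

# Synthetic 3-band "multispectral" image built from two materials, as a (rows, cols, bands) tensor
rng = np.random.default_rng(0)
rows, cols, bands, n_materials = 64, 64, 3, 2
spatial = rng.random((rows, cols, n_materials))      # spatial abundance of each material
spectra = rng.random((n_materials, bands))           # spectral response of each material
image = np.einsum("ijm,mb->ijb", spatial, spectra)

cp = non_negative_parafac(tl.tensor(image), rank=n_materials, n_iter_max=200)
weights, (row_f, col_f, band_f) = cp
# band_f (bands x rank) plays the role of the spectral-response matrix;
# the outer products row_f[:, r] x col_f[:, r] give the spatial map of each material.
recon = tl.cp_to_tensor(cp)
print(float(np.linalg.norm(image - recon) / np.linalg.norm(image)))   # relative reconstruction error
```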
Single sensor that outputs narrowband multispectral images
Kong, Linghua; Yi, Dingrong; Sprigle, Stephen; Wang, Fengtao; Wang, Chao; Liu, Fuhan; Adibi, Ali; Tummala, Rao
2010-01-01
We report the work of developing a hand-held (or miniaturized), low-cost, stand-alone, real-time-operation, narrow bandwidth multispectral imaging device for the detection of early stage pressure ulcers. PMID:20210418
NASA Astrophysics Data System (ADS)
Lucey, P. G.; Lemelin, M.; Ohtake, M.; Gaddis, L. R.; Greenhagen, B. T.; Yamamoto, S.; Hare, T. M.; Taylor, J.; Martel, L.; Norman, J.
2016-12-01
We combine visible and near-IR multispectral data from the Kaguya Multiband Imager (MI) with thermal infrared multispectral data from the LRO Diviner Lunar Radiometer Experiment to produce global mineral abundance data at 60-m resolution. The base data set applies a radiative transfer mixing model to the Kaguya MI data to produce global maps of plagioclase, low-Ca pyroxene, high-Ca pyroxene, and olivine. Diviner thermal multispectral data are highly sensitive to the ratio of plagioclase to mafic minerals and provide independent data to both validate and improve confidence in the derived mineral abundances. The data set is validated using a new set of mineral abundances derived for lunar soils from all lunar sampling sites resolvable using MI data. Modal abundances are derived from X-ray diffraction patterns analyzed with quantitative Rietveld analysis. Modal abundances were derived from 124 soils from 47 individual Apollo sampling stations. Some individual soil locations within sampling stations can be resolved, increasing the total number of resolved locations to 56. With quantitative mineral abundances we can examine the distribution of classically defined lunar rock types in unprecedented detail. In the Feldspathic Highlands Terrane (FHT) the crust is dominated in surface area by noritic anorthosite, consistent with a highly mixed composition. Classically defined anorthosite is widespread in the FHT, but much less abundant than the mafic anorthosites. The Procellarum KREEP Terrane and the South Pole-Aitken Basin are more noritic than the FHT, as previously recognized, with abundant norite exposed. While dunite is not found, varieties of troctolitic rocks are widespread in basin rings, especially Crisium, Humorum, and Moscoviense, and also occur in the core of the FHT. Only troctolites and anorthosites appear consistently concentrated in basin rings. We have barely scratched the surface of the full-resolution data, but we have completed an inventory of rock types on basin rings and find that in most cases they are dominated by mixed anorthositic rocks similar to the rest of the crust, suggesting the rings may be partly mantled by background noritic anorthosite. The major exception is Orientale, with its highly anorthositic inner ring.
Low SWaP multispectral sensors using dichroic filter arrays
NASA Astrophysics Data System (ADS)
Dougherty, John; Varghese, Ron
2015-06-01
The benefits of multispectral imaging are well established in a variety of applications including remote sensing, authentication, satellite and aerial surveillance, machine vision, biomedical imaging, and other scientific and industrial uses. However, many of the potential solutions require more compact, robust, and cost-effective cameras to realize these benefits. The next generation of multispectral sensors and cameras needs to deliver improvements in size, weight, power, portability, and spectral band customization to support widespread deployment for a variety of purpose-built aerial, unmanned, and scientific applications. A novel implementation uses micro-patterning of dichroic filters into Bayer and custom mosaics, enabling true real-time multispectral imaging with simultaneous multi-band image acquisition. Consistent with color image processing, the individual spectral channels are de-mosaiced, with each channel providing an image of the field of view. This approach can be implemented across a variety of wavelength ranges and on a variety of detector types including linear, area, silicon, and InGaAs. This dichroic filter array approach can also reduce payloads and increase range for unmanned systems, with the capability to support both handheld and autonomous systems. Recent examples and results of 4-band RGB + NIR dichroic filter arrays in multispectral cameras are discussed. Benefits and tradeoffs of multispectral sensors using dichroic filter arrays are compared with alternative approaches, including their passivity, spectral range, customization options, and scalable production.
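As an illustration of how such a mosaic is de-mosaiced into per-band images, the sketch below splits a raw frame with an assumed 2×2 R/G/B/NIR layout by strided sampling (nearest-neighbour); the actual filter layout and interpolation used in the cameras discussed are not specified here.

```python
import numpy as np

# Assumed 2x2 mosaic layout for illustration (not a vendor specification):
#   (row % 2, col % 2) -> band name
LAYOUT = {(0, 0): "R", (0, 1): "G", (1, 0): "NIR", (1, 1): "B"}

def demosaic_nearest(raw):
    """Split a raw mosaic frame into quarter-resolution per-band images."""
    return {name: raw[r::2, c::2] for (r, c), name in LAYOUT.items()}

raw = np.random.randint(0, 4096, size=(480, 640), dtype=np.uint16)   # simulated 12-bit sensor frame
bands = demosaic_nearest(raw)
print({k: v.shape for k, v in bands.items()})   # each band is 240 x 320
```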
NASA Astrophysics Data System (ADS)
Leckie, Donald G.; Cloney, Ed; Joyce, Steve P.
2005-05-01
Jack pine budworm (Choristoneura pinus pinus (Free.)) is a native insect defoliator of mainly jack pine (Pinus banksiana Lamb.) in North America east of the Rocky Mountains. Periodic outbreaks of this insect, which generally last two to three years, can cause growth loss and mortality and have an important impact ecologically and economically in terms of timber production and harvest. The jack pine budworm prefers to feed on current-year needles. Its characteristic feeding habits cause discolouration or reddening of the canopy. This red colouration is used to map the distribution and intensity of defoliation that has taken place that year (current defoliation). An accurate and consistent map of the distribution and intensity of budworm defoliation (as represented by the red discolouration) at the stand and within-stand level is desirable. Automated classification of multispectral imagery, such as is available from airborne and new high-resolution satellite systems, was explored as a viable tool for objectively classifying current discolouration. Airborne multispectral imagery was acquired at a 2.5 m resolution with the Multispectral Electro-optical Imaging Sensor (MEIS). It recorded imagery in six nadir-looking spectral bands specifically designed to detect discolouration caused by budworm; a near-infrared band viewing forward at 35° was also used. A 2200 nm middle infrared image was acquired with a Daedalus scanner. Training and test areas of different levels of discolouration were created based on field observations, and a maximum likelihood supervised classification was used to estimate four classes of discolouration (nil-trace, light, moderate and severe). Good discrimination was achieved with an overall accuracy of 84% for the four discolouration levels. The moderate discolouration class was the poorest at 73%, because of confusion with both the severe and light classes. Accuracy on a stand basis was also good, and regional and within-stand discolouration patterns were portrayed well. Only three or four well-placed spectral bands were needed for a good classification. A narrow red band, a near-infrared band, and a shortwave infrared band were most useful. A forward-looking band did not improve discolouration estimation, but further testing is needed to confirm this result. This method of detecting and classifying discolouration appears to provide a mapping capability useful for conducting jack pine budworm discolouration surveys and integrating this information into decision support systems, forest inventory, growth and yield predictions and the forest management decision-making process.
Geometric Calibration and Validation of Kompsat-3A AEISS-A Camera
Seo, Doocheon; Oh, Jaehong; Lee, Changno; Lee, Donghan; Choi, Haejin
2016-01-01
Kompsat-3A, which was launched on 25 March 2015, is a sister spacecraft of the Kompsat-3 developed by the Korea Aerospace Research Institute (KARI). Kompsat-3A’s AEISS-A (Advanced Electronic Image Scanning System-A) camera is similar to Kompsat-3’s AEISS but it was designed to provide PAN (Panchromatic) resolution of 0.55 m, MS (multispectral) resolution of 2.20 m, and TIR (thermal infrared) at 5.5 m resolution. In this paper we present the geometric calibration and validation work of Kompsat-3A that was completed last year. A set of images over the test sites was taken for two months and was utilized for the work. The workflow includes the boresight calibration, CCDs (charge-coupled devices) alignment and focal length determination, the merge of two CCD lines, and the band-to-band registration. Then, the positional accuracies without any GCPs (ground control points) were validated for hundreds of test sites across the world using various image acquisition modes. In addition, we checked the planimetric accuracy by bundle adjustments with GCPs. PMID:27783054
Photographic techniques for enhancing ERTS MSS data for geologic information
NASA Technical Reports Server (NTRS)
Yost, E.; Geluso, W.; Anderson, R.
1974-01-01
Satellite multispectral black-and-white photographic negatives of Luna County, New Mexico, obtained by ERTS on 15 August and 2 September 1973, were precisely reprocessed into positive images and analyzed in an additive color viewer. In addition, an isoluminous (uniform brightness) color rendition of the image was constructed. The isoluminous technique emphasizes subtle differences between multispectral bands by greatly enhancing the color of the superimposed composite of all bands and eliminating the effects of brightness caused by sloping terrain. Basaltic lava flows were more accurately displayed in the precision processed multispectral additive color ERTS renditions than on existing state geological maps. Malpais lava flows and small basaltic occurrences not appearing on existing geological maps were identified in ERTS multispectral color images.
FRIT characterized hierarchical kernel memory arrangement for multiband palmprint recognition
NASA Astrophysics Data System (ADS)
Kisku, Dakshina R.; Gupta, Phalguni; Sing, Jamuna K.
2015-10-01
In this paper, we present a hierarchical kernel associative memory (H-KAM) based computational model with Finite Ridgelet Transform (FRIT) representation for multispectral palmprint recognition. To characterize a multispectral palmprint image, the Finite Ridgelet Transform is used to achieve a very compact and distinctive representation of linear singularities while it also captures the singularities along lines and edges. The proposed system makes use of the Finite Ridgelet Transform to represent the multispectral palmprint image, which is then modeled by Kernel Associative Memories. Finally, the recognition scheme is thoroughly tested with the benchmark CASIA multispectral palmprint database. For recognition purposes, a Bayesian classifier is used. The experimental results exhibit the robustness of the proposed system under different wavelengths of palm images.
NASA Technical Reports Server (NTRS)
Goetz, A. F. H. (Principal Investigator); Abrams, M. J.; Gillespie, A. R.; Siegal, B. S.; Elston, D. P.; Lucchitta, I.; Wu, S. S. C.; Sanchez, A.; Dipaola, W. D.; Schafer, F. J.
1976-01-01
The author has identified the following significant results. It was found that based on resolution, the Skylab S190A products were superior to LANDSAT images. Based on measurements of shoreline features in Lake Mead, S190A images had 1.5-3 times greater resolution than LANDSAT. In general, the higher resolution of the Skylab data yielded better discrimination among rock units, but in the case of structural features, lower sun angle LANDSAT images (50 deg) were superior to higher sun angle Skylab images (77 deg). The most valuable advantage of the Skylab over the LANDSAT image products is the capability of producing stereo images. Field spectral reflectance measurements on the Coconino Plateau were made in an effort to determine the best spectral bands for discrimination of the six geologic units in question, and these bands were 1.3, 1.2, 1.0, and 0.5 microns. The EREP multispectral scanner yielded data with a low signal-to-noise ratio, which limited its usefulness for image enhancement work. Sites that were studied in Arizona were Shivwits Plateau, Verde Valley, Coconino Plateau, and Red Lake. Thematic maps produced by the three classification algorithms analyzed were not as accurate as the maps produced by photointerpretation of composites of enhanced images.
Optimal color coding for compression of true color images
NASA Astrophysics Data System (ADS)
Musatenko, Yurij S.; Kurashov, Vitalij N.
1998-11-01
In this paper we present a method that improves lossy compression of true color or other multispectral images. The essence of the method is to project the initial color planes into the Karhunen-Loeve (KL) basis, which gives a completely decorrelated representation of the image, and to compress the basis functions instead of the planes. To do that, a new fast algorithm for true KL basis construction with low memory consumption is suggested, and our recently proposed scheme for finding optimal losses of KL functions during compression is used. Compared to standard JPEG compression of CMYK images, the method provides a PSNR gain from 0.2 to 2 dB for commonly used compression ratios. Experimental results are obtained for high-resolution CMYK images. It is demonstrated that the presented scheme can work on common hardware.
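The central step of the method above is the projection of the color planes onto a Karhunen-Loeve basis so that the planes are decorrelated before compression. A minimal sketch of that transform (and its exact inverse) is given below; the fast low-memory basis construction and the optimal loss allocation proposed by the authors are not reproduced, and the per-plane JPEG coding is omitted.

    import numpy as np

    def kl_transform(planes):
        """Decorrelate image planes (bands, H, W) with the Karhunen-Loeve transform."""
        bands, h, w = planes.shape
        x = planes.reshape(bands, -1).astype(np.float64)
        mean = x.mean(axis=1, keepdims=True)
        cov = np.cov(x - mean)                       # inter-band covariance matrix
        evals, evecs = np.linalg.eigh(cov)           # KL basis = covariance eigenvectors
        order = np.argsort(evals)[::-1]              # strongest component first
        basis = evecs[:, order]
        kl_planes = basis.T @ (x - mean)             # decorrelated planes
        return kl_planes.reshape(bands, h, w), basis, mean

    def kl_inverse(kl_planes, basis, mean):
        bands, h, w = kl_planes.shape
        x = basis @ kl_planes.reshape(bands, -1) + mean
        return x.reshape(bands, h, w)

    # Toy 4-band (e.g. CMYK-like) image.
    rng = np.random.default_rng(0)
    img = rng.random((4, 64, 64))
    kl, basis, mean = kl_transform(img)
    assert np.allclose(kl_inverse(kl, basis, mean), img)   # transform is invertible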
SkySat-1: very high-resolution imagery from a small satellite
NASA Astrophysics Data System (ADS)
Murthy, Kiran; Shearn, Michael; Smiley, Byron D.; Chau, Alexandra H.; Levine, Josh; Robinson, M. Dirk
2014-10-01
This paper presents details of the SkySat-1 mission, which is the first microsatellite-class commercial earth-observation system to generate sub-meter resolution panchromatic imagery, in addition to sub-meter resolution 4-band pan-sharpened imagery. SkySat-1 was built and launched for an order of magnitude lower cost than similarly performing missions. The low-cost design enables the deployment of a large imaging constellation that can provide imagery with both high temporal resolution and high spatial resolution. One key enabler of the SkySat-1 mission was simplifying the spacecraft design and instead relying on ground-based image processing to achieve high performance at the system level. The imaging instrument consists of a custom-designed high-quality optical telescope and commercially-available high frame rate CMOS image sensors. While each individually captured raw image frame shows moderate quality, ground-based image processing algorithms improve the raw data by combining data from multiple frames to boost image signal-to-noise ratio (SNR) and decrease the ground sample distance (GSD) in a process Skybox calls "digital TDI". Careful quality assessment and tuning of the spacecraft, payload, and algorithms was necessary to generate high-quality panchromatic, multispectral, and pan-sharpened imagery. Furthermore, the framing sensor configuration enabled the first commercial High-Definition full-frame rate panchromatic video to be captured from space, with approximately 1 meter ground sample distance. Details of the SkySat-1 imaging instrument and ground-based image processing system are presented, as well as an overview of the work involved with calibrating and validating the system. Examples of raw and processed imagery are shown, and the raw imagery is compared to pre-launch simulated imagery used to tune the image processing algorithms.
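The details of Skybox's "digital TDI" processing are not given in the abstract; the toy sketch below only illustrates the underlying principle that averaging N co-registered, individually noisy frames improves SNR by roughly sqrt(N). The scene, noise level, and frame count are arbitrary, and real processing must also register the frames and handle motion.

    import numpy as np

    rng = np.random.default_rng(1)
    truth = np.tile(np.linspace(0.2, 0.8, 128), (128, 1))    # synthetic scene radiance
    n_frames, read_noise = 16, 0.1

    # Individually noisy frames (assumed perfectly co-registered for this toy example).
    frames = truth + rng.normal(0.0, read_noise, size=(n_frames, *truth.shape))

    single_snr = truth.mean() / read_noise
    stacked = frames.mean(axis=0)                             # combine the frames
    stacked_snr = truth.mean() / (truth - stacked).std()

    print(f"single-frame SNR ~ {single_snr:.1f}, stacked SNR ~ {stacked_snr:.1f}")
    # Stacking N frames improves SNR by roughly sqrt(N) when the noise is uncorrelated.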
Schwartzkopf, Wade C; Bovik, Alan C; Evans, Brian L
2005-12-01
Traditional chromosome imaging has been limited to grayscale images, but recently a 5-fluorophore combinatorial labeling technique (M-FISH) was developed wherein each class of chromosomes binds with a different combination of fluorophores. This results in a multispectral image, where each class of chromosomes has distinct spectral components. In this paper, we develop new methods for automatic chromosome identification by exploiting the multispectral information in M-FISH chromosome images and by jointly performing chromosome segmentation and classification. We (1) develop a maximum-likelihood hypothesis test that uses multispectral information, together with conventional criteria, to select the best segmentation possibility; (2) use this likelihood function to combine chromosome segmentation and classification into a robust chromosome identification system; and (3) show that the proposed likelihood function can also be used as a reliable indicator of errors in segmentation, errors in classification, and chromosome anomalies, which can be indicators of radiation damage, cancer, and a wide variety of inherited diseases. We show that the proposed multispectral joint segmentation-classification method outperforms past grayscale segmentation methods when decomposing touching chromosomes. We also show that it outperforms past M-FISH classification techniques that do not use segmentation information.
Lossless, Multi-Spectral Data Compressor for Improved Compression for Pushbroom-Type Instruments
NASA Technical Reports Server (NTRS)
Klimesh, Matthew
2008-01-01
A low-complexity lossless algorithm for compression of multispectral data has been developed that takes into account the properties of pushbroom-type multispectral imagers in order to make the compression more effective.
Image processing using Gallium Arsenide (GaAs) technology
NASA Technical Reports Server (NTRS)
Miller, Warner H.
1989-01-01
The need to increase the information return from space-borne imaging systems has grown in the past decade. The use of multi-spectral data has resulted in the need for finer spatial resolution and greater spectral coverage. Onboard signal processing will be necessary in order to utilize the available Tracking and Data Relay Satellite System (TDRSS) communication channel at high efficiency. A generally recognized approach to the increased efficiency of channel usage is through data compression techniques. The compression technique implemented is a differential pulse code modulation (DPCM) scheme with a non-uniform quantizer. The need to advance the state-of-the-art of onboard processing was recognized, and a GaAs integrated circuit technology was chosen. An Adaptive Programmable Processor (APP) chip set was developed which is based on an 8-bit slice general processor. The reason for choosing the compression technique for the Multi-spectral Linear Array (MLA) instrument is described. A description is also given of the GaAs integrated circuit chip set, which demonstrates that data compression can be performed onboard in real time at data rates on the order of 500 Mb/s.
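The abstract names the compression scheme (DPCM with a non-uniform quantizer) without specifying it; the sketch below shows the general idea on a single image line, using previous-sample prediction and a hypothetical quantizer table. It is not the MLA flight algorithm.

    import numpy as np

    # Hypothetical non-uniform quantizer: fine levels for small prediction errors,
    # coarse levels for large ones (the flight quantizer design is not given here).
    LEVELS = np.array([-48, -24, -12, -6, -2, 0, 2, 6, 12, 24, 48])

    def dpcm_encode(samples):
        """Previous-sample-prediction DPCM with a non-uniform quantizer."""
        codes, recon = [], 0
        for s in samples:
            err = int(s) - recon                          # prediction error
            idx = int(np.argmin(np.abs(LEVELS - err)))    # nearest quantizer level
            codes.append(idx)
            recon += int(LEVELS[idx])                     # decoder-tracked reconstruction
        return codes

    def dpcm_decode(codes):
        recon, out = 0, []
        for idx in codes:
            recon += int(LEVELS[idx])
            out.append(recon)
        return out

    line = [10, 12, 15, 20, 60, 61, 59, 58]               # one image line of pixel values
    print(dpcm_decode(dpcm_encode(line)))                 # approximate reconstruction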
Satellite Map of Port-au-Prince, Haiti-2010-Natural Color
Cole, Christopher J.; Sloan, Jeff
2010-01-01
The U.S. Geological Survey produced 1:24,000-scale post-earthquake image base maps incorporating high- and medium-resolution remotely sensed imagery following the 7.0 magnitude earthquake near the capital city of Port au Prince, Haiti, on January 12, 2010. Commercial 2.4-meter multispectral QuickBird imagery was acquired by DigitalGlobe on January 15, 2010, following the initial earthquake. Ten-meter multispectral ALOS AVNIR-2 imagery was collected by the Japanese Space Agency (JAXA) on January 12, 2010. These data were acquired under the Remote Sensing International Charter, a global team of space and satellite agencies that provide timely imagery in support of emergency response efforts worldwide. The images shown on this map were employed to support earthquake response efforts, specifically for use in determining ground deformation, damage assessment, and emergency management decisions. The raw, unprocessed imagery was geo-corrected, mosaicked, and reproduced onto a cartographic 1:24,000-scale base map. These maps are intended to provide a temporally current representation of post-earthquake ground conditions, which may be of use to decision makers and to the general public.
Satellite Map of Port-au-Prince, Haiti-2010-Infrared
Cole, Christopher J.; Sloan, Jeff
2010-01-01
The U.S. Geological Survey produced 1:24,000-scale post-earthquake image base maps incorporating high- and medium-resolution remotely sensed imagery following the 7.0 magnitude earthquake near the capital city of Port au Prince, Haiti, on January 12, 2010. Commercial 2.4-meter multispectral QuickBird imagery was acquired by DigitalGlobe on January 15, 2010, following the initial earthquake. Ten-meter multispectral ALOS AVNIR-2 imagery was collected by the Japanese Space Agency (JAXA) on January 12, 2010. These data were acquired under the Remote Sensing International Charter, a global team of space and satellite agencies that provide timely imagery in support of emergency response efforts worldwide. The images shown on this map were employed to support earthquake response efforts, specifically for use in determining ground deformation, damage assessment, and emergency management decisions. The raw, unprocessed imagery was geo-corrected, mosaicked, and reproduced onto a cartographic 1:24,000-scale base map. These maps are intended to provide a temporally current representation of post-earthquake ground conditions, which may be of use to decision makers and to the general public.
Liu, Wen-Lou; Wang, Lin-Wei; Chen, Jia-Mei; Yuan, Jing-Ping; Xiang, Qing-Ming; Yang, Gui-Fang; Qu, Ai-Ping; Liu, Juan; Li, Yan
2016-04-01
Multispectral imaging (MSI), which combines imaging and spectroscopy and is relatively novel to the field of histopathology, has been used in multidisciplinary biomedical research. We analyzed and compared the utility of multispectral (MS) versus conventional red-green-blue (RGB) images for immunohistochemistry (IHC) staining to explore the advantages of MSI in clinical-pathological diagnosis. The MS images acquired of the IHC-stained membranous marker human epidermal growth factor receptor 2 (HER2), cytoplasmic marker cytokeratin5/6 (CK5/6), and nuclear marker estrogen receptor (ER) have higher resolution, stronger contrast, and more accurate segmentation than the RGB images. The total signal optical density (OD) values for each biomarker were higher in MS images than in RGB images (all P < 0.05). Moreover, receiver operator characteristic (ROC) analysis revealed that a greater area under the curve (AUC), higher sensitivity, and higher specificity in the evaluation of the HER2 gene were achieved by MS images (AUC = 0.91, 89.1 %, 83.2 %) than by RGB images (AUC = 0.87, 84.5 %, 81.8 %). There was no significant difference between quantitative results of RGB images and clinico-pathological characteristics (P > 0.05). However, by quantifying MS images, the total signal OD values of HER2 positive expression were correlated with lymph node status and histological grades (P = 0.02 and 0.04). Additionally, the consistency test results indicated the inter-observer agreement was more robust in MS images for HER2 (inter-class correlation coefficient (ICC) = 0.95, r_s = 0.94), CK5/6 (ICC = 0.90, r_s = 0.88), and ER (ICC = 0.94, r_s = 0.94) (all P < 0.001) than that in RGB images for HER2 (ICC = 0.91, r_s = 0.89), CK5/6 (ICC = 0.85, r_s = 0.84), and ER (ICC = 0.90, r_s = 0.89) (all P < 0.001). Our results suggest that the application of MS images in quantitative IHC analysis could achieve higher accuracy and reliability and provide more information on protein expression in relation to clinico-pathological characteristics versus conventional RGB images. It may become an optimal IHC digital imaging system used in quantitative pathology.
Selection of optimal multispectral imaging system parameters for small joint arthritis detection
NASA Astrophysics Data System (ADS)
Dolenec, Rok; Laistler, Elmar; Stergar, Jost; Milanic, Matija
2018-02-01
Early detection and treatment of arthritis is essential for a successful outcome of the treatment, but it has proven to be very challenging with existing diagnostic methods. Novel methods based on the optical imaging of the affected joints are becoming an attractive alternative. A non-contact multispectral imaging (MSI) system for imaging of small joints of human hands and feet is being developed. In this work, a numerical simulation of the MSI system is presented. The purpose of the simulation is to determine the optimal design parameters. Inflamed and unaffected human joint models were constructed with a realistic geometry and tissue distributions, based on a MRI scan of a human finger with a spatial resolution of 0.2 mm. The light transport simulation is based on a weighted-photon 3D Monte Carlo method utilizing CUDA GPU acceleration. A uniform illumination of the finger within the 400-1100 nm spectral range was simulated, and the photons exiting the joint were recorded using different acceptance angles. From the obtained reflectance and transmittance images, the spectral and spatial features most indicative of inflammation were identified. The optimal acceptance angle and spectral bands were determined. This study demonstrates that proper selection of MSI system parameters critically affects the ability of an MSI system to discriminate between unaffected and inflamed joints. The presented system design optimization approach could be applied to other pathologies.
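A hedged, much simplified illustration of the weighted-photon Monte Carlo approach mentioned above: a random walk in a homogeneous semi-infinite medium that tallies diffuse reflectance. The optical properties are placeholder values rather than joint-tissue values, surface refraction is ignored, and there is no 3D anatomy or GPU acceleration as in the described simulation.

    import numpy as np

    rng = np.random.default_rng(0)
    mu_a, mu_s, g = 0.5, 10.0, 0.9     # placeholder optical properties (1/mm), not joint tissue
    mu_t = mu_a + mu_s
    n_photons = 5000

    def scatter(u, g):
        """Sample a new direction from the Henyey-Greenstein phase function."""
        if g == 0.0:
            cos_t = 2 * rng.random() - 1
        else:
            s = (1 - g * g) / (1 - g + 2 * g * rng.random())
            cos_t = (1 + g * g - s * s) / (2 * g)
        sin_t = np.sqrt(max(0.0, 1 - cos_t * cos_t))
        phi = 2 * np.pi * rng.random()
        ux, uy, uz = u
        if abs(uz) > 0.99999:          # avoid division by zero near the pole
            return np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), np.sign(uz) * cos_t])
        d = np.sqrt(1 - uz * uz)
        return np.array([
            sin_t * (ux * uz * np.cos(phi) - uy * np.sin(phi)) / d + ux * cos_t,
            sin_t * (uy * uz * np.cos(phi) + ux * np.sin(phi)) / d + uy * cos_t,
            -sin_t * np.cos(phi) * d + uz * cos_t,
        ])

    reflected = 0.0
    for _ in range(n_photons):
        pos, u, w = np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0   # launch into the tissue
        while w > 1e-3:
            pos = pos + u * (-np.log(rng.random()) / mu_t)        # free-flight step
            if pos[2] < 0.0:                                      # crossed the surface: escapes
                reflected += w
                break
            w *= mu_s / mu_t                                      # deposit absorbed weight fraction
            u = scatter(u, g)

    print(f"diffuse reflectance ~ {reflected / n_photons:.3f}")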
NASA Technical Reports Server (NTRS)
Hook, Simon
2011-01-01
The Prototype HyspIRI Thermal Infrared Radiometer (PHyTIR) is being developed as part of the risk reduction activities associated with the Hyperspectral Infrared Imager (HyspIRI). The HyspIRI mission was recommended by the National Research Council Decadal Survey and includes a visible shortwave infrared (SWIR) pushbroom spectrometer and a multispectral whiskbroom thermal infrared (TIR) imager. Data from the HyspIRI mission will be used to address key science questions related to the Solid Earth and Carbon Cycle and Ecosystems focus areas of the NASA Science Mission Directorate. The HyspIRI TIR system will have 60 m ground resolution, better than 200 mK noise equivalent delta temperature (NEDT), and 0.5 C absolute temperature resolution, with a 5-day repeat from LEO orbit. PHyTIR addresses the technology readiness level (TRL) of certain key subsystems of the TIR imager, primarily the detector assembly and scanning mechanism. PHyTIR will use Mercury Cadmium Telluride (MCT) technology at the focal plane and operate in time delay integration mode. A custom read out integrated circuit (ROIC) will provide the high speed readout, thereby allowing the high data rates needed for the 5-day repeat. PHyTIR will also demonstrate a newly developed interferometric metrology system. This system will provide an absolute measurement of the scanning mirror to an order of magnitude better than conventional optical encoders. This will minimize the reliance on ground control points, thereby minimizing post-processing (e.g. geo-rectification computations).
Durning, Laura E.; Sankey, Joel B.; Davis, Philip A.; Sankey, Temuulen T.
2016-12-14
In May 2013, the U.S. Geological Survey’s Grand Canyon Monitoring and Research Center acquired airborne multispectral high-resolution data for the Colorado River in the Grand Canyon, Arizona. The image data, which consist of four color bands (blue, green, red, and near-infrared) with a ground resolution of 20 centimeters, are available to the public as 16-bit geotiff files at http://dx.doi.org/10.5066/F7TX3CHS. The images are projected in the State Plane map projection, using the central Arizona zone (202) and the North American Datum of 1983. The assessed accuracy for these data is based on 91 ground-control points and is reported at the 95-percent confidence level as 0.64 meter (m) and a root mean square error of 0.36 m. The primary intended uses of this dataset are for maps to support field data collection and simple river navigation; high-spatial-resolution change detection of sandbars, other geomorphic landforms, riparian vegetation, and backwater and nearshore habitats; and other ecosystem-wide mapping.
Multispectral Landsat images of Antarctica
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucchitta, B.K.; Bowell, J.A.; Edwards, K.L.
1988-01-01
The U.S. Geological Survey has a program to map Antarctica by using colored, digitally enhanced Landsat multispectral scanner images to increase existing map coverage and to improve upon previously published Landsat maps. This report is a compilation of images and image mosaics that cover four complete and two partial 1:250,000-scale quadrangles of the McMurdo Sound region.
USDA-ARS?s Scientific Manuscript database
This research developed a multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet/blue LED excitation for detection of fecal contamination on Golden Delicious apples. Using a hyperspectral line-scan imaging system consisting of an EMCCD camera, spectrograph, an...
Detection of sudden death syndrome using a multispectral imaging sensor
USDA-ARS?s Scientific Manuscript database
Sudden death syndrome (SDS), caused by the fungus Fusarium solani f. sp. glycines, is a widespread mid- to late-season disease with distinctive foliar symptoms. This paper reported the development of an image analysis based method to detect SDS using a multispectral image sensor. A hue, saturation a...
Hu, Xin; Wen, Long; Yu, Yan; Cumming, David R. S.
2016-01-01
The increasing miniaturization and resolution of image sensors bring challenges to conventional optical elements such as spectral filters and polarizers, the properties of which are determined mainly by the materials used, including dye polymers. Recent developments in spectral filtering and optical manipulation techniques based on nanophotonics have opened up the possibility of an alternative method to control light spectrally and spatially. By integrating these technologies into image sensors, it will become possible to achieve high compactness, improved process compatibility, robust stability and tunable functionality. In this Review, recent representative achievements in nanophotonic image sensors are presented and analyzed, including image sensors with nanophotonic color filters and polarizers, metamaterial-based THz image sensors, filter-free nanowire image sensors and nanostructure-based multispectral image sensors. This novel combination of cutting-edge photonics research and well-developed commercial products may not only lead to an important application of nanophotonics but also offer great potential for next-generation image sensors beyond Moore's Law expectations. PMID:27239941
Radiometric sensitivity comparisons of multispectral imaging systems
NASA Technical Reports Server (NTRS)
Lu, Nadine C.; Slater, Philip N.
1989-01-01
Multispectral imaging systems provide much of the basic data used by the land and ocean civilian remote-sensing community. There are numerous multispectral imaging systems which have been and are being developed. A common way to compare the radiometric performance of these systems is to examine their noise-equivalent change in reflectance, NE Delta-rho. The NE Delta-rho of a system is the reflectance difference that is equal to the noise in the recorded signal. A comparison is made of the noise equivalent change in reflectance of seven different multispectral imaging systems (AVHRR, AVIRIS, ETM, HIRIS, MODIS-N, SPOT-1 HRV, and TM) for a set of three atmospheric conditions (continental aerosol with 23-km visibility, continental aerosol with 5-km visibility, and a Rayleigh atmosphere), five values of ground reflectance (0.01, 0.10, 0.25, 0.50, and 1.00), a nadir viewing angle, and a solar zenith angle of 45 deg.
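As a rough sketch of how a noise-equivalent change in reflectance can be computed: for a Lambertian target with the atmosphere neglected, the at-sensor radiance is L = rho * E_sun * cos(theta_s) / (pi * d^2), so NE Delta-rho = NE Delta-L / (dL/d rho). The values below are placeholders; the comparison in the paper additionally accounts for path radiance and atmospheric transmittance.

    import numpy as np

    # Placeholder values; the full calculation includes atmospheric path radiance
    # and transmittance, which this simplified sketch ignores.
    E_sun = 1850.0                 # exoatmospheric solar irradiance in the band (W m^-2 um^-1)
    theta_s = np.deg2rad(45.0)     # solar zenith angle used in the comparison
    d = 1.0                        # Earth-Sun distance (AU)
    NE_dL = 0.05                   # sensor noise-equivalent radiance (W m^-2 sr^-1 um^-1)

    # For a Lambertian target with no atmosphere: L = rho * E_sun * cos(theta) / (pi d^2)
    dL_drho = E_sun * np.cos(theta_s) / (np.pi * d**2)
    NE_drho = NE_dL / dL_drho
    print(f"NE delta-rho ~ {NE_drho:.4f}")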
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.
2004-01-01
The accuracy of traditional multispectral maximum-likelihood image classification is limited by the skewed statistical distributions of reflectances from the complex heterogeneous mixture of land cover types in urban areas. This work examines the utility of local variance, fractal dimension and Moran's I index of spatial autocorrelation in segmenting multispectral satellite imagery. Tools available in the Image Characterization and Modeling System (ICAMS) were used to analyze Landsat 7 imagery of Atlanta, Georgia. Although segmentation of panchromatic images is possible using indicators of spatial complexity, different land covers often yield similar values of these indices. Better results are obtained when a surface of local fractal dimension or spatial autocorrelation is combined as an additional layer in a supervised maximum-likelihood multispectral classification. The addition of fractal dimension measures is particularly effective at resolving land cover classes within urbanized areas, as compared to per-pixel spectral classification techniques.
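A sketch of the general idea of appending a spatial-complexity surface to the spectral bands before supervised classification, here using a moving-window Moran's I with rook weights. This is not the ICAMS implementation, and the window size and weighting scheme are arbitrary choices.

    import numpy as np

    def morans_i(window):
        """Global Moran's I of a small 2D window with rook (4-neighbour) weights."""
        x = window - window.mean()
        denom = (x * x).sum()
        if denom == 0:
            return 0.0
        num, w_sum = 0.0, 0.0
        rows, cols = window.shape
        for i in range(rows):
            for j in range(cols):
                for di, dj in ((1, 0), (0, 1)):          # each neighbour pair visited once
                    ni, nj = i + di, j + dj
                    if ni < rows and nj < cols:
                        num += 2 * x[i, j] * x[ni, nj]   # symmetric weights
                        w_sum += 2
        n = window.size
        return (n / w_sum) * (num / denom)

    def morans_i_surface(band, win=7):
        """Moving-window Moran's I surface to append as an extra classification layer."""
        half = win // 2
        out = np.zeros_like(band, dtype=float)
        padded = np.pad(band.astype(float), half, mode="reflect")
        for r in range(band.shape[0]):
            for c in range(band.shape[1]):
                out[r, c] = morans_i(padded[r:r + win, c:c + win])
        return out

    band = np.random.default_rng(0).random((64, 64))     # stand-in for one Landsat band
    texture_layer = morans_i_surface(band)               # stack with the spectral bands before classification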
Automatic Near-Real-Time Image Processing Chain for Very High Resolution Optical Satellite Data
NASA Astrophysics Data System (ADS)
Ostir, K.; Cotar, K.; Marsetic, A.; Pehani, P.; Perse, M.; Zaksek, K.; Zaletelj, J.; Rodic, T.
2015-04-01
In response to the increasing need for automatic and fast satellite image processing, SPACE-SI has developed and implemented a fully automatic image processing chain, STORM, that performs all processing steps from sensor-corrected optical images (level 1) to web-delivered map-ready images and products without operator intervention. Initial development was tailored to high-resolution RapidEye images, and all crucial and most challenging parts of the planned full processing chain were developed: a module for automatic image orthorectification based on a physical sensor model and supported by an algorithm for automatic detection of ground control points (GCPs); an atmospheric correction module; a topographic correction module that combines a physical approach with the Minnaert method and utilizes an anisotropic illumination model; and modules for high-level product generation. Various parts of the chain were also implemented for WorldView-2, THEOS, Pleiades, SPOT 6, Landsat 5-8, and PROBA-V. Support for a full-frame sensor currently under development by SPACE-SI is planned. This paper focuses on the adaptation of the STORM processing chain to very high resolution multispectral images. The development concentrated on the sub-module for automatic detection of GCPs. The initially implemented two-step algorithm, which worked only with rasterized vector roads and delivered GCPs with sub-pixel accuracy for the RapidEye images, was improved with the introduction of a third step: super-fine positioning of each GCP based on a reference raster chip. The added step exploits the high spatial resolution of the reference raster to improve the final matching results and to achieve pixel accuracy also on very high resolution optical satellite data.
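The "super-fine positioning" step is described only at a high level; one plausible way to implement chip-based refinement is to search a small neighbourhood around the approximate GCP for the position that maximizes normalized cross-correlation with the reference raster chip, as sketched below. Function names and the search radius are illustrative, not taken from STORM.

    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation of two equally sized patches."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return 0.0 if denom == 0 else float((a * b).sum() / denom)

    def refine_gcp(image, chip, approx_rc, search=5):
        """Refine an approximate GCP position by matching a reference chip.

        approx_rc: (row, col) of the chip centre predicted by the earlier steps.
        Returns the (row, col) with the highest NCC within +/- `search` pixels.
        """
        h, w = chip.shape
        best, best_rc = -2.0, approx_rc
        r0, c0 = approx_rc
        for dr in range(-search, search + 1):
            for dc in range(-search, search + 1):
                r, c = r0 + dr - h // 2, c0 + dc - w // 2
                patch = image[r:r + h, c:c + w]
                if patch.shape != chip.shape:
                    continue
                score = ncc(patch, chip)
                if score > best:
                    best, best_rc = score, (r0 + dr, c0 + dc)
        return best_rc, best

    # Toy test: the chip is cut from the image two pixels away from the initial guess.
    rng = np.random.default_rng(0)
    img = rng.random((200, 200))
    chip = img[98:118, 103:123].copy()          # true centre ~ (108, 113)
    print(refine_gcp(img, chip, (106, 111)))    # recovers the true position with NCC = 1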
NASA Technical Reports Server (NTRS)
Salomonson, Vincent V.
1999-01-01
In the near term, NASA is entering into the peak activity period of the Earth Observing System (EOS). The EOS AM-1/"Terra" spacecraft is nearing launch and operation, to be followed soon by the New Millennium Program (NMP) Earth Observing (EO-1) mission. Other missions related to land imaging and studies include the EOS PM-1 mission, the Earth System Sciences Program (ESSP) Vegetation Canopy Lidar (VCL) mission, and the EOS/IceSat mission. These missions involve clear advances in technologies and observational capability, including improvements in multispectral imaging and other observing strategies, for example, "formation flying". Plans are underway to define the next era of EOS missions, commonly called "EOS Follow-on" or EOS II. The programmatic planning includes concepts that represent advances over the present Landsat-7 mission and that concomitantly recognize the advances being made in land imaging within the private sector. The National Polar Orbiting Environmental Satellite Series (NPOESS) Preparatory Project (NPP) is an effort that will help to transition EOS medium-resolution (herein meaning spatial resolutions near 500 meters) multispectral measurement capabilities, such as those represented by the EOS Moderate Resolution Imaging Spectroradiometer (MODIS), into the NPOESS operational series of satellites. Developments in Synthetic Aperture Radar (SAR) and passive microwave land observing capabilities are also proceeding. Beyond these efforts, the Earth Science Enterprise Technology Strategy is embarking on efforts to advance technologies in several basic areas: instruments, flight systems and operational capability, and information systems. In the case of instruments, architectures will be examined that offer significant reductions in mass, volume, and power along with greater observational flexibility. For flight systems and operational capability, the focus is on formation flying (including calibration and data fusion), systems operation autonomy, and mechanical and electronic innovations that can reduce spacecraft and subsystem resource requirements. The efforts in information systems will include better approaches for linking multiple data sets, extracting and visualizing information, and improvements in collecting, compressing, transmitting, processing, distributing and archiving data from multiple platforms. Overall, concepts such as sensor webs, constellations of observing systems, and rapid and tailored data availability and delivery to multiple users make up the notion of an Earth Science Vision for the future.
First CRISM Observations of Mars
NASA Astrophysics Data System (ADS)
Murchie, S.; Arvidson, R.; Bedini, P.; Beisser, K.; Bibring, J.; Bishop, J.; Brown, A.; Boldt, J.; Cavender, P.; Choo, T.; Clancy, R. T.; Darlington, E. H.; Des Marais, D.; Espiritu, R.; Fort, D.; Green, R.; Guinness, E.; Hayes, J.; Hash, C.; Heffernan, K.; Humm, D.; Hutcheson, J.; Izenberg, N.; Lees, J.; Malaret, E.; Martin, T.; McGovern, J. A.; McGuire, P.; Morris, R.; Mustard, J.; Pelkey, S.; Robinson, M.; Roush, T.; Seelos, F.; Seelos, K.; Slavney, S.; Smith, M.; Shyong, W. J.; Strohbehn, K.; Taylor, H.; Wirzburger, M.; Wolff, M.
2006-12-01
CRISM will make its first observations of Mars from MRO in late September 2006, and regular science observations begin in early November. CRISM is a gimbaled, hyperspectral imager whose objectives are (1) to map the entire surface using a subset of bands to characterize crustal mineralogy, (2) to map the mineralogy of key areas at high spectral and spatial resolution, and (3) to measure spatial and seasonal variations in the atmosphere. These objectives are addressed using three major types of observations. In the multispectral survey, with the gimbal pointed at planet nadir, data are collected at a subset of 72 wavelengths covering key mineralogic absorptions, and binned to pixel footprints of 100 or 200 m per pixel. Nearly the entire planet will be mapped in this fashion. In targeted observations, the gimbal is scanned to remove most along-track motion, and a region of interest is mapped at full spatial and spectral resolution (15-19 m per pixel, 362-3920 nm at 6.55 nm per channel). Ten additional abbreviated, spatially-binned images are taken before and after the main image, providing an emission phase function (EPF) of the site for atmospheric study and correction of surface spectra for atmospheric effects. In atmospheric mode, only the EPF is acquired. Global grids of the resulting lower data volume observations are taken repeatedly throughout the Martian year to measure seasonal variations in atmospheric properties. Raw, calibrated, and map-projected data are delivered to the community with a spectral library to aid in interpretation. CRISM has undergone calibrations during its cruise to Mars using internal sources, including a closed-loop-controlled integrating sphere that serves as a radiometric reference. On 26 September a protective lens cover will be deployed. First data from Mars will focus on targeted observations of Phoenix and MER, targeted observations of sulfate- and phyllosilicate-containing sites identified by the OMEGA instrument on Mars Express, acquisition of initial EPF grids, and multispectral survey of the northern plains. Our presentation will discuss first results from targeted observations and multispectral mapping. Data processing and first analysis of EPFs will be discussed in companion abstracts.
Liu, Jinxia; Cao, Yue; Wang, Qiu; Pan, Wenjuan; Ma, Fei; Liu, Changhong; Chen, Wei; Yang, Jianbo; Zheng, Lei
2016-01-01
Water-injected beef has aroused public concern as a major food-safety issue in meat products. In this study, the potential of multispectral imaging analysis in the visible and near-infrared (405-970 nm) regions was evaluated for identifying water-injected beef. A multispectral vision system was used to acquire images of beef injected with up to 21% water content, and a partial least squares regression (PLSR) algorithm was employed to establish a prediction model, leading to quantitative estimation of the actual water increase with a correlation coefficient (r) of 0.923. Subsequently, an optimized model was achieved by integrating spectral data with feature information extracted from ordinary RGB data, yielding better predictions (r = 0.946). Moreover, the prediction equation was applied to each pixel within the images to visualize the distribution of the actual water increase. These results demonstrate the capability of multispectral imaging technology as a rapid and non-destructive tool for the identification of water-injected beef.
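A minimal sketch of the kind of PLSR model described above, using scikit-learn on synthetic stand-in data (the real study used measured spectra of beef samples; the band count, sample count, and the RGB-feature fusion step are not reproduced).

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in data: 19 spectral bands (405-970 nm) per sample and a
    # known water-increase percentage.
    rng = np.random.default_rng(0)
    n_samples, n_bands = 120, 19
    water = rng.uniform(0, 21, n_samples)                    # % actual water increase
    spectra = np.outer(water, rng.random(n_bands)) + rng.normal(0, 0.5, (n_samples, n_bands))

    X_tr, X_te, y_tr, y_te = train_test_split(spectra, water, random_state=0)
    pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
    pred = pls.predict(X_te).ravel()
    r = np.corrcoef(pred, y_te)[0, 1]
    print(f"correlation coefficient r ~ {r:.3f}")

    # To visualize the distribution, the fitted model can be applied pixel-wise,
    # treating each pixel's spectrum as one row of X.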
NASA Astrophysics Data System (ADS)
Meyer, Hanna; Lehnert, Lukas W.; Wang, Yun; Reudenbach, Christoph; Nauss, Thomas; Bendix, Jörg
2017-03-01
Though the relevance of pasture degradation on the Qinghai-Tibet Plateau (QTP) is widely postulated, its extent is still unknown. Due to the enormous spatial extent, remote sensing provides the only possibility to investigate pasture degradation via frequently used proxies such as vegetation cover and aboveground biomass (AGB). However, unified remote sensing approaches are still lacking. This study tests the applicability of hyper- and multispectral in situ measurements to map vegetation cover and AGB on regional scales. Using machine learning techniques, it is tested whether the full hyperspectral information is needed or if multispectral information is sufficient to accurately estimate pasture degradation proxies. To regionalize pasture degradation proxies, the transferability of the locally derived ML models to high-resolution multispectral satellite data is assessed. A total of 1183 hyperspectral measurements and vegetation records were performed at 18 locations on the QTP. Random Forest models with recursive feature selection were trained to estimate vegetation cover and AGB using narrow-band indices (NBI) as predictors. Separate models were calculated using NBI from hyperspectral data as well as from the same data resampled to WorldView-2, QuickBird and RapidEye channels. The hyperspectral results were compared to the multispectral results. Finally, the models were applied to satellite data to map vegetation cover and AGB on a regional scale. Vegetation cover was accurately predicted by Random Forest if hyperspectral measurements were used (cross-validated R2 = 0.89). In contrast, errors in AGB estimations were considerably higher (cross-validated R2 = 0.32). Only small differences in accuracy were observed between the models based on hyperspectral compared to multispectral data. The application of the models to satellite images generally resulted in an increase of the estimation error. Though this reflects the challenge of applying in situ measurements to satellite data, the results still show a high potential to map pasture degradation proxies on the QTP. Thus, this study presents a robust methodology to remotely detect and monitor pasture degradation at high spatial resolutions.
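A sketch of the modelling step described above (Random Forest regression on narrow-band indices with recursive feature selection), using synthetic stand-in data; the number of indices, samples, and the exact cross-validation setup are placeholders rather than the authors' configuration.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.feature_selection import RFECV
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(0)

    # Synthetic stand-in data: 300 plots with 40 candidate narrow-band indices (NBI)
    # computed elsewhere from the (resampled) reflectance spectra; target is % vegetation cover.
    X = rng.random((300, 40))
    cover = 100 * X[:, 3] * X[:, 17] + rng.normal(0, 5, 300)   # only a few NBIs actually matter

    rf = RandomForestRegressor(n_estimators=100, random_state=0)
    selector = RFECV(rf, step=5, cv=KFold(5, shuffle=True, random_state=0), scoring="r2")
    selector.fit(X, cover)

    print("number of NBIs retained:", selector.n_features_)
    print("retained NBI columns:", np.flatnonzero(selector.support_))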
NASA Astrophysics Data System (ADS)
Voss, M.; Blundell, B.
2015-12-01
Characterization of urban environments is a high priority for the U.S. Army as battlespaces have transitioned from the predominantly open spaces of the 20th century to urban areas where soldiers have reduced situational awareness due to the diversity and density of their surroundings. Creating high-resolution urban terrain geospatial information will improve mission planning and soldier effectiveness. In this effort, super-resolution true-color imagery was collected with an Altivan NOVA unmanned aerial system over the Muscatatuck Urban Training Center near Butlerville, Indiana on September 16, 2014. Multispectral texture analysis using different algorithms was conducted for urban surface characterization at a variety of scales. Training samples were extracted from the true-color and texture images. These data were processed using a variety of meta-algorithms with a decision tree classifier to create a high-resolution urban features map. In addition to improving accuracy over traditional image classification methods, this technique allowed the determination of the most significant textural scales in creating urban terrain maps for tactical exploitation.
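A sketch of the overall workflow (per-band texture layers stacked with the true-color bands, then a decision tree trained on analyst-digitized samples). The texture measure here is a simple moving-window standard deviation at a single scale; the study's specific texture algorithms, scales, and meta-algorithms are not reproduced.

    import numpy as np
    from scipy.ndimage import uniform_filter
    from sklearn.tree import DecisionTreeClassifier

    def local_std(band, size=9):
        """Simple texture layer: moving-window standard deviation of one band."""
        mean = uniform_filter(band, size)
        mean_sq = uniform_filter(band * band, size)
        return np.sqrt(np.maximum(mean_sq - mean * mean, 0))

    rng = np.random.default_rng(0)
    rgb = rng.random((3, 256, 256))                     # stand-in for the true-color mosaic
    texture = np.stack([local_std(b) for b in rgb])     # one texture layer per band (one scale only)
    features = np.concatenate([rgb, texture]).reshape(6, -1).T

    # Hypothetical training samples: pixel indices and class labels digitised by an analyst.
    train_idx = rng.choice(features.shape[0], 500, replace=False)
    train_lab = rng.integers(0, 4, 500)                 # e.g. roof / road / grass / tree

    clf = DecisionTreeClassifier(max_depth=8, random_state=0).fit(features[train_idx], train_lab)
    urban_map = clf.predict(features).reshape(256, 256)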
MSS D Multispectral Scanner System
NASA Technical Reports Server (NTRS)
Lauletta, A. M.; Johnson, R. L.; Brinkman, K. L. (Principal Investigator)
1982-01-01
The development and acceptance testing of the 4-band Multispectral Scanners to be flown on the LANDSAT D and LANDSAT D' Earth resources satellites are summarized. Emphasis is placed on the acceptance test phase of the program. Test history and acceptance test algorithms are discussed. Trend data of all the key performance parameters are included and discussed separately for each of the two multispectral scanner instruments. Anomalies encountered and their resolutions are included.
Earth remote sensing - 1970-1995
NASA Technical Reports Server (NTRS)
Thome, P. G.
1984-01-01
The past achievements, current status, and future prospects of the Landsat terrestrial-remote-sensing satellite program are surveyed. Topics examined include the early history of space flight; the development of analysis techniques to interpret the multispectral images obtained by Landsats 1, 2, and 3; the characteristics of the advanced Landsat-4 Thematic Mapper; microwave scanning by Seasat and the Shuttle Imaging Radar; the usefulness of low-resolution AVHRR data from the NOAA satellites; improvements in Landsats 4 and 5 to permit tailoring of information to user needs; expansion and internationalization of the remote-sensing market in the late 1980s; and technological advances in both instrumentation and data-processing predicted by the 1990s.
NASA Astrophysics Data System (ADS)
Šedina, Jaroslav; Pavelka, Karel; Raeva, Paulina
2017-04-01
For monitoring ecologically valuable areas and for precision agriculture and forestry, thematic maps or a small GIS are needed. Remotely Piloted Aircraft Systems (RPAS) data can be obtained on demand in a short time with cm resolution. Data collection is environmentally friendly and low-cost from an economic point of view. This contribution focuses on using an eBee drone for mapping and monitoring a national nature reserve which is not open to the public and is partly inaccessible because of its moorland nature. Based on new equipment (a thermal imager, a multispectral imager, and NIR, NIR red-edge, and VIS cameras), we started new projects in precision agriculture and forestry.
Image correlation and sampling study
NASA Technical Reports Server (NTRS)
Popp, D. J.; Mccormack, D. S.; Sedwick, J. L.
1972-01-01
The development of analytical approaches for solving image correlation and image sampling of multispectral data is discussed. Relevant multispectral image statistics which are applicable to image correlation and sampling are identified. The general image statistics include intensity mean, variance, amplitude histogram, power spectral density function, and autocorrelation function. The translation problem associated with digital image registration and the analytical means for comparing commonly used correlation techniques are considered. General expressions for determining the reconstruction error for specific image sampling strategies are developed.
Design and analysis of optical systems for the Stanford/MSFC Multi-Spectral Solar Telescope Array
NASA Astrophysics Data System (ADS)
Hadaway, James B.; Johnson, R. Barry; Hoover, Richard B.; Lindblom, Joakim F.; Walker, Arthur B. C., Jr.
1989-07-01
This paper reports on the design and the theoretical ray trace analysis of the optical systems which will comprise the primary imaging components for the Stanford/MSFC Multi-Spectral Solar Telescope Array (MSSTA). This instrument is being developed for ultra-high resolution investigations of the sun from a sounding rocket. Doubly reflecting systems of sphere-sphere, ellipsoid-sphere (Dall-Kirkham), paraboloid-hyperboloid (Cassegrain), and hyperboloid-hyperboloid (Ritchey-Chretien) configurations were analyzed. For these mirror systems, ray trace analysis was performed and through-focus spot diagrams, point spread function plots, and geometrical and diffraction MTFs were generated. The results of these studies are presented along with the parameters of the Ritchey-Chretien optical system selected for the MSSTA flight. The payload, which incorporates seven of these Ritchey-Chretien systems, is now being prepared for launch in late September 1989.
NASA Technical Reports Server (NTRS)
Barrett, E. C.; Grant, C. K. (Principal Investigator)
1977-01-01
The author has identified the following significant results. It was demonstrated that satellites with sufficiently high resolution capability in the visible region of the electromagnetic spectrum could be used to check the accuracy of estimates of total cloud amount assessed subjectively from the ground, and to reveal areas of performance in which corrections should be made. It was also demonstrated that, in middle latitude in summer, cloud shadow may obscure at least half as much again of the land surface covered by an individual LANDSAT frame as the cloud itself. That proportion would increase with latitude and/or time of year towards the winter solstice. Analyses of sample multispectral images for six different categories of clouds in summer revealed marked differences between the reflectance characteristics of cloud fields in the visible/near infrared region of the spectrum.
COMPARISON OF RETINAL PATHOLOGY VISUALIZATION IN MULTISPECTRAL SCANNING LASER IMAGING.
Meshi, Amit; Lin, Tiezhu; Dans, Kunny; Chen, Kevin C; Amador, Manuel; Hasenstab, Kyle; Muftuoglu, Ilkay Kilic; Nudleman, Eric; Chao, Daniel; Bartsch, Dirk-Uwe; Freeman, William R
2018-03-16
To compare retinal pathology visualization in multispectral scanning laser ophthalmoscope imaging between the Spectralis and Optos devices. This retrospective cross-sectional study included 42 eyes from 30 patients with age-related macular degeneration (19 eyes), diabetic retinopathy (10 eyes), and epiretinal membrane (13 eyes). All patients underwent retinal imaging with a color fundus camera (broad-spectrum white light), the Spectralis HRA-2 system (3-color monochromatic lasers), and the Optos P200 system (2-color monochromatic lasers). The Optos image was cropped to a similar size as the Spectralis image. Seven masked graders marked retinal pathologies in each image within a 5 × 5 grid that included the macula. The average area with detected retinal pathology in all eyes was larger in the Spectralis images compared with the Optos images (32.4% larger, P < 0.0001), mainly because of better visualization of epiretinal membrane and retinal hemorrhage. The average detection rate of age-related macular degeneration and diabetic retinopathy pathologies was similar across the three modalities, whereas the epiretinal membrane detection rate was significantly higher in the Spectralis images. Spectralis tricolor multispectral scanning laser ophthalmoscope imaging had a higher rate of pathology detection, primarily because of better epiretinal membrane and retinal hemorrhage visualization, compared with Optos bicolor multispectral scanning laser ophthalmoscope imaging.
NASA Astrophysics Data System (ADS)
Chen, Chung-Hao; Yao, Yi; Chang, Hong; Koschan, Andreas; Abidi, Mongi
2013-06-01
Due to increasing security concerns, a complete security system should consist of two major components: a computer-based face-recognition system and a real-time automated video surveillance system. A computer-based face-recognition system can be used in gate access control for identity authentication. In recent studies, multispectral imaging and fusion of multispectral narrow-band images in the visible spectrum have been employed and proven to enhance the recognition performance over conventional broad-band images, especially when the illumination changes. Thus, we present an automated method that specifies the optimal spectral ranges under the given illumination. Experimental results verify the consistent performance of our algorithm via the observation that an identical set of spectral band images is selected under all tested conditions. Our discovery can be practically used for a new customized sensor design associated with given illuminations for an improved face recognition performance over conventional broad-band images. In addition, once a person is authorized to enter a restricted area, we still need to continuously monitor his/her activities for the sake of security. Because pan-tilt-zoom (PTZ) cameras are capable of covering a panoramic area and maintaining high-resolution imagery for real-time behavior understanding, research in automated surveillance systems with multiple PTZ cameras has become increasingly important. Most existing algorithms require the prior knowledge of intrinsic parameters of the PTZ camera to infer the relative positioning and orientation among multiple PTZ cameras. To overcome this limitation, we propose a novel mapping algorithm that derives the relative positioning and orientation between two PTZ cameras based on a unified polynomial model. This reduces the dependence on the knowledge of the intrinsic parameters of the PTZ cameras and their relative positions. Experimental results demonstrate that our proposed algorithm presents substantially reduced computational complexity and improved flexibility at the cost of slightly decreased pixel accuracy as compared to Chen and Wang's method [18].
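The unified polynomial model itself is not specified in the abstract; the sketch below shows one plausible reading of the idea: fit low-order polynomial mappings from one camera's (pan, tilt) readings to the other's by least squares over corresponding observations of shared targets. All coefficients and observations are synthetic.

    import numpy as np

    def poly_features(pan, tilt, degree=2):
        """Second-order polynomial terms of (pan, tilt), including the cross term."""
        cols = [np.ones_like(pan)]
        for d in range(1, degree + 1):
            for i in range(d + 1):
                cols.append(pan ** (d - i) * tilt ** i)
        return np.column_stack(cols)

    # Hypothetical corresponding observations: both cameras aimed at the same targets.
    rng = np.random.default_rng(0)
    pan_a, tilt_a = rng.uniform(-60, 60, 40), rng.uniform(-20, 20, 40)
    pan_b = 0.9 * pan_a + 0.05 * tilt_a + 3.0 + rng.normal(0, 0.1, 40)
    tilt_b = 1.1 * tilt_a - 0.02 * pan_a - 1.0 + rng.normal(0, 0.1, 40)

    A = poly_features(pan_a, tilt_a)
    coef_pan, *_ = np.linalg.lstsq(A, pan_b, rcond=None)
    coef_tilt, *_ = np.linalg.lstsq(A, tilt_b, rcond=None)

    # Predict where camera B must point to see the target camera A sees at (10, 5).
    query = poly_features(np.array([10.0]), np.array([5.0]))
    print(query @ coef_pan, query @ coef_tilt)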
Tian, Y.Q.; Yu, Q.; Zimmerman, M.J.; Flint, S.; Waldron, M.C.
2010-01-01
This study evaluates the efficacy of remote sensing technology to monitor species composition, areal extent and density of aquatic plants (macrophytes and filamentous algae) in impoundments where their presence may violate water-quality standards. Multispectral satellite (IKONOS) images and more than 500 in situ hyperspectral samples were acquired to map aquatic plant distributions. By analyzing field measurements, we created a library of hyperspectral signatures for a variety of aquatic plant species, associations and densities. We also used three vegetation indices, the Normalized Difference Vegetation Index (NDVI), the near-infrared (NIR)-Green Angle Index (NGAI), and the normalized water absorption depth (DH), at wavelengths of 554, 680, 820 and 977 nm, to differentiate among aquatic plant species composition, areal density and thickness in cases where hyperspectral analysis yielded potentially ambiguous interpretations. We compared the NDVI derived from IKONOS imagery with the in situ, hyperspectral-derived NDVI. The IKONOS-based images were also compared to data obtained through routine visual observations. Our results confirmed that aquatic species composition alters spectral signatures and affects the accuracy of remote sensing of aquatic plant density. The results also demonstrated that the NGAI has apparent advantages in estimating density over the NDVI and the DH. In the feature space of the three indices, 3D scatter plot analysis revealed that hyperspectral data can differentiate several aquatic plant associations. High-resolution multispectral imagery provided useful information to distinguish among biophysical aquatic plant characteristics. Classification analysis indicated that using satellite imagery to assess Lemna coverage yielded an overall agreement of 79% with visual observations and >90% agreement for the densest aquatic plant coverages. Interpretation of biophysical parameters derived from high-resolution satellite or airborne imagery should prove to be a valuable approach for assessing the effectiveness of management practices for controlling aquatic plant growth in inland waters, as well as for routine monitoring of aquatic plants in lakes and suitable lentic environments.
Development and bench testing of a multi-spectral imaging technology built on a smartphone platform
NASA Astrophysics Data System (ADS)
Bolton, Frank J.; Weiser, Reuven; Kass, Alex J.; Rose, Donny; Safir, Amit; Levitz, David
2016-03-01
Cervical cancer screening presents a great challenge for clinicians across the developing world. In many countries, cervical cancer screening is done by visualization with the naked eye. Simple brightfield white light imaging with photo documentation has been shown to make a significant impact on cervical cancer care. Adoption of smartphone-based cervical imaging devices is increasing across Africa. However, advanced imaging technologies, such as multispectral imaging systems, are seldom deployed in low-resource settings, where they are needed most. To address this challenge, the optical system of a smartphone-based mobile colposcopy imaging system was refined, integrating components required for low-cost, portable multi-spectral imaging of the cervix. This paper describes the refinement of the mobile colposcope to enable it to acquire images of the cervix at multiple illumination wavelengths, including modeling and laboratory testing. Wavelengths were selected to enable quantifying the main absorbers in tissue (oxy- and deoxy-hemoglobin, and water), as well as scattering parameters that describe the size distribution of scatterers. The necessary hardware and software modifications are reviewed. Initial testing suggests the multi-spectral mobile device holds promise for use in low-resource settings.
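One way such multi-wavelength data could be used, sketched under strong assumptions: a linearized (modified) Beer-Lambert fit that solves for the relative contributions of oxy-hemoglobin, deoxy-hemoglobin, and water from attenuation measured at a few wavelengths. The extinction values below are placeholders, not published coefficients, and the device's actual analysis is not described in the abstract.

    import numpy as np

    # Placeholder extinction values at four hypothetical wavelengths (nm); real values
    # would come from published HbO2 / Hb / water absorption tables.
    wavelengths = [660, 810, 905, 970]
    eps = np.array([
        # HbO2   Hb     water
        [0.08,  0.80,  0.001],
        [0.20,  0.18,  0.005],
        [0.28,  0.15,  0.010],
        [0.25,  0.14,  0.090],
    ])

    # Measured attenuation A(lambda) = -log10(R / R_reference) per wavelength.
    attenuation = np.array([0.35, 0.22, 0.24, 0.30])

    # Linearized Beer-Lambert: A ~ eps @ concentrations (a common pathlength is
    # folded into the unknowns); solve by least squares.
    conc, *_ = np.linalg.lstsq(eps, attenuation, rcond=None)
    print(dict(zip(["HbO2", "Hb", "water"], conc.round(3))))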
NASA Technical Reports Server (NTRS)
Lauer, D. T. (Principal Investigator)
1984-01-01
The optimum index factor package was used to choose TM bands for color compositing. Processing techniques were also used on TM data over several sites to: (1) reduce the amount of data that needs to be processed and analyzed by using statistical methods or by combining full-resolution products with spatially compressed products; (2) digitally process small subareas to improve the visual appearance of large-scale products or to merge different-resolution image data; and (3) evaluate and compare the information content of the different three-band combinations that can be made using the TM data. Results indicate that for some applications the added spectral information over MSS is even more important than the TM's increased spatial resolution.
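The optimum index factor ranks three-band combinations by the ratio of their total standard deviation to the sum of the absolute inter-band correlations, so that combinations with high variance and low redundancy score highest. A sketch of that computation is given below, with random data standing in for an actual TM scene.

    import numpy as np
    from itertools import combinations

    def optimum_index_factor(bands):
        """Rank 3-band combinations by OIF = sum of std devs / sum of |correlations|."""
        flat = bands.reshape(bands.shape[0], -1)
        stds = flat.std(axis=1)
        corr = np.corrcoef(flat)
        scores = {}
        for i, j, k in combinations(range(bands.shape[0]), 3):
            denom = abs(corr[i, j]) + abs(corr[i, k]) + abs(corr[j, k])
            scores[(i + 1, j + 1, k + 1)] = (stds[i] + stds[j] + stds[k]) / denom
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # Random data standing in for six reflective TM bands of one scene.
    tm = np.random.default_rng(0).random((6, 128, 128))
    for combo, oif in optimum_index_factor(tm)[:3]:
        print(f"TM bands {combo}: OIF = {oif:.2f}")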
USDA-ARS?s Scientific Manuscript database
Structured-illumination reflectance imaging (SIRI) is a new, promising imaging modality for enhancing quality detection of food. A liquid-crystal tunable filter (LCTF)-based multispectral SIRI system was developed and used for selecting optimal wavebands to detect bruising in apples. Immediately aft...
NASA Astrophysics Data System (ADS)
Colaninno, Nicola; Marambio Castillo, Alejandro; Roca Cladera, Josep
2017-10-01
The demand for remotely sensed data is growing steadily, due to the possibility of managing information about huge geographic areas, in digital format, at different time periods, and suitable for analysis in GIS platforms. However, primary satellite information is not as immediately usable as desirable. Besides geometric and atmospheric limitations, clouds, cloud shadows, and haze generally contaminate optical images. In terms of land cover, such contamination is treated as missing information and should be replaced. Generally, image reconstruction methods are classified according to three main approaches, i.e. in-painting-based, multispectral-based, and multitemporal-based methods. This work relies on a multitemporal-based approach to retrieve uncontaminated pixels for an image scene. We explore an automatic method for quickly obtaining a daytime cloudless and shadow-free image at moderate spatial resolution for large geographical areas. The process involves two main steps: a multitemporal effect adjustment to avoid significant seasonal variations, and a data reconstruction phase based on automatic selection of uncontaminated pixels from an image stack. The result is a composite image based on the middle values of the stack over a year. The assumption is that, for specific purposes, land cover changes at a coarse scale are not significant over relatively short time periods. Because it is widely recognized that satellite imagery over tropical areas is generally strongly affected by clouds, the methodology is tested for the case study of the Dominican Republic for the year 2015, using Landsat 8 imagery.
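A minimal sketch of the core compositing step (the per-pixel "middle value" of a one-year stack), assuming the scenes are already co-registered and radiometrically adjusted and that contaminated pixels have been masked to NaN; the paper's multitemporal adjustment and automatic pixel selection are not reproduced.

    import numpy as np

    # Stack shape: (n_dates, n_bands, rows, cols) of co-registered, radiometrically
    # adjusted scenes over one year; np.nan marks pixels masked as cloud/shadow.
    rng = np.random.default_rng(0)
    stack = rng.random((12, 4, 100, 100))
    cloudy = rng.random(stack.shape) < 0.3          # simulate contaminated pixels
    stack[cloudy] = np.nan

    composite = np.nanmedian(stack, axis=0)         # per-pixel, per-band middle value
    gaps = np.isnan(composite).any(axis=0)          # pixels never observed cloud-free
    print("remaining gap fraction:", gaps.mean())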
The NASA 2003 Mars Exploration Rover Panoramic Camera (Pancam) Investigation
NASA Astrophysics Data System (ADS)
Bell, J. F.; Squyres, S. W.; Herkenhoff, K. E.; Maki, J.; Schwochert, M.; Morris, R. V.; Athena Team
2002-12-01
The Panoramic Camera System (Pancam) is part of the Athena science payload to be launched to Mars in 2003 on NASA's twin Mars Exploration Rover missions. The Pancam imaging system on each rover consists of two major components: a pair of digital CCD cameras, and the Pancam Mast Assembly (PMA), which provides the azimuth and elevation actuation for the cameras as well as a 1.5 meter high vantage point from which to image. Pancam is a multispectral, stereoscopic, panoramic imaging system, with a field of regard provided by the PMA that extends across 360° of azimuth and from zenith to nadir, providing a complete view of the scene around the rover. Pancam utilizes two 1024x2048 Mitel frame transfer CCD detector arrays, each having a 1024x1024 active imaging area and 32 optional additional reference pixels per row for offset monitoring. Each array is combined with optics and a small filter wheel to become one "eye" of a multispectral, stereoscopic imaging system. The optics for both cameras consist of identical 3-element symmetrical lenses with an effective focal length of 42 mm and a focal ratio of f/20, yielding an IFOV of 0.28 mrad/pixel or a rectangular FOV of 16° × 16° per eye. The two eyes are separated by 30 cm horizontally and have a 1° toe-in to provide adequate parallax for stereo imaging. The cameras are boresighted with adjacent wide-field stereo Navigation Cameras, as well as with the Mini-TES instrument. The Pancam optical design is optimized for best focus at 3 meters range, and allows Pancam to maintain acceptable focus from infinity to within 1.5 meters of the rover, with a graceful degradation (defocus) at closer ranges. Each eye also contains a small 8-position filter wheel to allow multispectral sky imaging, direct Sun imaging, and surface mineralogic studies in the 400-1100 nm wavelength region. Pancam has been designed and calibrated to operate within specifications from -55°C to +5°C. An onboard calibration target and fiducial marks provide the ability to validate the radiometric and geometric calibration on Mars. Pancam relies heavily on use of the JPL ICER wavelet compression algorithm to maximize data return within stringent mission downlink limits. The scientific goals of the Pancam investigation are to: (a) obtain monoscopic and stereoscopic image mosaics to assess the morphology, topography, and geologic context of each MER landing site; (b) obtain multispectral visible to short-wave near-IR images of selected regions to determine surface color and mineralogic properties; (c) obtain multispectral images over a range of viewing geometries to constrain surface photometric and physical properties; and (d) obtain images of the Martian sky, including direct images of the Sun, to determine dust and aerosol opacity and physical properties. In addition, Pancam also serves a variety of operational functions on the MER mission, including (e) serving as the primary Sun-finding camera for rover navigation; (f) resolving objects on the scale of the rover wheels to distances of ~100 m to help guide navigation decisions; (g) providing stereo coverage adequate for the generation of digital terrain models to help guide and refine rover traverse decisions; (h) providing high resolution images and other context information to guide the selection of the most interesting in situ sampling targets; and (i) supporting acquisition and release of exciting E/PO products.
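As a quick consistency check of the quoted optics figures, the IFOV and per-eye FOV follow directly from the focal length and detector geometry; the 12 µm pixel pitch assumed below is not stated in the abstract.

```python
# IFOV = pixel_pitch / focal_length, FOV = IFOV * active_pixels.
# The 12-micron pixel pitch is an assumption; focal length and pixel count
# are taken from the abstract.
import math

focal_length_mm = 42.0
pixel_pitch_um = 12.0          # assumed CCD pixel pitch
active_pixels = 1024

ifov_mrad = pixel_pitch_um * 1e-3 / focal_length_mm * 1e3   # mrad per pixel
fov_deg = math.degrees(ifov_mrad * 1e-3 * active_pixels)

print(f"IFOV ~ {ifov_mrad:.2f} mrad/pixel")   # ~0.29, close to the quoted 0.28
print(f"FOV  ~ {fov_deg:.1f} deg per eye")    # ~16.8, consistent with ~16 x 16 deg
```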
Recent Results from the Mars Exploration Rover Opportunity Pancam Instruments
NASA Astrophysics Data System (ADS)
Bell, James F., III; Arvidson, Raymond; Farrand, William; Johnson, Jeffrey; Rice, James; Rice, Melissa; Ruff, Steven; Squyres, Steven; Wang, Alian
2013-04-01
The Mars Exploration Rover (MER) Panoramic Camera (Pancam) instruments [1] are multispectral, stereoscopic CCD cameras that have acquired high resolution color images from the Spirit rover field site in Gusev crater and the Opportunity rover field site in Meridiani Planum. Spirit's mission ended in March 2010 after 2209 sols of operation and acquisition of more than 81,000 Pancam images. Opportunity's mission is ongoing, now spanning more than 3180 sols of operation as of early January 2013. As of this writing, the Opportunity Pancam instruments have acquired more than 106,000 images. Approximately 21% of those images have been acquired as part of 11-color multispectral "image cubes" used to characterize the color properties of the surface and atmosphere at wavelengths between 432 and 1009 nm. Most of the remainder of the imaging part of the rovers' downlink (which is the vast majority of the overall downlink) has been used for monochrome or limited-filter tactical imaging of targets of interest, stereo Navcam or Hazcam imaging in support of rover driving and/or rover arm instrument chemical, mineralogical, or Microscopic Imager measurements, photometric experiments, atmospheric dynamics and aerosol observations, and even occasional astronomical observations like solar transits of Phobos and Deimos. Less than 2% of the downlinked bits have been used for calibration observations (bias, dark current, flatfield, calibration target) over the course of the mission. During the past Mars year, Opportunity arrived at Cape York, a northwestern segment of the rim of the 22 km diameter Endeavour crater, and has been used to characterize the geology, geochemistry, and mineralogy of this ancient Noachian terrain. Pancam multispectral images have provided important data with which to help identify basaltic impact breccias within the crater rim materials, as well as gypsum-rich veins within the Meridiani plains sedimentary rocks adjacent to the rim. The continuing study of light-toned veins and fracture fills in this region includes an assessment of the hydration state of these materials using the longest-wavelength Pancam filters, which sample a weak H2O and/or OH absorption in some hydrated minerals (such as hydrated sulfates) [2]. Multispectral imaging observations are also helping to constrain the distribution and origin of discontinuous dark coatings on many light-toned outcrop rocks at Matijevic Hill, near the southern end of Cape York. These outcrop rocks have been hypothesized [3] to be the unit containing the Fe/Mg smectite phyllosilicate deposits identified in Cape York from MRO/CRISM orbital observations. In this presentation I will discuss the major observations and scientific results in Meridiani that have been derived from or enabled by Pancam imaging observations, as well as provide an update on the most recent rover imaging and other results from Cape York in particular. Lessons learned in terms of the design, performance, remote operation, and analysis of multispectral CCD imaging observations from the Martian surface will also be discussed. [1] J.F. Bell III et al. (2003) JGR, v108, E12; J.F. Bell III et al. (2006) JGR, v111, E02S03. [2] M.S. Rice et al. (2013) this meeting; M.S. Rice et al. (2010) Icarus, v205, 375. [3] S.W. Squyres et al. (2013) LPSC 44th; R.E. Arvidson et al. (2013) LPSC 44th.
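The hydration assessment mentioned above is typically expressed through a continuum-removed band-depth parameter. The exact formulation used by the Pancam team is not given in the abstract, so the sketch below shows only the generic form; the shoulder and center wavelengths are illustrative choices within the 432-1009 nm filter range.

```python
# Generic band-depth parameter for flagging a weak absorption:
# 1 - R_center / R_continuum, with the continuum linearly interpolated
# between two shoulder reflectances. Wavelengths are illustrative only.
def band_depth(r_left, r_center, r_right, w_left, w_center, w_right):
    t = (w_center - w_left) / (w_right - w_left)
    r_continuum = (1.0 - t) * r_left + t * r_right
    return 1.0 - r_center / r_continuum

# Example: a pixel spectrum with a shallow dip at the center wavelength.
print(band_depth(0.30, 0.27, 0.31, 904.0, 934.0, 1009.0))  # ~0.11
```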
Lattice algebra approach to multispectral analysis of ancient documents.
Valdiviezo-N, Juan C; Urcid, Gonzalo
2013-02-01
This paper introduces a lattice algebra procedure that can be used for the multispectral analysis of historical documents and artworks. Assuming the presence of linearly mixed spectral pixels captured in a multispectral scene, the proposed method computes the scaled min- and max-lattice associative memories to determine the purest pixels that best represent the spectra of single pigments. The estimated fractional proportions of pure spectra at each image pixel are then used to build pigment abundance maps, which can support the subsequent restoration of damaged parts. Application examples include multispectral images acquired from the Archimedes Palimpsest and a Mexican pre-Hispanic codex.
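For illustration, the abundance-map step can be sketched as a per-pixel constrained linear unmixing once endmember (pure-pigment) spectra are in hand. The non-negative least-squares solver below is a stand-in for that step only; it is not the lattice associative memory construction used in the paper, which instead provides the endmembers themselves.

```python
# Per-pixel abundance estimation by non-negative least squares, assuming the
# endmember spectra have already been identified (by any method).
import numpy as np
from scipy.optimize import nnls

def abundance_maps(cube, endmembers):
    """
    cube:       (rows, cols, bands) multispectral image
    endmembers: (n_endmembers, bands) pure spectra
    Returns (rows, cols, n_endmembers) fractional abundances, non-negative and
    normalized to sum to 1 per pixel where the fit is non-trivial.
    """
    rows, cols, _ = cube.shape
    maps = np.zeros((rows, cols, endmembers.shape[0]))
    for i in range(rows):
        for j in range(cols):
            a, _ = nnls(endmembers.T, cube[i, j])
            s = a.sum()
            maps[i, j] = a / s if s > 0 else a
    return maps

# Toy example: 3 pigments, 8 bands, 20x20 image of random mixtures.
rng = np.random.default_rng(2)
E = rng.random((3, 8))                       # synthetic endmember spectra
A = rng.dirichlet(np.ones(3), size=(20, 20)) # true per-pixel abundances
img = A @ E                                  # linearly mixed image
print(abundance_maps(img, E).shape)          # (20, 20, 3)
```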