Sample records for spectral imaging sensor

  1. Integrated Spectral Low Noise Image Sensor with Nanowire Polarization Filters for Low Contrast Imaging

    DTIC Science & Technology

    2015-11-05

    AFRL-AFOSR-VA-TR-2015-0359. PI: Viktor Gruev. Period of performance: 02/15/2011 - 08/15/2015. Title: Integrated Spectral Low Noise Image Sensor with Nanowire Polarization Filters for Low Contrast Imaging. Abstract excerpt: "...investigate alternative spectral imaging architectures based on my previous experience in this research area. I will develop nanowire polarization..."

  2. Resolution Enhancement of Hyperion Hyperspectral Data using Ikonos Multispectral Data

    DTIC Science & Technology

    2007-09-01

    spatial-resolution hyperspectral image to produce a sharpened product. The result is a product that has the spectral properties of the ...multispectral sensors. In this work, we examine the benefits of combining data from high-spatial-resolution, low-spectral-resolution spectral imaging...sensors with data obtained from high-spectral-resolution, low-spatial-resolution spectral imaging sensors.

  3. Nanophotonic Image Sensors

    PubMed Central

    Hu, Xin; Wen, Long; Yu, Yan; Cumming, David R. S.

    2016-01-01

    The increasing miniaturization and resolution of image sensors bring challenges to conventional optical elements such as spectral filters and polarizers, the properties of which are determined mainly by the materials used, including dye polymers. Recent developments in spectral filtering and optical manipulating techniques based on nanophotonics have opened up the possibility of an alternative method to control light spectrally and spatially. By integrating these technologies into image sensors, it will become possible to achieve high compactness, improved process compatibility, robust stability and tunable functionality. In this Review, recent representative achievements on nanophotonic image sensors are presented and analyzed, including image sensors with nanophotonic color filters and polarizers, metamaterial-based THz image sensors, filter-free nanowire image sensors and nanostructure-based multispectral image sensors. This novel combination of cutting-edge photonics research and well-developed commercial products may not only lead to an important application of nanophotonics but also offer great potential for next-generation image sensors beyond Moore's Law expectations. PMID:27239941

  4. Nanophotonic Image Sensors.

    PubMed

    Chen, Qin; Hu, Xin; Wen, Long; Yu, Yan; Cumming, David R S

    2016-09-01

    The increasing miniaturization and resolution of image sensors bring challenges to conventional optical elements such as spectral filters and polarizers, the properties of which are determined mainly by the materials used, including dye polymers. Recent developments in spectral filtering and optical manipulating techniques based on nanophotonics have opened up the possibility of an alternative method to control light spectrally and spatially. By integrating these technologies into image sensors, it will become possible to achieve high compactness, improved process compatibility, robust stability and tunable functionality. In this Review, recent representative achievements on nanophotonic image sensors are presented and analyzed, including image sensors with nanophotonic color filters and polarizers, metamaterial-based THz image sensors, filter-free nanowire image sensors and nanostructure-based multispectral image sensors. This novel combination of cutting-edge photonics research and well-developed commercial products may not only lead to an important application of nanophotonics but also offer great potential for next-generation image sensors beyond Moore's Law expectations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Absolute Radiometric Calibration of Narrow-Swath Imaging Sensors with Reference to Non-Coincident Wide-Swath Sensors

    NASA Technical Reports Server (NTRS)

    McCorkel, Joel; Thome, Kurtis; Lockwood, Ronald

    2012-01-01

    An inter-calibration method is developed to provide absolute radiometric calibration of narrow-swath imaging sensors with reference to non-coincident wide-swath sensors. The method predicts at-sensor radiance using non-coincident imagery from the reference sensor and knowledge of the spectral reflectance of the test site. The imagery of the reference sensor is restricted to acquisitions that provide similar view and solar illumination geometry to reduce uncertainties due to directional reflectance effects. Spectral reflectance of the test site is found with a simple iterative radiative transfer method using radiance values of a well-understood wide-swath sensor and spectral shape information based on historical ground-based measurements. At-sensor radiance is calculated for the narrow-swath sensor using this spectral reflectance and atmospheric parameters that are also based on historical in situ measurements. Results of the inter-calibration method show agreement at the 2-5 percent level in most spectral regions with the vicarious calibration technique relying on coincident ground-based measurements, referred to as the reflectance-based approach. While the variability of the inter-calibration method based on non-coincident image pairs is significantly larger, results are consistent with techniques relying on in situ measurements. The method is also insensitive to spectral differences between the sensors because it transfers to surface spectral reflectance prior to prediction of at-sensor radiance. The utility of this inter-calibration method is made clear by its flexibility to utilize image pairings with acquisition dates differing by more than 30 days, allowing frequent absolute calibration comparisons between wide- and narrow-swath sensors.
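The transfer described above can be sketched as a two-step weighting: pin a historical reflectance shape to the reference sensor's band measurement, then view the scaled spectrum through the test sensor's response. A minimal sketch in Python; function names and the regular wavelength grid are assumptions, and the atmospheric terms are folded into the spectrum for brevity:

```python
import numpy as np

def band_effective(spectrum, rsr):
    """RSR-weighted band average on a regular wavelength grid."""
    return np.sum(spectrum * rsr) / np.sum(rsr)

def predict_at_sensor_radiance(ref_band_radiance, ref_rsr, test_rsr,
                               historical_shape):
    # Pin the historical spectral *shape* to the reference measurement...
    scale = ref_band_radiance / band_effective(historical_shape, ref_rsr)
    scaled = scale * historical_shape
    # ...then view the scaled spectrum through the test sensor's RSR.
    return band_effective(scaled, test_rsr)
```

With a spectrally flat shape the prediction simply reproduces the reference measurement; any spectral structure in the shape makes the two band averages differ.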

  6. Fusion of spectral and panchromatic images using false color mapping and wavelet integrated approach

    NASA Astrophysics Data System (ADS)

    Zhao, Yongqiang; Pan, Quan; Zhang, Hongcai

    2006-01-01

    With the development of sensor technology, new image sensors have been introduced that provide a greater range of information to users. But owing to the power limitation of the received radiation, there will always be some trade-off between spatial and spectral resolution in the image captured by a specific sensor. Images with high spatial resolution can locate objects with high accuracy, whereas images with high spectral resolution can be used to identify materials. Many applications in remote sensing require fusing low-resolution imaging spectral images with panchromatic images to identify materials at high resolution in clutter. A pixel-based false color mapping and wavelet transform integrated fusion algorithm is presented in this paper; the resulting images have a higher information content than each of the original images and retain sensor-specific image information. The simulation results show that this algorithm can enhance the visibility of certain details and preserve the differences between materials.
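A wavelet-integrated fusion of this kind can be illustrated with a one-level Haar transform: keep the spectral band's approximation coefficients and substitute the panchromatic image's detail coefficients. This is a simplified stand-in for the paper's algorithm (which also involves false color mapping); the single-level Haar choice and all names are assumptions:

```python
import numpy as np

def haar2(a):
    """One-level 2-D Haar transform: returns (LL, (LH, HL, HH))."""
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0   # row-wise average
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0   # row-wise difference
    ll = (lo[0::2] + lo[1::2]) / 2.0
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, (lh, hl, hh)

def ihaar2(ll, details):
    """Exact inverse of haar2."""
    lh, hl, hh = details
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi = np.empty_like(lo)
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    out = np.empty((lo.shape[0], lo.shape[1] * 2))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def fuse(spectral_band, pan):
    """Keep the spectral band's approximation, take spatial detail from pan."""
    ll_s, _ = haar2(spectral_band)
    _, det_p = haar2(pan)
    return ihaar2(ll_s, det_p)
```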

  7. Radiometric Comparison between Sentinel 2A (S2A) Multispectral Imager (MSI) and Landsat 8 (L8) Operational Land Imager (OLI)

    NASA Astrophysics Data System (ADS)

    Micijevic, E.; Haque, M. O.

    2016-12-01

    With its forty-four-year continuous data record, the Landsat image archive provides an invaluable source of information for essential climate variables, global land change studies and a variety of other applications. The latest in the series, Landsat 8, carries the Operational Land Imager (OLI), a sensor with an improved design compared to its predecessors but with similar radiometric, spatial and spectral characteristics, to provide image data continuity. Sentinel 2A (S2A), launched in June 2015, carries the Multispectral Imager (MSI), which has a number of bands with spectral and radiometric characteristics similar to those of L8 OLI. As such, it offers an opportunity to augment the Landsat data record through an increased frequency of acquisitions when combined with OLI. In this study, we compared Top-of-Atmosphere (TOA) reflectance of matching spectral bands in MSI and OLI products. The comparison between the S2A MSI and L8 OLI sensors was performed using image data acquired near-simultaneously, primarily over the Pseudo Invariant Calibration Site (PICS) Libya 4, but also over other calibration test sites. Spectral differences between the two sensors were accounted for using their spectral filter profiles and a spectral signature of the site derived from EO-1 Hyperion hyperspectral imagery. Temporal stability was also assessed through temporal trending of TOA reflectance measured by the two sensors over PICS. The performed analysis suggests good agreement between the two sensors: within 5% for the coastal aerosol band and better than 3% for the other matching bands. It is important to note that whenever data from different sensors are used together in a study, special attention needs to be paid to the spectral band differences between the sensors, because the necessary spectral difference adjustment is target dependent and may vary considerably from target to target.

  8. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition.

    PubMed

    Park, Chulhee; Kang, Moon Gi

    2016-05-18

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible-band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome this color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible-band component and the NIR-band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors.
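The decomposition step can be sketched as subtracting a per-channel share of the N channel from the raw RGB data. The leakage weights below are illustrative placeholders, not the spectral characteristics estimated in the paper:

```python
import numpy as np

def restore_color(raw_rgb, nir, weights=(0.3, 0.3, 0.3)):
    """Subtract the estimated NIR contribution from each RGB channel.

    raw_rgb: (H, W, 3) image measured without an IR cut-off filter;
    nir:     (H, W) N-channel image;
    weights: assumed per-channel NIR leakage coefficients (illustrative;
             in practice derived from the sensor's measured responses).
    """
    out = raw_rgb.astype(float).copy()
    for c, w in enumerate(weights):
        # Remove this channel's share of the NIR signal, clamp at zero.
        out[..., c] = np.clip(out[..., c] - w * nir, 0.0, None)
    return out
```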

  9. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition

    PubMed Central

    Park, Chulhee; Kang, Moon Gi

    2016-01-01

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible-band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome this color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible-band component and the NIR-band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors. PMID:27213381

  10. A review of potential image fusion methods for remote sensing-based irrigation management: Part II

    USDA-ARS?s Scientific Manuscript database

    Satellite-based sensors provide data at either greater spectral and coarser spatial resolutions, or lower spectral and finer spatial resolutions due to complementary spectral and spatial characteristics of optical sensor systems. In order to overcome this limitation, image fusion has been suggested ...

  11. A Digital Sensor Simulator of the Pushbroom Offner Hyperspectral Imaging Spectrometer

    PubMed Central

    Tao, Dongxing; Jia, Guorui; Yuan, Yan; Zhao, Huijie

    2014-01-01

    Sensor simulators can be used in forecasting the imaging quality of a new hyperspectral imaging spectrometer and in generating simulated data for the development and validation of data processing algorithms. This paper presents a novel digital sensor simulator for the pushbroom Offner hyperspectral imaging spectrometer, which is widely used in hyperspectral remote sensing. Based on the imaging process, the sensor simulator consists of a spatial response module, a spectral response module, and a radiometric response module. In order to enhance the simulation accuracy, spatial interpolation-resampling, which is implemented before the spatial degradation, is developed to balance the direction error against the extra aliasing effect. Instead of using the spectral response function (SRF), the dispersive imaging characteristics of the Offner convex grating optical system are accurately modeled by its configuration parameters. The non-uniformity characteristics, such as keystone and smile effects, are simulated in the corresponding modules. In this work, the spatial, spectral and radiometric calibration processes are simulated to provide the modulation transfer function (MTF), SRF and radiometric calibration parameters of the sensor simulator. Some uncertainty factors (the stability and bandwidth of the monochromator for the spectral calibration, and the integrating sphere uncertainty for the radiometric calibration) are considered in the simulation of the calibration process. With the calibration parameters, several experiments were designed to validate the spatial, spectral and radiometric response of the sensor simulator, respectively. The experimental results indicate that the sensor simulator is valid. PMID:25615727
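The spectral response module of such a simulator ultimately reduces each band to an SRF-weighted average of the at-sensor spectrum. A minimal sketch with an assumed Gaussian SRF (the paper instead models the Offner grating's dispersion directly from its configuration parameters):

```python
import numpy as np

def gaussian_srf(wl, center, fwhm):
    """Illustrative Gaussian spectral response function."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * ((wl - center) / sigma) ** 2)

def band_radiance(wl, radiance, center, fwhm):
    """SRF-weighted band-effective radiance on a regular wavelength grid."""
    srf = gaussian_srf(wl, center, fwhm)
    return np.sum(radiance * srf) / np.sum(srf)
```

A smile effect would then be simulated simply by letting `center` drift with the across-track pixel index.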

  12. Wavelet compression techniques for hyperspectral data

    NASA Technical Reports Server (NTRS)

    Evans, Bruce; Ringer, Brian; Yeates, Mathew

    1994-01-01

    Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three-dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. In the first two cases, a wavelet transform coder was used for the two-dimensional compression. The third case used a three-dimensional extension of this same algorithm.
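The second approach (spectral decorrelation followed by independent coding of the transformed images) can be sketched with a PCA along the spectral axis; the wavelet coding of each component image is omitted here, and all names are illustrative:

```python
import numpy as np

def compress_spectral_pca(cube, k):
    """Decorrelate the spectral dimension of a (rows, cols, bands) cube,
    keep the k leading components (the "images" one would then code with
    a 2-D wavelet coder), and reconstruct to show the residual error."""
    r, c, b = cube.shape
    x = cube.reshape(-1, b).astype(float)
    mean = x.mean(axis=0)
    xc = x - mean
    cov = xc.T @ xc / xc.shape[0]
    evals, evecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    basis = evecs[:, -k:]                # k most energetic spectral directions
    coeffs = xc @ basis                  # decorrelated component images
    recon = coeffs @ basis.T + mean
    return recon.reshape(r, c, b), coeffs.reshape(r, c, k)
```

For a cube whose pixels share a single spectral shape, one component already reconstructs the data exactly, which is precisely the spectral redundancy the abstract describes.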

  13. The fusion of satellite and UAV data: simulation of high spatial resolution band

    NASA Astrophysics Data System (ADS)

    Jenerowicz, Agnieszka; Siok, Katarzyna; Woroszkiewicz, Malgorzata; Orych, Agata

    2017-10-01

    Remote sensing techniques used in precision agriculture and farming that apply imagery data obtained with sensors mounted on UAV platforms have become more popular in the last few years due to the availability of low-cost UAV platforms and low-cost sensors. Data obtained from low altitudes with low-cost sensors can be characterised by high spatial and radiometric resolution but quite low spectral resolution; therefore, the application of imagery data obtained with such technology is quite limited, and such data can be used only for basic land cover classification. To enrich the spectral resolution of imagery data acquired with low-cost sensors from low altitudes, the authors proposed the fusion of RGB data obtained with a UAV platform with multispectral satellite imagery. The fusion is based on the pansharpening process, which aims to integrate the spatial details of the high-resolution panchromatic image with the spectral information of lower-resolution multispectral or hyperspectral imagery to obtain multispectral or hyperspectral images with high spatial resolution. The key to pansharpening is to properly estimate the missing spatial details of the multispectral images while preserving their spectral properties. In the research, the authors presented the fusion of RGB images (with high spatial resolution) obtained with sensors mounted on low-cost UAV platforms and multispectral imagery acquired with satellite sensors, i.e., Landsat 8 OLI. To perform the fusion of UAV data with satellite imagery, a simulation of the panchromatic bands from the RGB data, based on a linear combination of the spectral channels, was conducted. Next, for the simulated bands and multispectral satellite images, the Gram-Schmidt pansharpening method was applied. As a result of the fusion, the authors obtained several multispectral images with very high spatial resolution and then analysed the spatial and spectral accuracies of the processed images.
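The two steps, simulating a panchromatic band as a linear combination of the RGB channels and then injecting its spatial detail, can be sketched as follows. The weights and the mean-based component substitution are simplifications standing in for the Gram-Schmidt method named in the abstract:

```python
import numpy as np

def simulate_pan(rgb, weights=(0.25, 0.5, 0.25)):
    """Simulated panchromatic band as a linear combination of the RGB
    channels (the weights are illustrative, not the paper's)."""
    w = np.asarray(weights, dtype=float)
    return np.tensordot(rgb.astype(float), w, axes=([-1], [0]))

def component_substitution_sharpen(ms, pan):
    """Simplified component-substitution pansharpening: inject the
    difference between the high-resolution pan band and the mean of
    the multispectral bands into every band."""
    ms = ms.astype(float)
    intensity = ms.mean(axis=-1, keepdims=True)
    return ms + (pan[..., None] - intensity)
```

When the pan band already equals the band mean, no detail is injected and the multispectral image passes through unchanged, which is a useful sanity check on any component-substitution scheme.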

  14. Utility of BRDF Models for Estimating Optimal View Angles in Classification of Remotely Sensed Images

    NASA Technical Reports Server (NTRS)

    Valdez, P. F.; Donohoe, G. W.

    1997-01-01

    Statistical classification of remotely sensed images attempts to discriminate between surface cover types on the basis of the spectral response recorded by a sensor. It is well known that surfaces reflect incident radiation as a function of wavelength, producing a spectral signature specific to the material under investigation. Multispectral and hyperspectral sensors sample the spectral response over tens and even hundreds of wavelength bands to capture the variation of spectral response with wavelength. Classification algorithms then exploit these differences in spectral response to distinguish between materials of interest. Sensors of this type, however, collect detailed spectral information from one direction (usually nadir) and consequently do not consider the directional nature of reflectance potentially detectable at different sensor view angles. Improvements in sensor technology have resulted in remote sensing platforms capable of detecting reflected energy across wavelengths (spectral signatures) and from multiple view angles (angular signatures) in the fore and aft directions. Sensors of this type include the moderate resolution imaging spectroradiometer (MODIS), the multiangle imaging spectroradiometer (MISR), and the airborne solid-state array spectroradiometer (ASAS). A goal of this paper, then, is to explore the utility of Bidirectional Reflectance Distribution Function (BRDF) models in the selection of optimal view angles for the classification of remotely sensed images by employing a strategy of searching for the maximum difference between surface BRDFs. After a brief discussion of directional reflectance in Section 2, attention is directed to the Beard-Maxwell BRDF model and its use in predicting the bidirectional reflectance of a surface. The selection of optimal viewing angles is addressed in Section 3, followed by conclusions and future work in Section 4.
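The strategy of searching for the view angle that maximizes the difference between two surface BRDFs can be sketched with a toy BRDF standing in for the Beard-Maxwell model (all parameters are illustrative):

```python
import numpy as np

def toy_brdf(theta_v, rho0, k):
    """Illustrative (not Beard-Maxwell) BRDF: a Lambertian base rho0
    plus a nadir-peaked lobe whose strength is controlled by k."""
    return rho0 * (1.0 + k * np.cos(np.radians(theta_v)) ** 4)

def optimal_view_angle(angles, params_a, params_b):
    """Grid-search the view angle maximising the BRDF contrast between
    two surface types, mirroring the strategy described above."""
    diff = np.abs(toy_brdf(angles, *params_a) - toy_brdf(angles, *params_b))
    return angles[int(np.argmax(diff))]
```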

  15. Fixed Pattern Noise pixel-wise linear correction for crime scene imaging CMOS sensor

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Messinger, David W.; Dube, Roger R.; Ientilucci, Emmett J.

    2017-05-01

    The filtered multispectral imaging technique might be a potential method for crime scene documentation and evidence detection due to its abundant spectral information as well as its non-contact and non-destructive nature. A low-cost and portable multispectral crime scene imaging device would be highly useful and efficient. The second-generation crime scene imaging system uses a CMOS imaging sensor to capture the spatial scene and bandpass Interference Filters (IFs) to capture spectral information. Unfortunately, CMOS sensors suffer from severe spatial non-uniformity compared to CCD sensors, and the major cause is Fixed Pattern Noise (FPN). IFs suffer from a "blue shift" effect and introduce spatial-spectral correlated errors. Therefore, FPN correction is critical to enhance crime scene image quality and is also helpful for spatial-spectral noise de-correlation. In this paper, a pixel-wise linear radiance to Digital Count (DC) conversion model is constructed for the crime scene imaging CMOS sensor. The pixel-wise conversion gain G_i,j and Dark Signal Non-Uniformity (DSNU) Z_i,j are calculated. The conversion gain is also divided into four components: an FPN row component, an FPN column component, a defects component and an effective photo response signal component. The conversion gain is then corrected by averaging the FPN column and row components and the defects component so that the sensor conversion gain is uniform. Based on the corrected conversion gain and the image incident radiance estimated by inverting the pixel-wise linear radiance-to-DC model, the corrected image's spatial uniformity can be enhanced to 7 times that of the raw image; the larger the image DC value within its dynamic range, the greater the enhancement.
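The pixel-wise linear model DC = G_i,j * L + Z_i,j can be calibrated from dark and flat-field frames and then inverted per pixel. A minimal sketch under those assumptions (the paper's further split of the gain into FPN row, column and defect components is omitted):

```python
import numpy as np

def calibrate_fpn(dark_frames, flat_frames, flat_radiance):
    """Estimate the pixel-wise linear model DC = G * L + Z.

    dark_frames: stack (N, H, W) captured at zero radiance  -> Z (DSNU);
    flat_frames: stack captured at a known uniform radiance -> G (gain).
    """
    Z = dark_frames.mean(axis=0)
    G = (flat_frames.mean(axis=0) - Z) / flat_radiance
    return G, Z

def correct(dc, G, Z):
    """Invert the pixel-wise model to estimate incident radiance."""
    return (dc - Z) / G
```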

  16. Detection of spectral line curvature in imaging spectrometer data

    NASA Astrophysics Data System (ADS)

    Neville, Robert A.; Sun, Lixin; Staenz, Karl

    2003-09-01

    A procedure has been developed to measure the band-centers and bandwidths for imaging spectrometers using data acquired by the sensor in flight. This is done for each across-track pixel, thus allowing the measurement of the instrument's slit curvature or spectral 'smile'. The procedure uses spectral features present in the at-sensor radiance which are common to all pixels in the scene. These are principally atmospheric absorption lines. The band-center and bandwidth determinations are made by correlating the sensor measured radiance with a modelled radiance, the latter calculated using MODTRAN 4.2. Measurements have been made for a number of instruments including Airborne Visible and Infra-Red Imaging Spectrometer (AVIRIS), SWIR Full Spectrum Imager (SFSI), and Hyperion. The measurements on AVIRIS data were performed as a test of the procedure; since AVIRIS is a whisk-broom scanner it is expected to be free of spectral smile. SFSI is an airborne pushbroom instrument with considerable spectral smile. Hyperion is a satellite pushbroom sensor with a relatively small degree of smile. Measurements of Hyperion were made using three different data sets to check for temporal variations.
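The band-center determination can be sketched as a search for the wavelength shift of the modelled at-sensor radiance that best correlates with the spectrum measured at a given across-track pixel (a grid search stands in for whatever optimizer the authors used; all names are illustrative):

```python
import numpy as np

def estimate_band_center_shift(wl, measured, modeled, shifts):
    """Return the wavelength shift (same units as wl) of the modeled
    radiance that maximises its correlation with the measured spectrum.
    Repeating this per across-track pixel traces out the spectral smile."""
    best, best_r = 0.0, -np.inf
    for s in shifts:
        # Resample the modeled spectrum onto the grid shifted by s.
        shifted = np.interp(wl, wl + s, modeled)
        r = np.corrcoef(measured, shifted)[0, 1]
        if r > best_r:
            best, best_r = s, r
    return best
```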

  17. Hyperspectral CMOS imager

    NASA Astrophysics Data System (ADS)

    Jerram, P. A.; Fryer, M.; Pratlong, J.; Pike, A.; Walker, A.; Dierickx, B.; Dupont, B.; Defernez, A.

    2017-11-01

    CCDs have been used for many years for hyperspectral imaging missions and have been extremely successful. These include the Medium Resolution Imaging Spectrometer (MERIS) [1] on Envisat, the Compact High Resolution Imaging Spectrometer (CHRIS) on Proba and the Ozone Monitoring Instrument operating in the UV spectral region. ESA are also planning a number of further missions that are likely to use CCD technology (Sentinel 3, 4 and 5). However, CMOS sensors have a number of advantages, which means that they will probably be used for hyperspectral applications in the longer term. There are two main advantages with CMOS sensors. First, a hyperspectral image consists of spectral lines with a large difference in intensity; in a frame transfer CCD the faint spectral lines have to be transferred through the part of the imager illuminated by intense lines. This can lead to cross-talk, and whilst this problem can be reduced by the use of split frame transfer and faster line rates, CMOS sensors do not require a frame transfer and hence inherently will not suffer from this problem. Second, with a CMOS sensor the intense spectral lines can be read multiple times within a frame to give a significant increase in dynamic range. We will describe the design and initial test of a CMOS sensor for use in hyperspectral applications. This device has been designed to give as high a dynamic range as possible with minimum cross-talk. The sensor has been manufactured on high-resistivity epitaxial silicon wafers and is back-thinned and left relatively thick in order to obtain the maximum quantum efficiency across the entire spectral range.

  18. Characterization of a compact 6-band multifunctional camera based on patterned spectral filters in the focal plane

    NASA Astrophysics Data System (ADS)

    Torkildsen, H. E.; Hovland, H.; Opsahl, T.; Haavardsholm, T. V.; Nicolas, S.; Skauli, T.

    2014-06-01

    In some applications of multi- or hyperspectral imaging, it is important to have a compact sensor. The most compact spectral imaging sensors are based on spectral filtering in the focal plane. For hyperspectral imaging, it has been proposed to use a "linearly variable" bandpass filter in the focal plane, combined with scanning of the field of view. As the image of a given object in the scene moves across the field of view, it is observed through parts of the filter with varying center wavelength, and a complete spectrum can be assembled. However, if the radiance received from the object varies with viewing angle, or with time, then the reconstructed spectrum will be distorted. We describe a camera design where this hyperspectral functionality is traded for multispectral imaging with better spectral integrity. Spectral distortion is minimized by using a patterned filter with 6 bands arranged close together, so that a scene object is seen by each spectral band in rapid succession and with minimal change in viewing angle. The set of 6 bands is repeated 4 times so that the spectral data can be checked for internal consistency. Still, the total extent of the filter in the scan direction is small. Therefore, the remainder of the image sensor can be used for conventional imaging, with potential for using motion tracking and 3D reconstruction to support the spectral imaging function. We show detailed characterization of the point spread function of the camera, demonstrating the importance of such characterization as a basis for image reconstruction. A simplified image reconstruction based on feature-based image coregistration is shown to yield reasonable results. Elimination of spectral artifacts due to scene motion is demonstrated.

  19. Applications of spectral band adjustment factors (SBAF) for cross-calibration

    USGS Publications Warehouse

    Chander, Gyanesh

    2013-01-01

    To monitor land surface processes over a wide range of temporal and spatial scales, it is critical to have coordinated observations of the Earth's surface acquired from multiple spaceborne imaging sensors. However, an integrated global observation framework requires an understanding of how land surface processes are seen differently by various sensors. This is particularly true for sensors acquiring data in spectral bands whose relative spectral responses (RSRs) are not similar and thus may produce different results while observing the same target. The intrinsic offsets between two sensors caused by RSR mismatches can be compensated by using a spectral band adjustment factor (SBAF), which takes into account the spectral profile of the target and the RSRs of the two sensors. The motivation of this work comes from the need to compensate the spectral response differences of multispectral sensors in order to provide a more accurate cross-calibration between the sensors. In this paper, radiometric cross-calibration of the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and the Terra Moderate Resolution Imaging Spectroradiometer (MODIS) sensors was performed using near-simultaneous observations over the Libya 4 pseudoinvariant calibration site in the visible and near-infrared spectral range. The RSR differences of the analogous ETM+ and MODIS spectral bands provide the opportunity to explore, understand, quantify, and compensate for the measurement differences between these two sensors. The cross-calibration was initially performed by comparing the top-of-atmosphere (TOA) reflectances between the two sensors over their lifetimes. The average percent differences in the long-term trends ranged from -5% to +6%. The RSR-compensated ETM+ TOA reflectance (ETM+*) measurements were then found to agree with MODIS TOA reflectance to within 5% for all bands when Earth Observing-1 Hyperion hyperspectral data were used to produce the SBAFs. These differences were later reduced to within 1% for all bands (except band 2) by using Environmental Satellite Scanning Imaging Absorption Spectrometer for Atmospheric Cartography hyperspectral data to produce the SBAFs.
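An SBAF of this kind is the ratio of the target's band-averaged signal as seen through the two sensors' RSRs. A minimal sketch on a regular wavelength grid (function names are assumptions):

```python
import numpy as np

def band_avg(spectrum, rsr):
    """RSR-weighted band average of a target spectrum."""
    return np.sum(spectrum * rsr) / np.sum(rsr)

def sbaf(target_spectrum, rsr_ref, rsr_test):
    """Spectral band adjustment factor: how the target's band-averaged
    signal through the reference sensor's RSR relates to that through
    the test sensor's RSR. Multiplying the test sensor's measurement by
    this factor compensates the RSR mismatch for this target."""
    return band_avg(target_spectrum, rsr_ref) / band_avg(target_spectrum, rsr_test)
```

For a spectrally flat target the factor is exactly 1 regardless of the RSRs, which matches the intuition that RSR mismatches only matter when the target spectrum has structure; this is also why the abstract stresses that the adjustment is target dependent.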

  20. Hyperspectral imaging simulation of object under sea-sky background

    NASA Astrophysics Data System (ADS)

    Wang, Biao; Lin, Jia-xuan; Gao, Wei; Yue, Hui

    2016-10-01

    Remote sensing image simulation plays an important role in spaceborne/airborne payload demonstration and algorithm development. Hyperspectral imaging is valuable in marine monitoring, search and rescue. To meet the demand for spectral imaging of objects under complex sea scenes, a physics-based simulation method for spectral images of objects under a sea scene is proposed. By developing an imaging simulation model that considers the object, background, atmosphere conditions and sensor, it is possible to examine the influence of wind speed, atmosphere conditions and other environmental factors on spectral image quality under a complex sea scene. Firstly, the sea scattering model is established based on the Phillips sea spectrum model, rough surface scattering theory and the water volume scattering characteristics. The measured bidirectional reflectance distribution function (BRDF) data of objects are fit to a statistical model. MODTRAN software is used to obtain the solar illumination on the sea, sky brightness, the atmosphere transmittance from sea to sensor and the atmosphere backscattered radiance, and a Monte Carlo ray tracing method is used to calculate the sea surface-object composite scattering and the spectral image. Finally, the object spectrum is acquired by space transformation, radiation degradation and the addition of noise. The model connects the spectral image with the environmental parameters, the object parameters, and the sensor parameters, which provides a tool for payload demonstration and algorithm development.
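The radiometric chain described above combines the MODTRAN outputs into a single-bounce at-sensor radiance. A minimal sketch of that composition (symbol names are assumptions; adjacency and multiple-scattering terms are ignored):

```python
import numpy as np

def at_sensor_radiance(e_sun, sza_deg, t_down, rho, t_up, l_path):
    """Single-bounce at-sensor radiance: solar irradiance e_sun arriving
    at solar zenith angle sza_deg, attenuated by the downward atmosphere
    transmittance t_down, reflected by a Lambertian surface of
    reflectance rho, attenuated again by the upward transmittance t_up,
    plus the atmospheric path radiance l_path."""
    mu = np.cos(np.radians(sza_deg))
    return e_sun * mu * t_down * rho * t_up / np.pi + l_path
```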

  1. On-Orbit Calibration of a Multi-Spectral Satellite Sensor Using a High Altitude Airborne Imaging Spectrometer

    NASA Technical Reports Server (NTRS)

    Green, R. O.; Shimada, M.

    1996-01-01

    Earth-looking satellites must be calibrated in order to quantitatively measure and monitor components of land, water and atmosphere of the Earth system. The inevitable change in performance due to the stress of satellite launch requires that the calibration of a satellite sensor be established and validated on-orbit. A new approach to on-orbit satellite sensor calibration has been developed using the flight of a high altitude calibrated airborne imaging spectrometer below a multi-spectral satellite sensor.

  2. Multispectral Imaging in Cultural Heritage Conservation

    NASA Astrophysics Data System (ADS)

    Del Pozo, S.; Rodríguez-Gonzálvez, P.; Sánchez-Aparicio, L. J.; Muñoz-Nieto, A.; Hernández-López, D.; Felipe-García, B.; González-Aguilera, D.

    2017-08-01

    This paper summarizes the main contributions of the thesis entitled "Multispectral imaging for the analysis of materials and pathologies in civil engineering, constructions and natural spaces", awarded by CIPA-ICOMOS for its connection with the preservation of Cultural Heritage. The thesis is framed within close-range remote sensing approaches based on the fusion of sensors operating in the optical domain (visible to shortwave infrared spectrum). In the field of heritage preservation, multispectral imaging is a suitable technique due to its non-destructive nature and its versatility. It combines imaging and spectroscopy to analyse materials and land covers and enables the use of a variety of geomatic sensors for this purpose. These sensors collect both spatial and spectral information for a given scenario and a specific spectral range, so that their smallest storage units record the spectral properties of the radiation reflected by the surface of interest. The main goal of this research work is to characterise different construction materials as well as the main pathologies of Cultural Heritage elements by combining active and passive sensors recording data in different spectral ranges. Conclusions about the suitability of each type of sensor and spectral range are drawn for each particular case study and type of damage. It should be emphasised that the results are not limited to images, since 3D intensity data from laser scanners can be integrated with 2D data from passive sensors, yielding high-quality products owing to the added value that metric information brings to multispectral images.

  3. A Note on the Temporary Misregistration of Landsat-8 Operational Land Imager (OLI) and Sentinel-2 Multi Spectral Instrument (MSI) Imagery

    NASA Technical Reports Server (NTRS)

    Storey, James; Roy, David P.; Masek, Jeffrey; Gascon, Ferran; Dwyer, John; Choate, Michael

    2016-01-01

    The Landsat-8 and Sentinel-2 sensors provide multi-spectral image data with similar spectral and spatial characteristics that together provide improved temporal coverage globally. Both systems are designed to register Level 1 products to a reference image framework; however, the Landsat-8 framework, based upon the Global Land Survey images, contains residual geolocation errors leading to an expected sensor-to-sensor misregistration of 38 m (2σ). These misalignments vary geographically but should be stable for a given area. The Landsat framework will be readjusted for consistency with the Sentinel-2 Global Reference Image, with completion expected in 2018. In the interim, users can measure Landsat-to-Sentinel tie points to quantify the misalignment in their area of interest and, if appropriate, reproject the data for better alignment.

  4. A note on the temporary misregistration of Landsat-8 Operational Land Imager (OLI) and Sentinel-2 Multi Spectral Instrument (MSI) imagery

    USGS Publications Warehouse

    Storey, James C.; Roy, David P.; Masek, Jeffrey; Gascon, Ferran; Dwyer, John L.; Choate, Michael J.

    2016-01-01

    The Landsat-8 and Sentinel-2 sensors provide multi-spectral image data with similar spectral and spatial characteristics that together provide improved temporal coverage globally. Both systems are designed to register Level 1 products to a reference image framework; however, the Landsat-8 framework, based upon the Global Land Survey images, contains residual geolocation errors leading to an expected sensor-to-sensor misregistration of 38 m (2σ). These misalignments vary geographically but should be stable for a given area. The Landsat framework will be readjusted for consistency with the Sentinel-2 Global Reference Image, with completion expected in 2018. In the interim, users can measure Landsat-to-Sentinel tie points to quantify the misalignment in their area of interest and, if appropriate, reproject the data for better alignment.

  5. Spectral Reconstruction for Obtaining Virtual Hyperspectral Images

    NASA Astrophysics Data System (ADS)

    Perez, G. J. P.; Castro, E. C.

    2016-12-01

    Hyperspectral sensors have demonstrated their capabilities in identifying materials and detecting processes in a satellite scene. However, the availability of hyperspectral images is limited due to the high development cost of these sensors. Currently, most of the readily available data come from multi-spectral instruments. Spectral reconstruction is an alternative method to address the need for hyperspectral information. The spectral reconstruction technique has been shown to provide quick and accurate detection of defects in integrated circuits, to recover damaged parts of frescoes, and to aid in converting a microscope into an imaging spectrometer. By using several spectral bands together with a spectral library, a spectrum acquired by a sensor can be expressed as a linear superposition of elementary signals. In this study, spectral reconstruction is used to estimate the spectra of different surfaces imaged by Landsat 8. Four atmospherically corrected surface reflectance bands of Landsat 8, three visible (499 nm, 585 nm, 670 nm) and one near-infrared (872 nm), are used together with a spectral library of ground elements acquired from the United States Geological Survey (USGS). The spectral library is limited to the 420-1020 nm spectral range and is interpolated at one nanometer resolution. Singular Value Decomposition (SVD) is used to calculate the basis spectra, which are then applied to reconstruct the spectrum. The spectral reconstruction is applied to test cases within the library consisting of vegetation communities. The technique was successful in reconstructing a hyperspectral signal with an error of less than 12% for most of the test cases. Hence, this study demonstrates the potential of simulating information at any desired wavelength, creating a virtual hyperspectral sensor without the need for additional satellite bands.
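    The reconstruction pipeline described here (an SVD basis derived from a library, with coefficients fitted to a few band samples) can be sketched as follows. The endmember shapes and library mixtures are invented for illustration; only the four band centers and the 420-1020 nm, 1 nm grid are taken from the record.

```python
import numpy as np

rng = np.random.default_rng(1)
wl = np.arange(420, 1021)  # 420-1020 nm at 1 nm, as in the study's library range

# Three hypothetical endmember spectra (smooth, vegetation-like shapes)
e1 = 0.2 + 0.3 * np.exp(-((wl - 550) / 60.0) ** 2)
e2 = 0.1 + 0.5 / (1.0 + np.exp(-(wl - 700) / 15.0))  # red-edge-like step
e3 = 0.05 + 0.0002 * (wl - 420)
endmembers = np.stack([e1, e2, e3])

# A small spectral "library" of random positive mixtures of the endmembers
library = rng.uniform(0.1, 1.0, (8, 3)) @ endmembers

# Basis spectra from the SVD of the library (leading rows of vt span it)
_, _, vt = np.linalg.svd(library, full_matrices=False)
basis = vt[:3]

# "Measurements": a target in the library's span, sampled at the four bands
true = np.array([0.5, 0.3, 0.2]) @ endmembers
bands = [int(np.argmin(np.abs(wl - c))) for c in (499, 585, 670, 872)]
samples = true[bands]

# Least-squares fit of basis coefficients to the band samples, then
# reconstruction of the full 1 nm spectrum from those coefficients
coef, *_ = np.linalg.lstsq(basis[:, bands].T, samples, rcond=None)
recon = coef @ basis
rel_err = np.abs(recon - true).max() / true.max()
```

    For a target that lies in the span of the library the reconstruction is essentially exact; real spectra outside the span incur residual error, which is what the record's 12% figure quantifies.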

  6. Hyperspectral Imaging Sensors and the Marine Coastal Zone

    NASA Technical Reports Server (NTRS)

    Richardson, Laurie L.

    2000-01-01

    Hyperspectral imaging sensors greatly expand the potential of remote sensing to assess, map, and monitor marine coastal zones. Each pixel in a hyperspectral image contains an entire spectrum of information. As a result, hyperspectral image data can be processed in two very different ways: by image classification techniques, to produce mapped outputs of features in the image on a regional scale; and by spectral analysis of the data embedded within each pixel of the image. The latter is particularly useful in marine coastal zones because of the spectral complexity of the suspended as well as benthic features found in these environments. Spectral-based analysis of hyperspectral (AVIRIS) imagery was carried out to investigate a marine coastal zone of South Florida, USA. Florida Bay is a phytoplankton-rich estuary characterized by taxonomically distinct phytoplankton assemblages and extensive seagrass beds. End-member spectra were extracted from AVIRIS image data corresponding to ground-truth sample stations and well-known field sites. Spectral libraries were constructed from the AVIRIS end-member spectra and used to classify images using the Spectral Angle Mapper (SAM) algorithm, a spectral-based approach that compares the spectrum in each pixel of an image with each spectrum in a spectral library. Using this approach, different phytoplankton assemblages containing diatoms, cyanobacteria, and green microalgae, as well as benthic communities (seagrasses), were mapped.
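    The Spectral Angle Mapper step can be sketched as below: each pixel is assigned the library spectrum that makes the smallest angle with it. The two "endmember" spectra and the 2x2 toy cube are invented for illustration, not AVIRIS data.

```python
import numpy as np

def spectral_angle(pixel, reference):
    # Angle (radians) between a pixel spectrum and a library reference spectrum
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(cube, library):
    # Assign each pixel the index of the library spectrum with the smallest angle.
    # cube: (rows, cols, bands); library: (n_classes, bands)
    rows, cols, bands = cube.shape
    flat = cube.reshape(-1, bands)
    angles = np.array([[spectral_angle(p, ref) for ref in library] for p in flat])
    return angles.argmin(axis=1).reshape(rows, cols)

# Toy 2x2 "image" with two made-up endmember classes
library = np.array([[0.1, 0.4, 0.8],    # class 0: e.g. a seagrass-like shape
                    [0.6, 0.5, 0.2]])   # class 1: e.g. a cyanobacteria-like shape
cube = np.array([[[0.05, 0.21, 0.42], [0.30, 0.26, 0.11]],
                 [[0.20, 0.78, 1.60], [0.61, 0.49, 0.19]]])
labels = sam_classify(cube, library)
```

    A useful property of SAM is that the angle is invariant to overall brightness, so dim and bright instances of the same material map to the same class.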

  7. A novel digital image sensor with row wise gain compensation for Hyper Spectral Imager (HySI) application

    NASA Astrophysics Data System (ADS)

    Lin, Shengmin; Lin, Chi-Pin; Wang, Weng-Lyang; Hsiao, Feng-Ke; Sikora, Robert

    2009-08-01

    A 256x512-element digital image sensor has been developed which has a large pixel size, slow scan rate and low power consumption for Hyper Spectral Imager (HySI) applications. The device is a mixed-mode, system-on-chip (SOC) IC. It combines analog circuitry, digital circuitry and optical sensor circuitry on a single chip. The chip integrates a 256x512 active pixel sensor array, a programmable gain amplifier (PGA) for row-wise gain setting, an I2C interface, SRAM, a 12-bit analog-to-digital converter (ADC), a voltage regulator, low-voltage differential signaling (LVDS) and a timing generator. The device provides 256 pixels of spatial resolution and 512 bands of spectral resolution ranging from 400 nm to 950 nm in wavelength. In row-wise gain readout mode, one can set a different gain on each row of the photodetector by storing the gain-setting data in the SRAM through the I2C interface. This unique row-wise gain setting can be used to compensate for the non-uniformity of the silicon spectral response. Owing to this unique function, the device is well suited for hyperspectral imager applications. The HySI camera, located on board the Chandrayaan-1 satellite, was successfully launched to the Moon on Oct. 22, 2008. The device is currently mapping the Moon and sending back excellent images of the lunar surface. The device design and the Moon image data are presented in this paper.
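    The effect of the row-wise gain compensation can be illustrated numerically. The quantum-efficiency curve below is a hypothetical stand-in for the silicon spectral response; only the 512 spectral rows x 256 spatial columns layout mirrors the record.

```python
import numpy as np

# Hypothetical silicon response roll-off across the 512 spectral rows
# (400-950 nm): response peaks mid-band and falls toward the UV and NIR ends
rows = 512
wl = np.linspace(400.0, 950.0, rows)
qe = np.exp(-0.5 * ((wl - 650.0) / 180.0) ** 2)  # relative response per row

# Row-wise gains chosen to equalize the response (analogous to the
# per-row gain settings stored in the sensor's SRAM)
gains = qe.max() / qe

# A flat-field scene: every wavelength carries the same true signal
raw = qe[:, None] * np.ones((rows, 256))   # rows = spectral, cols = spatial
compensated = gains[:, None] * raw
# After compensation, every spectral row reports the same level
```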

  8. Automatic parquet block sorting using real-time spectral classification

    NASA Astrophysics Data System (ADS)

    Astrom, Anders; Astrand, Erik; Johansson, Magnus

    1999-03-01

    This paper presents a real-time spectral classification system based on the PGP spectrograph and a smart image sensor. The PGP is a spectrograph which extracts the spectral information from a scene and projects the information onto an image sensor, a method often referred to as imaging spectroscopy. The classification is based on linear models and categorizes a number of pixels along a line. Previous systems adopting this method have used standard sensors, which often resulted in poor performance. The new system, however, is based on a patented near-sensor classification method, which exploits analogue features of the smart image sensor. The method reduces the enormous amount of data to be processed at an early stage, thus making true real-time spectral classification possible. The system has been evaluated on hardwood parquet boards, showing very good results. The color defects considered in the experiments were blue stain, white sapwood, yellow decay and red decay. In addition to these four defect classes, a reference class was used to indicate correct surface color. The system calculates a statistical measure for each parquet block, giving the pixel defect percentage. The patented method makes it possible to run at very high speeds with a high spectral discrimination ability. Using a powerful illuminator, the system can run with a line frequency exceeding 2000 lines/s. This makes it possible to maintain high production speed and still measure with good resolution.

  9. Vision communications based on LED array and imaging sensor

    NASA Astrophysics Data System (ADS)

    Yoo, Jong-Ho; Jung, Sung-Yoon

    2012-11-01

    In this paper, we propose a new communication concept called "vision communication", based on an LED array and an image sensor. The system consists of an LED array as the transmitter and a digital device that includes an image sensor, such as a CCD or CMOS sensor, as the receiver. In order to transmit data, the proposed communication scheme simultaneously uses digital image processing and optical wireless communication techniques. A cognitive communication scheme therefore becomes possible with the help of the recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each LED in the array can emit a multi-spectral optical signal in the visible, infrared or ultraviolet range, the data rate can be increased in a manner similar to the WDM and MIMO techniques used in traditional optical and wireless communications. In addition, this multi-spectral capability also makes it possible to avoid optical noise in the communication environment. In our vision communication scheme, a data packet is composed of sync data and information data. The sync data are used to detect the transmitter area and to calibrate the distorted image snapshots obtained by the image sensor. By matching the optical switching rate of the LED array to the frame rate (frames per second) of the image sensor, we can decode the information data contained in each image snapshot using image processing and optical wireless communication techniques. Through experiments on a practical test-bed system, we confirm the feasibility of the proposed vision communication based on an LED array and an image sensor.

  10. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    NASA Technical Reports Server (NTRS)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers (CTISs) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  11. Miniature infrared hyperspectral imaging sensor for airborne applications

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Hinnrichs, Bradford; McCutchen, Earl

    2017-05-01

    Pacific Advanced Technology (PAT) has developed an infrared hyperspectral camera, in both the MWIR and LWIR, small enough to serve as a payload on a miniature unmanned aerial vehicle. The optical system has been integrated into the cold shield of the sensor, enabling the small size and weight of the sensor. This new and innovative approach to the infrared hyperspectral imaging spectrometer uses micro-optics and is explained in this paper. The micro-optics consist of an area array of diffractive optical elements where each element is tuned to image a different spectral region onto a common focal plane array. The lenslet array is embedded in the cold shield of the sensor and actuated with a miniature piezo-electric motor. This approach enables rapid infrared spectral imaging, with multiple spectral images collected and processed simultaneously in each frame of the camera. This paper presents our opto-mechanical design approach, which results in an infrared hyperspectral imaging system small enough to serve as a payload on a mini-UAV or commercial quadcopter. The diffractive optical elements used in the lenslet array are blazed gratings, where each lenslet is tuned for a different spectral bandpass. The lenslets are configured in an area array placed a few millimeters above the focal plane and embedded in the cold shield to reduce the background signal normally associated with the optics. We have developed various systems using different numbers of lenslets in the area array. The spatial resolution is determined by the size of the focal plane array and the diameter of the lenslet array. A 2 x 2 lenslet array images four different spectral images of the scene each frame and, when coupled with a 512 x 512 focal plane array, gives a spatial resolution of 256 x 256 pixels for each spectral image. 
Another system that we developed uses a 4 x 4 lenslet array on a 1024 x 1024 pixel focal plane array, which gives 16 spectral images of 256 x 256 pixel resolution each frame.

  12. Ultrasonic imaging of seismic physical models using a fringe visibility enhanced fiber-optic Fabry-Perot interferometric sensor.

    PubMed

    Zhang, Wenlu; Chen, Fengyi; Ma, Wenwen; Rong, Qiangzhou; Qiao, Xueguang; Wang, Ruohui

    2018-04-16

    A fringe-visibility-enhanced, fiber-optic Fabry-Perot interferometer-based ultrasonic sensor is proposed and experimentally demonstrated for seismic physical model imaging. The sensor consists of a graded-index multimode fiber collimator and a PTFE (polytetrafluoroethylene) diaphragm forming a Fabry-Perot interferometer. Owing to the increased spectral sideband slope of the sensor and the small Young's modulus of the PTFE diaphragm, a high response to both continuous and pulsed ultrasound, with a high SNR of 42.92 dB at 300 kHz, is achieved when the spectral sideband filtering technique is used to interrogate the sensor. The reconstructed ultrasonic images can clearly differentiate the shapes of the models with high resolution.

  13. [Cross comparison of ASTER and Landsat ETM+ multispectral measurements for NDVI and SAVI vegetation indices].

    PubMed

    Xu, Han-qiu; Zhang, Tie-jun

    2011-07-01

    The present paper investigates the quantitative relationship between the NDVI and SAVI vegetation indices of the Landsat ETM+ and ASTER sensors based on three tandem image pairs. The study examines how well ASTER vegetation observations replicate ETM+ vegetation observations and, more importantly, the difference in the vegetation observations between the two sensors. The DN values of the three image pairs were first converted to at-sensor reflectance to reduce radiometric differences between the two sensors' images. The NDVI and SAVI vegetation indices of the two sensors were then calculated using the converted reflectance. The quantitative relationship was revealed through regression analysis on the scatter plots of the vegetation index values of the two sensors. Models for the conversion between the two sensors' vegetation indices were also obtained from the regression. The results show that a difference does exist between the two sensors' vegetation indices, though they have a very strong positive linear relationship. The study found that the red and near-infrared measurements differ between the two sensors, with ASTER generally producing higher reflectance in the red band and lower reflectance in the near-infrared band than the ETM+ sensor. This results in the ASTER sensor producing lower spectral vegetation index measurements, for the same target, than ETM+. The relative spectral response function differences in the red and near-infrared bands between the two sensors are believed to be the main factor contributing to their differences in vegetation index measurements, because the red and near-infrared relative spectral response features of the ASTER sensor overlap the vegetation "red edge" spectral region. The obtained conversion models have high accuracy, with an RMSE of less than 0.04 for both sensors' inter-conversion between corresponding vegetation indices.
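    The two indices compared in this record are simple band combinations. The sketch below uses invented reflectance values chosen only to be consistent with the record's finding (ASTER reading higher in red and lower in near-infrared than ETM+); the SAVI soil factor L = 0.5 is the common default, not a value from the paper.

```python
import numpy as np

def ndvi(red, nir):
    # Normalized Difference Vegetation Index
    return (nir - red) / (nir + red)

def savi(red, nir, l=0.5):
    # Soil-Adjusted Vegetation Index with canopy background factor L
    return (1.0 + l) * (nir - red) / (nir + red + l)

# Illustrative at-sensor reflectances for the same vegetated target
etm_red, etm_nir = 0.05, 0.40      # hypothetical ETM+ readings
aster_red, aster_nir = 0.06, 0.38  # hypothetical ASTER readings

ndvi_etm, ndvi_aster = ndvi(etm_red, etm_nir), ndvi(aster_red, aster_nir)
savi_etm, savi_aster = savi(etm_red, etm_nir), savi(aster_red, aster_nir)
# Higher red and lower NIR push both ASTER indices below the ETM+ values,
# matching the direction of the difference reported in the study
```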

  14. Toward More Accurate Iris Recognition Using Cross-Spectral Matching.

    PubMed

    Nalla, Pattabhi Ramaiah; Kumar, Ajay

    2017-01-01

    Iris recognition systems are increasingly deployed for large-scale applications such as national ID programs, which continue to acquire millions of iris images to establish identity among billions. However, with the availability of a variety of iris sensors deployed for iris imaging under different illuminations/environments, significant performance degradation is expected when matching iris images acquired under two different domains (either sensor-specific or wavelength-specific). This paper develops a domain adaptation framework to address this problem and introduces a new algorithm using a Markov random field model to significantly improve cross-domain iris recognition. The proposed domain adaptation framework, based on naive Bayes nearest neighbor classification, uses a real-valued feature representation which is capable of learning domain knowledge. Our approach, which estimates corresponding visible iris patterns by synthesizing iris patches from the near-infrared iris images, achieves superior results for cross-spectral iris recognition. In this paper, a new class of bi-spectral iris recognition system that can simultaneously acquire visible and near-infrared images with pixel-to-pixel correspondence is proposed and evaluated. The paper presents experimental results from three publicly available databases, the PolyU cross-spectral iris image database, IIITD CLI, and the UND database, and achieves superior results for cross-sensor and cross-spectral iris matching.

  15. Black light - How sensors filter spectral variation of the illuminant

    NASA Technical Reports Server (NTRS)

    Brainard, David H.; Wandell, Brian A.; Cowan, William B.

    1989-01-01

    Visual sensor responses may be used to classify objects on the basis of their surface reflectance functions. In a color image, the image data are represented as a vector of sensor responses at each point in the image. This vector depends both on the surface reflectance functions and on the spectral power distribution of the ambient illumination. Algorithms designed to classify objects on the basis of their surface reflectance functions typically attempt to overcome the dependence of the sensor responses on the illuminant by integrating sensor data collected from multiple surfaces. In machine vision applications, it is shown that it is often possible to design the sensor spectral responsivities so that the vector direction of the sensor responses does not depend upon the illuminant. The conditions under which this is possible are given and an illustrative calculation is performed. In biological systems, where the sensor responsivities are fixed, it is shown that some changes in the illumination cause no change in the sensor responses. Such changes in illuminant are called black illuminants. It is possible to express any illuminant as the sum of two unique components. One component is a black illuminant. The second component is called the visible component. The visible component of an illuminant completely characterizes the effect of the illuminant on the vector of sensor responses.
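    The decomposition described above has a direct linear-algebra reading: with sensor responses r = S E for a responsivity matrix S, the black illuminants are exactly the null space of S, and the visible component is the projection of E onto the row space of S. A minimal numerical sketch follows, with a random hypothetical responsivity matrix standing in for real sensor curves.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 31                               # wavelength samples, e.g. 400-700 nm at 10 nm
S = rng.uniform(0.0, 1.0, (3, n))    # hypothetical 3-sensor responsivity matrix
E = rng.uniform(0.2, 1.0, n)         # an arbitrary illuminant spectrum

# Visible component: projection of E onto the row space of S
E_vis = np.linalg.pinv(S) @ (S @ E)
# Black component: the residual, which the sensors cannot see
E_black = E - E_vis
# S @ E_black is (numerically) zero, and E_vis alone reproduces the
# full sensor response to E, as the abstract describes
```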

  16. Onboard spectral imager data processor

    NASA Astrophysics Data System (ADS)

    Otten, Leonard J.; Meigs, Andrew D.; Franklin, Abraham J.; Sears, Robert D.; Robison, Mark W.; Rafert, J. Bruce; Fronterhouse, Donald C.; Grotbeck, Ronald L.

    1999-10-01

    Previous papers have described the concept behind the MightySat II.1 program, the satellite's Fourier transform imaging spectrometer's optical design, the design of the spectral imaging payload, and its initial qualification testing. This paper discusses the onboard data processing designed to reduce the amount of downloaded data by an order of magnitude and to provide a demonstration of a smart spaceborne spectral imaging sensor. Two custom components, a spectral imager interface 6U VME card that moves data at over 30 MByte/s, and four TI C-40 processors mounted on a second 6U VME card and daughter card, are used to adapt the sensor to the spacecraft and provide the necessary high-speed processing. A system architecture that offers both onboard real-time image processing and high-speed post-collection analysis of the spectral data has been developed. In addition to the onboard processing of the raw data into a usable spectral data volume, one feature extraction technique has been incorporated. This algorithm operates on the basic interferometric data and is integrated within the data compression process to search for uploadable feature descriptions.

  17. Effects of Spectral Band Differences between Landsat 8 Operational Land Imager (OLI) and Sentinel 2A Multispectral Instrument (MSI)

    NASA Astrophysics Data System (ADS)

    Micijevic, E.; Haque, M. O.

    2015-12-01

    In satellite remote sensing, Landsat sensors are recognized for providing well calibrated satellite images for over four decades. This image data set provides an important contribution to detection and temporal analysis of land changes. Landsat 8 (L8), the latest satellite of the Landsat series, was designed to continue its legacy as well as to embrace advanced technology and satisfy the demand of the broader scientific community. Sentinel 2A (S2A), a European satellite launched in June 2015, is designed to keep data continuity of Landsat and SPOT like satellites. The S2A MSI sensor is equipped with spectral bands similar to L8 OLI and includes some additional ones. Compared to L8 OLI, green and near infrared MSI bands have narrower bandwidths, whereas coastal-aerosol (CA) and cirrus have larger bandwidths. The blue and red MSI bands cover higher wavelengths than the matching OLI bands. Although the spectral band differences are not large, their combination with the spectral signature of a studied target can largely affect the Top Of Atmosphere (TOA) reflectance seen by the sensors. This study investigates the effect of spectral band differences between S2A MSI and L8 OLI sensors. The differences in spectral bands between sensors can be assessed by calculating Spectral Band Adjustment Factors (SBAF). For radiometric calibration purposes, the SBAFs for the calibration test site are used to bring the two sensors to the same radiometric scale. However, the SBAFs are target dependent and different sensors calibrated to the same radiometric scale will (correctly!) measure different reflectance for the same target. Thus, when multiple sensors are used to study a given target, the sensor responses need to be adjusted using SBAFs specific to that target. Comparison of the SBAFs for S2A MSI and L8 OLI based on various vegetation spectral profiles revealed variations in radiometric responses as high as 15%. 
Depending on the target under study, these differences could be even higher.

  18. Few-photon color imaging using energy-dispersive superconducting transition-edge sensor spectrometry

    NASA Astrophysics Data System (ADS)

    Niwa, Kazuki; Numata, Takayuki; Hattori, Kaori; Fukuda, Daiji

    2017-04-01

    Highly sensitive spectral imaging is increasingly being demanded in bioanalysis research and industry to obtain the maximum information possible from molecules of different colors. We introduce an application of the superconducting transition-edge sensor (TES) technique to highly sensitive spectral imaging. A TES is an energy-dispersive photodetector that can distinguish the wavelength of each incident photon. Its effective spectral range is from the visible to the infrared (IR), up to 2800 nm, which is beyond the capabilities of other photodetectors. TES was employed in this study in a fiber-coupled optical scanning microscopy system, and a test sample of a three-color ink pattern was observed. A red-green-blue (RGB) image and a near-IR image were successfully obtained in the few-incident-photon regime, whereas only a black and white image could be obtained using a photomultiplier tube. Spectral data were also obtained from a selected focal area out of the entire image. The results of this study show that TES is feasible for use as an energy-dispersive photon-counting detector in spectral imaging applications.
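    Energy-dispersive detection of this kind amounts to converting each photon's measured energy to a wavelength via λ = hc/E and binning the events into color channels. The photon energies and channel edges below are hypothetical examples, not measurements from the study.

```python
import numpy as np

H = 6.62607015e-34  # Planck constant, J*s
C = 2.99792458e8    # speed of light, m/s

def wavelength_nm(energy_j):
    # Photon wavelength (nm) from its measured energy: lambda = h*c / E
    return H * C / energy_j * 1e9

# Per-photon energies as an energy-dispersive detector would resolve them
energies = np.array([3.1e-19, 4.0e-19, 2.8e-19, 1.0e-19])  # J, hypothetical events
wl = wavelength_nm(energies)

# Sort events into coarse channels: <500 nm, 500-600 nm, 600-700 nm, >700 nm (IR)
channels = np.digitize(wl, [500.0, 600.0, 700.0])
```

    The last event (1.0e-19 J, roughly 2000 nm) lands in the infrared channel, illustrating how the same detector covers both visible color imaging and the near-IR range mentioned in the record.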

  19. Few-photon color imaging using energy-dispersive superconducting transition-edge sensor spectrometry.

    PubMed

    Niwa, Kazuki; Numata, Takayuki; Hattori, Kaori; Fukuda, Daiji

    2017-04-04

    Highly sensitive spectral imaging is increasingly being demanded in bioanalysis research and industry to obtain the maximum information possible from molecules of different colors. We introduce an application of the superconducting transition-edge sensor (TES) technique to highly sensitive spectral imaging. A TES is an energy-dispersive photodetector that can distinguish the wavelength of each incident photon. Its effective spectral range is from the visible to the infrared (IR), up to 2800 nm, which is beyond the capabilities of other photodetectors. TES was employed in this study in a fiber-coupled optical scanning microscopy system, and a test sample of a three-color ink pattern was observed. A red-green-blue (RGB) image and a near-IR image were successfully obtained in the few-incident-photon regime, whereas only a black and white image could be obtained using a photomultiplier tube. Spectral data were also obtained from a selected focal area out of the entire image. The results of this study show that TES is feasible for use as an energy-dispersive photon-counting detector in spectral imaging applications.

  20. Multi-Image Registration for an Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn

    2002-01-01

    An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
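    The second, control-point-based registration method amounts to a least-squares fit of a spatial transformation. A minimal sketch with an affine model and synthetic control points follows; the warp parameters are invented for illustration and are not EVS sensor geometry.

```python
import numpy as np

def fit_affine(src, dst):
    # Least-squares affine transform mapping src control points to dst.
    # src, dst: (n, 2) arrays of (x, y); returns a 2x3 matrix A such that
    # each dst point ~= [x, y, 1] @ A.T
    n = src.shape[0]
    design = np.hstack([src, np.ones((n, 1))])        # (n, 3) design matrix
    coeffs, *_ = np.linalg.lstsq(design, dst, rcond=None)
    return coeffs.T                                   # (2, 3)

# Hypothetical "true" warp between two sensors' images: scale, rotation, shift
theta = np.deg2rad(5.0)
R = 1.2 * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
t = np.array([10.0, -4.0])

# User-selected control points in the source image and their matches
src = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 25]], dtype=float)
dst = src @ R.T + t

A = fit_affine(src, dst)
mapped = np.hstack([src, np.ones((5, 1))]) @ A.T  # re-mapped control points
```

    With noisy, user-selected tie points the fit is no longer exact, and the least-squares residual gives a direct estimate of the remaining registration error.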

  1. [Nitrogen stress measurement of canola based on a multi-spectral charge-coupled device imaging sensor].

    PubMed

    Feng, Lei; Fang, Hui; Zhou, Wei-Jun; Huang, Min; He, Yong

    2006-09-01

    Site-specific variable nitrogen application is one of the major precision crop production management operations. Obtaining sufficient crop nitrogen stress information is essential for achieving effective site-specific nitrogen applications. The present paper describes the development of a multi-spectral nitrogen deficiency sensor, which uses three channels (green, red, near-infrared) of crop images to determine the nitrogen level of canola. The sensor assesses nitrogen stress by estimating the SPAD value of the canola from the canopy reflectance sensed in the three channels (green, red, near-infrared) of the multi-spectral camera. The core of this investigation is the calibration method relating the multi-spectral references to the nitrogen levels in crops measured using a SPAD 502 chlorophyll meter. Based on the results obtained from this study, it can be concluded that a multi-spectral CCD camera can provide sufficient information to perform reasonable SPAD value estimation during field operations.

  2. Wavelength-Scanning SPR Imaging Sensors Based on an Acousto-Optic Tunable Filter and a White Light Laser

    PubMed Central

    Zeng, Youjun; Wang, Lei; Wu, Shu-Yuen; He, Jianan; Qu, Junle; Li, Xuejin; Ho, Ho-Pui; Gu, Dayong; Gao, Bruce Zhi; Shao, Yonghong

    2017-01-01

    A fast surface plasmon resonance (SPR) imaging biosensor system based on wavelength interrogation using an acousto-optic tunable filter (AOTF) and a white light laser is presented. The system combines the merits of the wide dynamic detection range and high sensitivity offered by the spectral approach with multiplexed high-throughput data collection from a two-dimensional (2D) biosensor array. The key feature is the use of the AOTF to realize a wavelength scan from a white laser source and thus to achieve fast tracking of the SPR dip movement caused by target molecules binding to the sensor surface. Experimental results show that the system is capable of completing an SPR dip measurement within 0.35 s. To the best of our knowledge, this is the fastest time ever reported in the literature for imaging spectral interrogation. Based on a spectral window approximately 100 nm wide, a dynamic detection range of 4.63 × 10⁻² refractive index units (RIU) and a resolution of 1.27 × 10⁻⁶ RIU, achieved in a 2D-array sensor, are reported here. The spectral SPR imaging sensor scheme is capable of performing fast high-throughput detection of biomolecular interactions from 2D sensor arrays. The design has no mechanical moving parts, making the scheme completely solid-state. PMID:28067766
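
    Tracking the dip amounts to locating the reflectance minimum in each scanned spectrum with sub-sample precision. A common approach, shown here as a sketch rather than the authors' algorithm, is a local quadratic fit around the coarse minimum:

```python
import numpy as np

def spr_dip_wavelength(wavelengths, reflectance, half_window=5):
    """Locate the SPR resonance dip by a local quadratic fit around the minimum.

    wavelengths, reflectance: 1-D arrays of the scanned spectrum.
    Returns the interpolated dip wavelength.
    """
    i = int(np.argmin(reflectance))
    lo = max(0, i - half_window)
    hi = min(len(wavelengths), i + half_window + 1)
    # Fit a parabola a*x^2 + b*x + c to the samples around the minimum.
    a, b, _ = np.polyfit(wavelengths[lo:hi], reflectance[lo:hi], 2)
    # The vertex of the parabola is at -b / (2a).
    return -b / (2 * a)
```

The refractive index change is then inferred from the shift of this dip wavelength between scans.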

  3. Snapshot Imaging Spectrometry in the Visible and Long Wave Infrared

    NASA Astrophysics Data System (ADS)

    Maione, Bryan David

    Imaging spectrometry is an optical technique in which the spectral content of an object is measured at each location in space. The main advantage of this modality is that it enables characterization beyond what is possible with a conventional camera, since spectral information is generally related to the chemical composition of the object. Because of this, imaging spectrometers are often capable of detecting targets that are either morphologically inconsistent or even under-resolved. A specific class of imaging spectrometer, known as a snapshot system, seeks to measure all spatial and spectral information simultaneously, thereby avoiding the artifacts associated with scanning designs and enabling the measurement of temporally dynamic scenes. Snapshot designs are the focus of this dissertation. Three designs for snapshot imaging spectrometers are developed, each providing novel contributions to the field of imaging spectrometry. In chapter 2, the first spatially heterodyned snapshot imaging spectrometer is modeled and experimentally validated. Spatial heterodyning is a technique commonly implemented in non-imaging Fourier transform spectrometry; for Fourier transform imaging spectrometers, it improves the spectral resolution trade space. Additionally, in this chapter a neural-network-based spectral calibration is developed and shown to improve on Fourier and linear-operator-based techniques. Leveraging the spatial heterodyning developed in chapter 2, in chapter 3 a high-spectral-resolution snapshot Fourier transform imaging spectrometer, based on a Savart plate interferometer, is developed and experimentally validated. The sensor presented in this chapter has the highest spectral resolution in its class; high spectral resolution enables the sensor to discriminate narrowly spaced spectral lines. The capabilities of neural networks in imaging spectrometry are further explored in this chapter.
Neural networks are used to perform single-target detection on raw instrument data, thereby eliminating the need for an explicit spectral calibration step. As an extension of the results in chapter 2, neural networks are once again demonstrated to improve on linear-operator-based detection. In chapter 4, a non-interferometric design is developed for the long wave infrared (wavelengths spanning 8-12 microns). The imaging spectrometer developed in this chapter is a multi-aperture filtered microbolometer. Because the detector is uncooled, the presented design is ultra-compact and low power. Additionally, cost-effective polymer absorption filters are used in lieu of interference filters. Since each measurement of the system is spectrally multiplexed, an SNR advantage is realized. A theoretical model for the filtered design is developed, and the performance of the sensor for detecting liquid contaminants is investigated. As in past chapters, neural networks are used and achieve false detection rates of less than 1%. Lastly, the dissertation concludes with a discussion of future work and the potential impact of these devices.
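
    The classical baseline that the neural-network calibration is compared against is Fourier inversion of the recorded interferogram. A toy sketch of that step (a single-detector, ideal-sampling simplification of what a Fourier transform imaging spectrometer does per pixel):

```python
import numpy as np

def spectrum_from_interferogram(interferogram):
    """Recover a spectral magnitude estimate from a sampled interferogram.

    Removes the DC bias, then takes the magnitude of the real FFT; each
    output bin corresponds to one optical frequency component.
    """
    x = np.asarray(interferogram, float)
    x = x - x.mean()          # remove the constant (DC) bias term
    return np.abs(np.fft.rfft(x))
```

A monochromatic source produces a cosine interferogram, and its spectrum estimate peaks in the single bin matching the fringe frequency; spatial heterodyning shifts where that fringe frequency falls, which is how it relaxes the sampling burden.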

  4. Image science team

    NASA Technical Reports Server (NTRS)

    Ando, K.

    1982-01-01

    A substantial technology base of solid state pushbroom sensors exists and is in the process of further evolution at both GSFC and JPL. Technologies being developed relate to short wave infrared (SWIR) detector arrays; HgCdTe hybrid detector arrays; InSb linear and area arrays; passive coolers; spectral beam splitters; the deposition of spectral filters on detector arrays; and the functional design of the shuttle/space platform imaging spectrometer (SIS) system. Spatial and spectral characteristics of field, aircraft and space multispectral sensors are summarized. The status, field of view, and resolution of foreign land observing systems are included.

  5. Multi-spectral imaging with infrared sensitive organic light emitting diode

    PubMed Central

    Kim, Do Young; Lai, Tzung-Han; Lee, Jae Woong; Manders, Jesse R.; So, Franky

    2014-01-01

    Commercially available near-infrared (IR) imagers are fabricated by integrating expensive epitaxially grown III-V compound semiconductor sensors with Si-based readout integrated circuits (ROIC) by indium bump bonding, which significantly increases the fabrication costs of these image sensors. Furthermore, these typical III-V compound semiconductors are not sensitive to the visible region and thus cannot be used for multi-spectral (visible to near-IR) sensing. Here, a low-cost infrared (IR) imaging camera is demonstrated with a commercially available digital single-lens reflex (DSLR) camera and an IR-sensitive organic light emitting diode (IR-OLED). With an IR-OLED, IR images at a wavelength of 1.2 µm are directly converted to visible images which are then recorded in a Si-CMOS DSLR camera. This multi-spectral imaging system is capable of capturing images at wavelengths in the near-infrared as well as visible regions. PMID:25091589

  6. Multi-spectral imaging with infrared sensitive organic light emitting diode

    NASA Astrophysics Data System (ADS)

    Kim, Do Young; Lai, Tzung-Han; Lee, Jae Woong; Manders, Jesse R.; So, Franky

    2014-08-01

    Commercially available near-infrared (IR) imagers are fabricated by integrating expensive epitaxially grown III-V compound semiconductor sensors with Si-based readout integrated circuits (ROIC) by indium bump bonding, which significantly increases the fabrication costs of these image sensors. Furthermore, these typical III-V compound semiconductors are not sensitive to the visible region and thus cannot be used for multi-spectral (visible to near-IR) sensing. Here, a low-cost infrared (IR) imaging camera is demonstrated with a commercially available digital single-lens reflex (DSLR) camera and an IR-sensitive organic light emitting diode (IR-OLED). With an IR-OLED, IR images at a wavelength of 1.2 µm are directly converted to visible images which are then recorded in a Si-CMOS DSLR camera. This multi-spectral imaging system is capable of capturing images at wavelengths in the near-infrared as well as visible regions.

  7. HIRIS (High-Resolution Imaging Spectrometer: Science opportunities for the 1990s. Earth observing system. Volume 2C: Instrument panel report

    NASA Technical Reports Server (NTRS)

    1987-01-01

    The high-resolution imaging spectrometer (HIRIS) is an Earth Observing System (EOS) sensor developed for high spatial and spectral resolution. It can acquire more information in the 0.4 to 2.5 micrometer spectral region than any other sensor yet envisioned. Its capability for critical sampling at high spatial resolution makes it an ideal complement to the MODIS (moderate-resolution imaging spectrometer) and HMMR (high-resolution multifrequency microwave radiometer), lower resolution sensors designed for repetitive coverage. With HIRIS it is possible to observe transient processes in a multistage remote sensing strategy for Earth observations on a global scale. The objectives, science requirements, and current sensor design of the HIRIS are discussed along with the synergism of the sensor with other EOS instruments and data handling and processing requirements.

  8. Cross-calibration of the Terra MODIS, Landsat 7 ETM+ and EO-1 ALI sensors using near-simultaneous surface observation over the Railroad Valley Playa, Nevada, test site

    USGS Publications Warehouse

    Chander, G.; Angal, A.; Choi, T.; Meyer, D.J.; Xiong, X.; Teillet, P.M.

    2007-01-01

    A cross-calibration methodology has been developed using coincident image pairs from the Terra Moderate Resolution Imaging Spectroradiometer (MODIS), the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) and the Earth Observing-1 (EO-1) Advanced Land Imager (ALI) to verify the absolute radiometric calibration accuracy of these sensors with respect to each other. To quantify the effects due to different spectral responses, the Relative Spectral Responses (RSR) of these sensors were studied and compared by developing a set of "figures-of-merit." Seven cloud-free scenes collected over the Railroad Valley Playa, Nevada (RVPN), test site were used to conduct the cross-calibration study. This cross-calibration approach was based on image statistics from near-simultaneous observations made by different satellite sensors. Homogeneous regions of interest (ROI) were selected in the image pairs, and the mean target statistics were converted to absolute units of at-sensor reflectance. Using these reflectances, a set of cross-calibration equations was developed, giving a relative gain and bias between the sensor pair.
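
    The cross-calibration equations amount to a linear fit between the matched ROI reflectances of the two sensors, one pair of values per homogeneous region. A minimal sketch:

```python
import numpy as np

def cross_calibrate(ref_reflectance, test_reflectance):
    """Fit test ~= gain * ref + bias over matched ROI mean reflectances.

    Each element of the inputs is the mean at-sensor reflectance of one
    homogeneous ROI as seen by the reference and test sensors.
    """
    gain, bias = np.polyfit(np.asarray(ref_reflectance, float),
                            np.asarray(test_reflectance, float), 1)
    return gain, bias
```

A gain near 1 and bias near 0 indicates the two sensors' absolute calibrations agree over the band pair; systematic departures quantify the relative calibration difference.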

  9. The Intercalibration of Geostationary Visible Imagers Using Operational Hyperspectral SCIAMACHY Radiances

    NASA Technical Reports Server (NTRS)

    Doelling, David R.; Scarino, Benjamin R.; Morstad, Daniel; Gopalan, Arun; Bhatt, Rajendra; Lukashin, Constantine; Minnis, Patrick

    2013-01-01

    Spectral band differences between sensors can complicate the process of intercalibrating a visible sensor against a reference sensor. This is best addressed by using a hyperspectral reference sensor whenever possible, because its spectra can be used to accurately mitigate the band differences. This paper demonstrates the feasibility of using operational Scanning Imaging Absorption Spectrometer for Atmospheric Cartography (SCIAMACHY) large-footprint hyperspectral radiances to calibrate geostationary Earth-observing (GEO) sensors. Near-simultaneous nadir overpass measurements were used to compare the temporal calibration of SCIAMACHY with Aqua Moderate Resolution Imaging Spectroradiometer band radiances, which were found to be consistent to within 0.44% over seven years. An operational SCIAMACHY/GEO ray-matching technique was presented, along with enhancements to improve radiance pair sampling. These enhancements did not bias the underlying intercalibration and provided enough sampling to allow up to monthly monitoring of GEO sensor degradation. The results of the SCIAMACHY/GEO intercalibration were compared with other operational four-year Meteosat-9 0.65-µm calibration coefficients and were found to agree within 1% in gain; more importantly, this method had one of the lowest temporal standard errors of all the methods. This is most likely because the GEO spectral response function could be directly applied to the SCIAMACHY radiances, whereas the other operational methods inferred a spectral correction factor. This method also allows validation of the spectral corrections required by other methods.
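
    Applying the GEO spectral response function directly to the hyperspectral radiances means band-integrating them with the SRF. A sketch of that step, assuming both are sampled on the same wavelength grid (a discrete weighted mean stands in for the continuous integral):

```python
import numpy as np

def simulate_band_radiance(hyper_radiance, srf):
    """Simulate a broadband sensor's radiance from hyperspectral samples.

    hyper_radiance: radiances on a wavelength grid.
    srf: the broadband sensor's relative spectral response on the same grid.
    Returns the SRF-weighted mean radiance for the band.
    """
    w = np.asarray(srf, float)
    L = np.asarray(hyper_radiance, float)
    return np.sum(w * L) / np.sum(w)
```

Because the hyperspectral reference resolves the spectrum, this weighting reproduces what the GEO band would have measured, removing the need for an inferred spectral correction factor.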

  10. Computational multispectral video imaging [Invited].

    PubMed

    Wang, Peng; Menon, Rajesh

    2018-01-01

    Multispectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information to a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrated spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
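
    The inversion step described above can be sketched as Tikhonov-regularized least squares, assuming the calibration step has produced a linear code matrix mapping spectra to sensor pixels (a simplification of the paper's full spatio-spectral reconstruction):

```python
import numpy as np

def invert_spectral_code(A, y, lam=1e-3):
    """Regularized recovery of a spectrum x from a coded measurement y ~= A @ x.

    A: (pixels x bands) calibrated spectral-to-pixel code matrix.
    y: measured pixel values.
    lam: Tikhonov regularization weight stabilizing the inversion.
    """
    A = np.asarray(A, float)
    AtA = A.T @ A + lam * np.eye(A.shape[1])
    return np.linalg.solve(AtA, A.T @ np.asarray(y, float))
```

The regularization weight trades noise amplification against spectral fidelity; in practice it is tuned against the calibration data.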

  11. IMAGING SPECTROSCOPY FOR DETERMINING RANGELAND STRESSORS TO WESTERN WATERSHEDS

    EPA Science Inventory

    The Environmental Protection Agency is developing rangeland ecological indicators in twelve western states using advanced remote sensing techniques. Fine spectral resolution (hyperspectral) sensors, or imaging spectrometers, can detect the subtle spectral features that make veget...

  12. IMAGING SPECTROSCOPY FOR DETERMINING RANGELAND STRESSORS TO WESTERN WATERSHEDS

    EPA Science Inventory

    The Environmental Protection Agency is developing rangeland ecological indicators in eleven western states using advanced remote sensing systems. Fine spectral resolution (hyperspectral) sensors, or imaging spectrometers, can detect the subtle spectral features that make vegetatio...

  13. Enhancing hyperspectral spatial resolution using multispectral image fusion: A wavelet approach

    NASA Astrophysics Data System (ADS)

    Jazaeri, Amin

    High spectral and spatial resolution images have a significant impact on remote sensing applications. Because both the spatial and spectral resolutions of spaceborne sensors are fixed by design and cannot be increased after launch, techniques such as image fusion must be applied to achieve such goals. This dissertation introduces the concept of wavelet fusion between hyperspectral and multispectral sensors in order to enhance the spectral and spatial resolution of a hyperspectral image. To test the robustness of this concept, images from Hyperion (a hyperspectral sensor) and the Advanced Land Imager (a multispectral sensor) were first co-registered and then fused using different wavelet algorithms. A regression-based fusion algorithm was also implemented for comparison purposes. The results show that images fused using a combined bi-linear wavelet-regression algorithm have less error than those from other methods when compared to the ground truth. In addition, the combined regression-wavelet algorithm shows more immunity to misalignment of the pixels due to the lack of proper registration. The quantitative measures of average mean square error show that the performance of wavelet-based methods degrades when the spatial resolution of the hyperspectral image falls to one-eighth that of the corresponding multispectral image. Regardless of which method of fusion is utilized, the main challenge in image fusion is image registration, which is also a very time-intensive process. Because the combined regression-wavelet technique is computationally expensive, a hybrid technique based on regression and wavelet methods was also implemented to decrease computational overhead. However, the gain in faster computation was offset by the introduction of more error in the outcome.
The secondary objective of this dissertation is to examine the feasibility and sensor requirements of image fusion for future NASA missions, with the goal of performing onboard image fusion. In this process, the main challenge of image registration was resolved by registering the input images using transformation matrices from previously acquired data. The composite image resulting from the fusion process closely matched the ground truth, indicating the possibility of real-time onboard fusion processing.
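
    One simple instance of wavelet fusion, shown here as a sketch with a single-level Haar transform (the dissertation's algorithms are more elaborate), keeps the low-frequency content of the upsampled hyperspectral band and substitutes the detail subbands of the co-registered high-resolution image:

```python
import numpy as np

def haar2(img):
    """Single-level 2-D Haar transform: returns (LL, (LH, HL, HH)) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2      # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2      # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, (LH, HL, HH)

def ihaar2(LL, details):
    """Inverse of haar2."""
    LH, HL, HH = details
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    a[:, 0::2] = LL + LH
    a[:, 1::2] = LL - LH
    d = np.empty_like(a)
    d[:, 0::2] = HL + HH
    d[:, 1::2] = HL - HH
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

def fuse(hyper_band_up, multi_pan):
    """Keep the hyperspectral approximation subband; take the detail
    subbands from the co-registered higher-resolution image."""
    LL_h, _ = haar2(hyper_band_up)
    _, det_m = haar2(multi_pan)
    return ihaar2(LL_h, det_m)
```

Applied per hyperspectral band, this injects spatial detail while leaving the coarse spectral content of each band untouched, which is the basic trade the dissertation evaluates against regression-based fusion.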

  14. On-orbit characterization of hyperspectral imagers

    NASA Astrophysics Data System (ADS)

    McCorkel, Joel

    The Remote Sensing Group (RSG) at the University of Arizona has a long history of using ground-based test sites for the calibration of airborne and satellite-based sensors. Ground-truth measurements at these test sites are not always successful due to weather and funding availability. Therefore, RSG has also employed automated ground instrument approaches and cross-calibration methods to verify the radiometric calibration of a sensor. The goal of the cross-calibration method is to transfer the calibration of a well-known sensor to that of a different sensor. This dissertation presents a method for determining the radiometric calibration of a hyperspectral imager using multispectral imagery. The work relies on a multispectral sensor, the Moderate-resolution Imaging Spectroradiometer (MODIS), as a reference for the hyperspectral sensor Hyperion. Test sites used for comparisons are Railroad Valley in Nevada and a portion of the Libyan Desert in North Africa. A method to predict hyperspectral surface reflectance using a combination of MODIS data and spectral shape information is developed and applied to the characterization of Hyperion. Spectral shape information is based on RSG's historical in situ data for the Railroad Valley test site and spectral library data for the Libyan test site. Average atmospheric parameters, also based on historical measurements, are used in reflectance prediction and transfer to space. Results of several cross-calibration scenarios that differ in image acquisition coincidence, test site, and reference sensor are found for the characterization of Hyperion. These are compared with results from the reflectance-based approach of vicarious calibration, a well-documented method developed by the RSG that serves as a performance baseline for the cross-calibration method developed here. Cross-calibration provides results that are within 2% of the reflectance-based results in most spectral regions.
Larger disagreements exist for shorter wavelengths studied in this work as well as in spectral areas that experience absorption by the atmosphere.

  15. Method and algorithm for efficient calibration of compressive hyperspectral imaging system based on a liquid crystal retarder

    NASA Astrophysics Data System (ADS)

    Shecter, Liat; Oiknine, Yaniv; August, Isaac; Stern, Adrian

    2017-09-01

    Recently we presented a Compressive Sensing Miniature Ultra-spectral Imaging System (CS-MUSI). This system consists of a single liquid crystal (LC) phase retarder as a spectral modulator and a gray-scale sensor array that captures a multiplexed signal of the imaged scene. By designing the LC spectral modulator in compliance with compressive sensing (CS) guidelines and applying appropriate algorithms, we demonstrated reconstruction of spectral (hyper/ultra) datacubes from an order of magnitude fewer samples than taken by conventional sensors. The LC modulator is designed to have an effective width of a few tens of micrometers; therefore it is prone to imperfections and spatial nonuniformity. In this work, we study this nonuniformity and present a mathematical algorithm that allows the inference of the spectral transmission over the entire cell area from only a few calibration measurements.

  16. Multichannel imager for littoral zone characterization

    NASA Astrophysics Data System (ADS)

    Podobna, Yuliya; Schoonmaker, Jon; Dirbas, Joe; Sofianos, James; Boucher, Cynthia; Gilbert, Gary

    2010-04-01

    This paper describes an approach to utilizing a multi-channel, multi-spectral electro-optic (EO) system for littoral zone characterization. Advanced Coherent Technologies, LLC (ACT) presents its EO sensor systems for surf zone environmental assessment and potential surf zone target detection. Specifically, an approach is presented to determine a Surf Zone Index (SZI) from the multi-spectral EO sensor system. The SZI provides a single quantitative value of the surf zone conditions, delivering an immediate understanding of the area and an assessment of how well an airborne optical system might perform in a mine countermeasures (MCM) operation. Utilizing consecutive frames of SZI images, ACT is able to measure variability over time. A surf zone nomograph, which incorporates target, sensor, and environmental data (including the SZI) to determine the environmental impact on system performance, is also reviewed in this work. ACT's electro-optical multi-channel, multi-spectral imaging system and test results are presented and discussed.

  17. Continuous non-invasive blood glucose monitoring by spectral image differencing method

    NASA Astrophysics Data System (ADS)

    Huang, Hao; Liao, Ningfang; Cheng, Haobo; Liang, Jing

    2018-01-01

    Currently, the implantable enzyme electrode sensor is the main method for continuous blood glucose monitoring. But electrochemical reactions and the significant drift caused by bioelectricity in the body reduce the accuracy of the glucose measurements, so enzyme-based glucose sensors need to be calibrated several times each day by finger-prick blood corrections, which increases the patient's pain. In this paper, we propose a method for continuous non-invasive blood glucose monitoring by spectral image differencing in the near-infrared band. The method uses a high-precision CCD detector that switches filters within a very short period of time to obtain the spectral images. A morphological method is then used to obtain the spectral image differences, and the dynamic change of blood glucose is reflected in the image difference data. Experiments showed that this method can be used to monitor blood glucose dynamically to a certain extent.
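
    The differencing step can be sketched as thresholding the absolute spectral difference between frames and applying a morphological opening to suppress isolated noise pixels. This is an illustrative reconstruction; the paper's actual processing details are not specified here:

```python
import numpy as np

def binary_opening(mask, iterations=1):
    """Morphological opening (erosion then dilation) with a 3x3 cross."""
    def neighbors(m):
        p = np.pad(m, 1)  # pad with False so borders erode conservatively
        return np.stack([p[1:-1, 1:-1],            # center
                         p[:-2, 1:-1], p[2:, 1:-1],  # up, down
                         p[1:-1, :-2], p[1:-1, 2:]]) # left, right
    e = mask
    for _ in range(iterations):
        e = neighbors(e).all(axis=0)   # erosion: all cross pixels set
    d = e
    for _ in range(iterations):
        d = neighbors(d).any(axis=0)   # dilation: any cross pixel set
    return d

def change_mask(img_t0, img_t1, thresh):
    """Threshold the absolute spectral difference, then drop isolated pixels."""
    diff = np.abs(np.asarray(img_t1, float) - np.asarray(img_t0, float))
    return binary_opening(diff > thresh)
```

The opening removes single-pixel detector noise while preserving the core of any spatially coherent change between the two spectral images.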

  18. Application of Sensor Fusion to Improve Uav Image Classification

    NASA Astrophysics Data System (ADS)

    Jabari, S.; Fathollahi, F.; Zhang, Y.

    2017-08-01

    Image classification is one of the most important tasks of remote sensing projects, including those based on UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality, which in turn increases the accuracy of image classification. Here, we tested two sensor fusion configurations by using a panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to those acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on board in UAV missions and performing image fusion can help achieve higher-quality images and, accordingly, more accurate classification results.
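
    A minimal sketch of one classical Pan/MS fusion scheme, the Brovey ratio, shown for illustration rather than as the configuration the authors used:

```python
import numpy as np

def brovey_sharpen(ms, pan):
    """Brovey-style fusion: scale each co-registered multispectral band by the
    ratio of the panchromatic intensity to the mean of the MS bands.

    ms: array of shape (bands, H, W), upsampled to the Pan grid.
    pan: array of shape (H, W).
    """
    ms = np.asarray(ms, float)
    pan = np.asarray(pan, float)
    intensity = ms.mean(axis=0)
    eps = 1e-12                       # avoid division by zero in dark pixels
    return ms * (pan / (intensity + eps))
```

The ratio injects the Pan camera's sharper spatial detail while preserving the relative band values that carry the spectral information used by the classifier.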

  19. Radiometric cross-calibration of the Terra MODIS and Landsat 7 ETM+ using an invariant desert site

    USGS Publications Warehouse

    Choi, T.; Angal, A.; Chander, G.; Xiong, X.

    2008-01-01

    A methodology for long-term radiometric cross-calibration between the Terra Moderate Resolution Imaging Spectroradiometer (MODIS) and Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) sensors was developed. The approach involves calibration of near-simultaneous surface observations between 2000 and 2007. Fifty-seven cloud-free image pairs were carefully selected over the Libyan desert for this study. The Libyan desert site (+28.55°, +23.39°), located in northern Africa, is a high-reflectance site with high spatial, spectral, and temporal uniformity. Because the test site covers about 12 km × 13 km, accurate geometric preprocessing is required to match the footprint size between the two sensors and avoid uncertainties due to residual image misregistration. MODIS Level 1B radiometrically corrected products were reprojected to the corresponding ETM+ image's Universal Transverse Mercator (UTM) grid projection. The 30 m pixels from the ETM+ images were aggregated to match the MODIS spatial resolution (250 m in Bands 1 and 2, or 500 m in Bands 3 to 7). The image data from both sensors were converted to absolute units of at-sensor radiance and top-of-atmosphere (TOA) reflectance for the spectrally matching band pairs. For each band pair, a set of fitted coefficients (slope and offset) is provided to quantify the relationship between the sensors. This work focuses on the long-term stability and correlation of the Terra MODIS and L7 ETM+ sensors using absolute calibration results over the entire mission of the two sensors. Possible uncertainties are also discussed, such as spectral differences in matching band pairs, solar zenith angle change during a collection, and differences in solar irradiance models.
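
    The pixel aggregation step can be sketched as block averaging by an integer factor. This is a simplification: the true 30 m to 250 m or 500 m ratio is not an integer, so the real workflow also involves reprojection onto the MODIS footprint:

```python
import numpy as np

def aggregate(img, factor):
    """Block-average a fine-resolution image by an integer factor.

    Crops the image to a whole number of blocks, then averages each
    factor x factor block into one coarse pixel.
    """
    h, w = img.shape
    h2, w2 = h - h % factor, w - w % factor
    blocks = img[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))
```

After aggregation, both sensors sample comparable footprints and the per-band slope/offset fit can be computed on matched pixels.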

  20. Infrared hyperspectral imaging miniaturized for UAV applications

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Hinnrichs, Bradford; McCutchen, Earl

    2017-02-01

    Pacific Advanced Technology (PAT) has developed an infrared hyperspectral camera, both MWIR and LWIR, small enough to serve as a payload on miniature unmanned aerial vehicles. The optical system has been integrated into the cold-shield of the sensor, enabling the small size and weight of the sensor. This new and innovative approach to the infrared hyperspectral imaging spectrometer uses micro-optics and is explained in this paper. The micro-optics are made up of an area array of diffractive optical elements, where each element is tuned to image a different spectral region on a common focal plane array. The lenslet array is embedded in the cold-shield of the sensor and actuated with a miniature piezo-electric motor. This approach enables rapid infrared spectral imaging, with multiple spectral images collected and processed simultaneously in each frame of the camera. This paper presents our optical-mechanical design approach, which results in an infrared hyperspectral imaging system small enough to be a payload on a mini-UAV or commercial quadcopter. An example is also given of how this technology can be used to quantify a hydrocarbon gas leak's volume and mass flow rates. The diffractive optical elements used in the lenslet array are blazed gratings, where each lenslet is tuned for a different spectral bandpass. The lenslets are configured in an area array placed a few millimeters above the focal plane and embedded in the cold-shield to reduce the background signal normally associated with the optics. We have developed various systems using different numbers of lenslets in the area array. The size of the focal plane and the diameter of the lenslet array determine the spatial resolution. A 2 x 2 lenslet array will image four different spectral images of the scene each frame and, when coupled with a 512 x 512 focal plane array, will give a spatial resolution of 256 x 256 pixels for each spectral image. Another system that we developed uses a 4 x 4 lenslet array on a 1024 x 1024 pixel focal plane array, which gives 16 spectral images of 256 x 256 pixel resolution each frame.
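
    The sub-image geometry described above can be sketched directly: each lenslet images the scene onto its own tile of the shared focal plane, so a frame from an n x n lenslet system splits into n² spectral sub-images.

```python
import numpy as np

def split_subimages(frame, n):
    """Split a focal-plane frame into an n x n grid of spectral sub-images,
    one per lenslet (e.g. 2x2 lenslets on a 512x512 array give four
    256x256 spectral images)."""
    h, w = frame.shape
    sh, sw = h // n, w // n
    return [frame[i * sh:(i + 1) * sh, j * sw:(j + 1) * sw]
            for i in range(n) for j in range(n)]
```

Because all tiles come from one readout, the spectral images are inherently co-temporal; the cost is the n-fold reduction in per-band spatial resolution noted in the abstract.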

  1. Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping.

    PubMed

    Borra-Serrano, Irene; Peña, José Manuel; Torres-Sánchez, Jorge; Mesas-Carrascosa, Francisco Javier; López-Granados, Francisca

    2015-08-12

    Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds at very early phenological stages are similar both spectrally and in appearance, three major components are relevant: spatial resolution, type of sensor and classification algorithm. Resampling is a technique to create a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled images (RS-images) created from real UAV-images (UAV-images; the UAVs were equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra) captured at different altitudes is examined to test the quality of the RS-image output. The performance of the object-based image analysis (OBIA) implemented for early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high spatial resolution UAV-images taken at an altitude of 30 m, and that the RS-image data at altitudes of 60 and 100 m were able to provide accurate weed cover and herbicide application maps compared with UAV-images from real flights.
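
    The resampling operation itself can be sketched as nearest-neighbour index mapping to a new pixel grid (one of several interpolation choices; the paper's exact resampling kernel is not assumed here):

```python
import numpy as np

def resample_nearest(img, scale):
    """Nearest-neighbour resampling to a new width/height in pixels.

    scale < 1 shrinks the image, e.g. scale=0.5 simulates imagery captured
    from twice the flight altitude.
    """
    h, w = img.shape[:2]
    nh = max(1, int(round(h * scale)))
    nw = max(1, int(round(w * scale)))
    rows = np.minimum((np.arange(nh) / scale).astype(int), h - 1)
    cols = np.minimum((np.arange(nw) / scale).astype(int), w - 1)
    return img[np.ix_(rows, cols)]
```

Comparing classifications of such simulated low-altitude products against real flights at 60 and 100 m is what lets the study separate the effect of spatial resolution from the other flight variables.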

  2. Spatial Quality Evaluation of Resampled Unmanned Aerial Vehicle-Imagery for Weed Mapping

    PubMed Central

    Borra-Serrano, Irene; Peña, José Manuel; Torres-Sánchez, Jorge; Mesas-Carrascosa, Francisco Javier; López-Granados, Francisca

    2015-01-01

    Unmanned aerial vehicles (UAVs) combined with different spectral range sensors are an emerging technology for providing early weed maps for optimizing herbicide applications. Considering that weeds at very early phenological stages are similar both spectrally and in appearance, three major components are relevant: spatial resolution, type of sensor and classification algorithm. Resampling is a technique to create a new version of an image with a different width and/or height in pixels, and it has been used in satellite imagery with different spatial and temporal resolutions. In this paper, the efficiency of resampled images (RS-images) created from real UAV-images (UAV-images; the UAVs were equipped with two types of sensors, i.e., visible and visible plus near-infrared spectra) captured at different altitudes is examined to test the quality of the RS-image output. The performance of the object-based image analysis (OBIA) implemented for early weed mapping using different weed thresholds was also evaluated. Our results showed that resampling accurately extracted the spectral values from high spatial resolution UAV-images taken at an altitude of 30 m, and that the RS-image data at altitudes of 60 and 100 m were able to provide accurate weed cover and herbicide application maps compared with UAV-images from real flights. PMID:26274960

  3. A Spatio-Spectral Camera for High Resolution Hyperspectral Imaging

    NASA Astrophysics Data System (ADS)

    Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.

    2017-08-01

    Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.

  4. Wide field-of-view dual-band multispectral muzzle flash detection

    NASA Astrophysics Data System (ADS)

    Montoya, J.; Melchor, J.; Spiliotis, P.; Taplin, L.

    2013-06-01

    Sensor technologies are undergoing revolutionary advances, as seen in the rapid growth of multispectral methodologies. Increases in spatial, spectral, and temporal resolution, and in breadth of spectral coverage, render feasible sensors that function with unprecedented performance. A system was developed that addresses many of the key hardware requirements for a practical dual-band multispectral acquisition system, including wide field of view and spectral/temporal shift between the dual bands. The system was designed using a novel dichroic beam splitter and dual band-pass filter configuration that creates two side-by-side images of a scene on a single sensor. A high-speed CMOS sensor was used to simultaneously capture data from the entire scene in both spectral bands using a short focal-length lens that provided a wide field of view. The beam-splitter components were arranged such that the two images were maintained in optical alignment, so real-time inter-band processing could be carried out using only simple arithmetic on the image halves. An experiment was performed to probe the system's limitations against multispectral detection requirements; it characterized the system's low spectral variation across its wide field of view. This paper provides lessons learned on the general limitations of the key hardware components required for multispectral muzzle flash detection, using the system as a hardware example combined with simulated multispectral muzzle flash and background signatures.
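The "simple arithmetic on the image halves" enabled by the side-by-side layout can be sketched as follows (a minimal illustration assuming the two band images occupy the left and right halves of the frame and are already optically aligned):

```python
import numpy as np

def band_difference(frame):
    """Split a frame into left/right band images and difference them."""
    h, w = frame.shape
    band_a = frame[:, : w // 2].astype(np.int32)   # widen to avoid uint wraparound
    band_b = frame[:, w // 2 :].astype(np.int32)
    return band_a - band_b

frame = np.zeros((4, 8), dtype=np.uint16)
frame[:, :4] = 300                     # band A half of the sensor
frame[:, 4:] = 100                     # band B half of the sensor
diff = band_difference(frame)          # per-pixel inter-band difference
```

Because both halves come from one sensor readout, no registration step is needed before the subtraction.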

  5. Imaging spectroscopy using embedded diffractive optical arrays

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Hinnrichs, Bradford

    2017-09-01

    Pacific Advanced Technology (PAT) has developed an infrared hyperspectral camera based on diffractive optic arrays. This approach to hyperspectral imaging has been demonstrated in all three infrared bands: SWIR, MWIR and LWIR. The hyperspectral optical system has been integrated into the cold-shield of the sensor, enabling the small size and weight of this infrared hyperspectral sensor. This new and innovative approach to an infrared hyperspectral imaging spectrometer uses micro-optics made up of an area array of diffractive optical elements, where each element is tuned to image a different spectral region on a common focal plane array. The lenslet array is embedded in the cold-shield of the sensor and actuated with a miniature piezo-electric motor. This approach enables rapid infrared spectral imaging, with multiple spectral images collected and processed simultaneously in each frame of the camera. This paper presents our optical-mechanical design approach, which results in an infrared hyperspectral imaging system small enough for a payload on a small satellite, mini-UAV, commercial quadcopter, or man-portable platform. We also present an application in which this spectral imaging technology is used to quantify the mass and volume flow rates of hydrocarbon gases. The diffractive optical elements used in the lenslet array are blazed gratings, where each lenslet is tuned for a different spectral bandpass. The lenslets are configured in an area array placed a few millimeters above the focal plane and embedded in the cold-shield to reduce the background signal normally associated with the optics. The detector array is divided into sub-images, one covered by each lenslet. We have developed various systems using different numbers of lenslets in the area array. The size of the focal plane and the diameter of the lenslets determine the number of different spectral images collected simultaneously in each frame of the camera. A 2 x 2 lenslet array images four different spectral images of the scene each frame and, when coupled with a 512 x 512 focal plane array, gives a spatial resolution of 256 x 256 pixels for each spectral image. Another system that we developed uses a 4 x 4 lenslet array on a 1024 x 1024 pixel focal plane array, which gives 16 spectral images of 256 x 256 pixel resolution in each frame. This system spans the SWIR and MWIR bands with a single optical array and focal plane array.
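The sub-image geometry described above reduces to slicing the focal plane array into an N x N grid, which can be sketched directly from the numbers in the text:

```python
import numpy as np

def split_subimages(fpa, n):
    """Return the n*n spectral sub-images of a square focal plane array."""
    size = fpa.shape[0] // n
    return [fpa[r * size:(r + 1) * size, c * size:(c + 1) * size]
            for r in range(n) for c in range(n)]

frame = np.zeros((512, 512))           # a 512 x 512 focal plane array
subs = split_subimages(frame, 2)       # 2 x 2 lenslet array
# each of the 4 spectral sub-images is 256 x 256, matching the text
```

The 4 x 4 case on a 1024 x 1024 array likewise yields 16 sub-images of 256 x 256 pixels each.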

  6. Landsat multispectral sharpening using a sensor system model and panchromatic image

    USGS Publications Warehouse

    Lemeshewsky, G.P.; ,

    2003-01-01

    The thematic mapper (TM) sensor aboard Landsats 4 and 5 and the enhanced TM plus (ETM+) on Landsat 7 collect imagery at 30-m sample distance in six spectral bands. New with ETM+ is a 15-m panchromatic (P) band. With image sharpening techniques, this higher resolution P data, or as an alternative, the 10-m (or 5-m) P data of the SPOT satellite, can increase the spatial resolution of the multispectral (MS) data. Sharpening requires that the lower resolution MS image be coregistered and resampled to the P data before high spatial frequency information is transferred to the MS data. For visual interpretation and machine classification tasks, it is important that the sharpened data preserve the spectral characteristics of the original low resolution data. A technique was developed for sharpening (in this case, 3:1 spatial resolution enhancement) visible spectral band data, based on a model of the sensor system point spread function (PSF), in order to maintain spectral fidelity. It combines high-pass (HP) filter sharpening methods with iterative image restoration to reduce degradations caused by sensor-system-induced blurring and resampling. There is also a spectral fidelity requirement: the sharpened MS data, when filtered by the modeled degradations, should reproduce the low resolution source MS data. Quantitative evaluation of sharpening performance was made by using simulated low resolution data generated from digital color-IR aerial photography. In comparison to the HP-filter-based sharpening method, results for the technique in this paper with simulated data show improved spectral fidelity. Preliminary results with TM 30-m visible band data sharpened with simulated 10-m panchromatic data are promising but require further study.
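The baseline high-pass (HP) filter sharpening the paper compares against can be sketched as follows (the 3x3 box low-pass and unit weight are illustrative assumptions; the paper's PSF-based restoration is not reproduced here):

```python
import numpy as np

def hp_sharpen(ms_up, pan, weight=1.0):
    """Add the pan band's high-pass detail to a co-registered, upsampled MS band."""
    # 3x3 box blur as a crude low-pass of the pan image (edge-padded)
    pad = np.pad(pan, 1, mode="edge")
    low = sum(pad[r:r + pan.shape[0], c:c + pan.shape[1]]
              for r in range(3) for c in range(3)) / 9.0
    return ms_up + weight * (pan - low)

# a flat pan image carries no high-frequency detail, so the MS band is unchanged
sharp = hp_sharpen(np.ones((4, 4)), np.full((4, 4), 5.0))
```

Only the pan band's high-frequency residual is injected, which is why the low-frequency (spectral) content of the MS band is nominally preserved.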

  7. Accurate reconstruction of hyperspectral images from compressive sensing measurements

    NASA Astrophysics Data System (ADS)

    Greer, John B.; Flake, J. C.

    2013-05-01

    The emerging field of Compressive Sensing (CS) provides a new way to capture data by shifting the heaviest burden of data collection from the sensor to the computer on the user-end. This new means of sensing requires fewer measurements for a given amount of information than traditional sensors. We investigate the efficacy of CS for capturing HyperSpectral Imagery (HSI) remotely. We also introduce a new family of algorithms for constructing HSI from CS measurements with Split Bregman Iteration [Goldstein and Osher, 2009]. These algorithms combine spatial Total Variation (TV) with smoothing in the spectral dimension. We examine models for three different CS sensors: the Coded Aperture Snapshot Spectral Imager-Single Disperser (CASSI-SD) [Wagadarikar et al., 2008] and Dual Disperser (CASSI-DD) [Gehm et al., 2007] cameras, and a hypothetical random sensing model closer to CS theory, but not necessarily implementable with existing technology. We simulate the capture of remotely sensed images by applying the sensor forward models to well-known HSI scenes - an AVIRIS image of Cuprite, Nevada and the HYMAP Urban image. To measure accuracy of the CS models, we compare the scenes constructed with our new algorithm to the original AVIRIS and HYMAP cubes. The results demonstrate the possibility of accurately sensing HSI remotely with significantly fewer measurements than standard hyperspectral cameras.

  8. Use of the Airborne Visible/Infrared Imaging Spectrometer to calibrate the optical sensor on board the Japanese Earth Resources Satellite-1

    NASA Technical Reports Server (NTRS)

    Green, Robert O.; Conel, James E.; Vandenbosch, Jeannette; Shimada, Masanobu

    1993-01-01

    We describe an experiment to calibrate the optical sensor (OPS) on board the Japanese Earth Resources Satellite-1 with data acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). On 27 Aug. 1992 both the OPS and AVIRIS acquired data concurrently over a calibration target on the surface of Rogers Dry Lake, California. The high spectral resolution measurements of AVIRIS have been convolved to the spectral response curves of the OPS. These data in conjunction with the corresponding OPS digitized numbers have been used to generate the radiometric calibration coefficients for the eight OPS bands. This experiment establishes the suitability of AVIRIS for the calibration of spaceborne sensors in the 400 to 2500 nm spectral region.
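The band-averaging step described above, convolving high-resolution AVIRIS spectra with a broadband sensor's response, can be sketched as follows (the triangular response curve is a stand-in, not the actual OPS band shape):

```python
import numpy as np

def band_average(radiance, rsr):
    """Discrete band-averaged radiance: sum(L * RSR) / sum(RSR)."""
    return np.sum(radiance * rsr) / np.sum(rsr)

wl = np.linspace(400.0, 2500.0, 211)                       # nm, 10 nm steps
radiance = np.full_like(wl, 80.0)                          # flat test spectrum
rsr = np.clip(1.0 - np.abs(wl - 560.0) / 50.0, 0.0, 1.0)   # assumed band shape
L_band = band_average(radiance, rsr)
```

Dividing such a band-averaged radiance by the corresponding OPS digital number would yield a radiometric calibration coefficient for that band.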

  9. Classification of Hyperspectral or Trichromatic Measurements of Ocean Color Data into Spectral Classes.

    PubMed

    Prasad, Dilip K; Agarwal, Krishna

    2016-03-22

    We propose a method for classifying radiometric oceanic color data measured by hyperspectral satellite sensors into known spectral classes, irrespective of the downwelling irradiance of the particular day, i.e., the illumination conditions. The focus is not on retrieving the inherent optical properties but to classify the pixels according to the known spectral classes of the reflectances from the ocean. The method compensates for the unknown downwelling irradiance by white balancing the radiometric data at the ocean pixels using the radiometric data of bright pixels (typically from clouds). The white-balanced data is compared with the entries in a pre-calibrated lookup table in which each entry represents the spectral properties of one class. The proposed approach is tested on two datasets of in situ measurements and 26 different daylight illumination spectra for medium resolution imaging spectrometer (MERIS), moderate-resolution imaging spectroradiometer (MODIS), sea-viewing wide field-of-view sensor (SeaWiFS), coastal zone color scanner (CZCS), ocean and land colour instrument (OLCI), and visible infrared imaging radiometer suite (VIIRS) sensors. Results are also shown for CIMEL's SeaPRISM sun photometer sensor used on-board field trips. Accuracy of more than 92% is observed on the validation dataset and more than 86% is observed on the other dataset for all satellite sensors. The potential of applying the algorithms to non-satellite and non-multi-spectral sensors mountable on airborne systems is demonstrated by showing classification results for two consumer cameras. Classification on actual MERIS data is also shown. Additional results comparing the spectra of remote sensing reflectance with level 2 MERIS data and chlorophyll concentration estimates of the data are included.
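The white-balancing-then-lookup idea above can be sketched numerically (an illustrative normalization only, not the authors' exact formulation; the toy class spectra and illumination below are invented):

```python
import numpy as np

def classify_white_balanced(ocean_px, bright_px, lut):
    """Return the index of the best-matching spectral class after white balancing."""
    wb = ocean_px / bright_px                     # cancel unknown illumination
    wb = wb / np.linalg.norm(wb)                  # compare shapes, not magnitudes
    lut_n = lut / np.linalg.norm(lut, axis=1, keepdims=True)
    return int(np.argmin(np.linalg.norm(lut_n - wb, axis=1)))

lut = np.array([[1.0, 2.0, 3.0],                  # toy class spectra
                [3.0, 2.0, 1.0]])
illumination = np.array([2.0, 4.0, 8.0])          # unknown downwelling spectrum
ocean_px = lut[1] * illumination                  # class 1 seen under it
cls = classify_white_balanced(ocean_px, illumination, lut)
```

Because the bright-pixel spectrum divides out the illumination, the same class is recovered regardless of the daylight spectrum of the particular day.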

  10. Designing a practical system for spectral imaging of skylight.

    PubMed

    López-Alvarez, Miguel A; Hernández-Andrés, Javier; Romero, Javier; Lee, Raymond L

    2005-09-20

    In earlier work [J. Opt. Soc. Am. A 21, 13-23 (2004)], we showed that a combination of linear models and optimum Gaussian sensors obtained by an exhaustive search can recover daylight spectra reliably from broadband sensor data. Thus our algorithm and sensors could be used to design an accurate, relatively inexpensive system for spectral imaging of daylight. Here we improve our simulation of the multispectral system by (1) considering the different kinds of noise inherent in electronic devices such as charge-coupled devices (CCDs) or complementary metal-oxide-semiconductor (CMOS) sensors and (2) extending our research to a different kind of natural illumination, skylight. Because exhaustive searches are expensive computationally, here we switch to a simulated annealing algorithm to define the optimum sensors for recovering skylight spectra. The annealing algorithm requires us to minimize a single cost function, and so we develop one that calculates both the spectral and colorimetric similarity of any pair of skylight spectra. We show that the simulated annealing algorithm yields results similar to the exhaustive search but with much less computational effort. Our technique lets us study the properties of optimum sensors in the presence of noise, one side effect of which is that adding more sensors may not improve the spectral recovery.
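The Gaussian sensor model at the heart of the search above can be sketched as follows (center and width values are arbitrary examples; the cost function and annealing schedule are omitted):

```python
import numpy as np

def gaussian_sensor_response(wl, spectrum, center, width):
    """Response of a broadband sensor with aAussian spectral sensitivity curve,
    normalized so a flat spectrum returns its own level."""
    sens = np.exp(-0.5 * ((wl - center) / width) ** 2)
    return np.sum(spectrum * sens) / np.sum(sens)

wl = np.linspace(400.0, 700.0, 301)               # nm grid
flat = np.full_like(wl, 3.0)                      # featureless test spectrum
resp = gaussian_sensor_response(wl, flat, 550.0, 30.0)
```

An optimizer such as simulated annealing would vary `center` and `width` for each sensor to minimize the spectral/colorimetric cost over a training set of skylight spectra.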

  11. Multi sensor satellite imagers for commercial remote sensing

    NASA Astrophysics Data System (ADS)

    Cronje, T.; Burger, H.; Du Plessis, J.; Du Toit, J. F.; Marais, L.; Strumpfer, F.

    2005-10-01

    This paper will discuss and compare recent refractive and catadioptric imager designs developed and manufactured at SunSpace for Multi Sensor Satellite Imagers with panchromatic, multi-spectral, area and hyperspectral sensors on a single Focal Plane Array (FPA). These satellite optical systems were designed with applications such as monitoring food supplies, crop yield and disasters in mind. The aim of these imagers is to achieve medium to high resolution (2.5 m to 15 m) spatial sampling, wide swaths (up to 45 km) and noise equivalent reflectance (NER) values of less than 0.5%. State-of-the-art FPA designs are discussed, addressing the choice of detectors to achieve these performances. Special attention is given to thermal robustness and compactness, the use of folding prisms to place multiple detectors in a large FPA, and a specially developed process to customize the spectral selection, all while minimizing mass, power and cost. A refractive imager with up to 6 spectral bands (6.25 m GSD) and a catadioptric imager with panchromatic (2.7 m GSD), multi-spectral (6 bands, 4.6 m GSD) and hyperspectral (400 nm to 2.35 μm, 200 bands, 15 m GSD) sensors on the same FPA will be discussed. Both of these imagers are also equipped with real-time video view-finding capabilities. The electronic units can be subdivided into the Front-End Electronics and Control Electronics with analogue and digital signal processing. A dedicated Analogue Front-End is used for Correlated Double Sampling (CDS), black level correction, variable gain, up to 12-bit digitizing and a high speed LVDS data link to a mass memory unit.

  12. Evolution of miniature detectors and focal plane arrays for infrared sensors

    NASA Astrophysics Data System (ADS)

    Watts, Louis A.

    1993-06-01

    Sensors that are sensitive in the infrared spectral region have been under continuous development since the WW2 era. A quest for the military advantage of 'seeing in the dark' has pushed thermal imaging technology toward high spatial and temporal resolution for night vision equipment, fire control, search track, and seeker 'homing' guidance sensing devices. Similarly, scientific applications have pushed spectral resolution for chemical analysis, remote sensing of earth resources, and astronomical exploration applications. As a result of these developments, focal plane arrays (FPA) are now available with sufficient sensitivity for both high spatial and narrow bandwidth spectral resolution imaging over large fields of view. Such devices combined with emerging opto-electronic developments in integrated FPA data processing techniques can yield miniature sensors capable of imaging reflected sunlight in the near IR and emitted thermal energy in the midwave (MWIR) and longwave (LWIR) IR spectral regions. Robotic space sensors equipped with advanced versions of these FPAs will provide high resolution 'pictures' of their surroundings, perform remote analysis of solid, liquid, and gas matter, or selectively look for 'signatures' of specific objects. Evolutionary trends and projections of future low power micro detector FPA developments for day/night operation or use in adverse viewing conditions are presented in the following text.

  13. Evolution of miniature detectors and focal plane arrays for infrared sensors

    NASA Technical Reports Server (NTRS)

    Watts, Louis A.

    1993-01-01

    Sensors that are sensitive in the infrared spectral region have been under continuous development since the WW2 era. A quest for the military advantage of 'seeing in the dark' has pushed thermal imaging technology toward high spatial and temporal resolution for night vision equipment, fire control, search track, and seeker 'homing' guidance sensing devices. Similarly, scientific applications have pushed spectral resolution for chemical analysis, remote sensing of earth resources, and astronomical exploration applications. As a result of these developments, focal plane arrays (FPA) are now available with sufficient sensitivity for both high spatial and narrow bandwidth spectral resolution imaging over large fields of view. Such devices combined with emerging opto-electronic developments in integrated FPA data processing techniques can yield miniature sensors capable of imaging reflected sunlight in the near IR and emitted thermal energy in the midwave (MWIR) and longwave (LWIR) IR spectral regions. Robotic space sensors equipped with advanced versions of these FPAs will provide high resolution 'pictures' of their surroundings, perform remote analysis of solid, liquid, and gas matter, or selectively look for 'signatures' of specific objects. Evolutionary trends and projections of future low power micro detector FPA developments for day/night operation or use in adverse viewing conditions are presented in the following text.

  14. Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder

    NASA Astrophysics Data System (ADS)

    August, Isaac; Oiknine, Yaniv; Abuleil, Marwan; Abdulhalim, Ibrahim; Stern, Adrian

    2016-03-01

    Spectroscopic imaging has proven to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, due to the demand for high spectral and spatial resolution, it is extremely challenging to design and implement such systems in a miniaturized and cost-effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wide-band spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from an order of magnitude fewer spectral scanning shots than would be required by conventional systems.
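The compressive measurement model implied above can be sketched per pixel (the random matrices are stand-ins for the actual LC transmission curves, and the band/shot counts are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_shots = 32, 8                      # many bands, far fewer shots
spectrum = rng.random(n_bands)                # one pixel's unknown spectrum
modulations = rng.random((n_shots, n_bands))  # LC transmission per retarder state
shots = modulations @ spectrum                # one compressive reading per shot
```

A CS reconstruction algorithm would then recover `spectrum` from `shots` using sparsity priors; that inversion step is omitted here.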

  15. Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder.

    PubMed

    August, Isaac; Oiknine, Yaniv; AbuLeil, Marwan; Abdulhalim, Ibrahim; Stern, Adrian

    2016-03-23

    Spectroscopic imaging has proven to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, due to the demand for high spectral and spatial resolution, it is extremely challenging to design and implement such systems in a miniaturized and cost-effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wide-band spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from an order of magnitude fewer spectral scanning shots than would be required by conventional systems.

  16. Hyperspectral image classification by a variable interval spectral average and spectral curve matching combined algorithm

    NASA Astrophysics Data System (ADS)

    Senthil Kumar, A.; Keerthi, V.; Manjunath, A. S.; Werff, Harald van der; Meer, Freek van der

    2010-08-01

    Classification of hyperspectral images has been receiving considerable attention, with many new applications reported from commercial and military sectors. Hyperspectral images are composed of a large number of spectral channels and have the potential to deliver a great deal of information about a remotely sensed scene. However, in addition to high dimensionality, hyperspectral image classification is compounded by the coarse ground pixel size of the sensor, a consequence of maintaining adequate sensor signal-to-noise ratio within a fine spectral passband. This causes multiple ground features to jointly occupy a single pixel. Spectral mixture analysis typically begins with pixel classification using spectral matching techniques, followed by the use of spectral unmixing algorithms for estimating endmember abundance values in the pixel. The spectral matching techniques are analogous to supervised pattern recognition approaches and estimate the similarity between the spectral signatures of the pixel and a reference target. In this paper, we propose a spectral matching approach combining two schemes—the variable interval spectral average (VISA) method and the spectral curve matching (SCM) method. The VISA method helps to detect transient spectral features at different scales of spectral windows, while the SCM method finds a match between these features of the pixel and one of the library spectra by least squares fitting. We also compare the performance of the combined algorithm with other spectral matching techniques using simulated and AVIRIS hyperspectral data sets. Our results indicate that the proposed combination technique exhibits a stronger performance than the other methods in the classification of both pure and mixed class pixels simultaneously.
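The least-squares curve matching step can be sketched as follows (a minimal illustration of SCM only; the VISA multi-scale windowing and the toy spectra below are not from the paper):

```python
import numpy as np

def scm_best_match(pixel, library):
    """Index of the library spectrum with the smallest scaled least-squares residual."""
    residuals = []
    for ref in library:
        a = np.dot(ref, pixel) / np.dot(ref, ref)   # optimal scale factor
        residuals.append(np.sum((pixel - a * ref) ** 2))
    return int(np.argmin(residuals))

library = np.array([[0.1, 0.4, 0.6],
                    [0.5, 0.5, 0.1]])
pixel = 2.0 * library[0]                  # a brightness-scaled copy of spectrum 0
match = scm_best_match(pixel, library)
```

Fitting a free scale factor per reference makes the match insensitive to overall brightness, so only the spectral shape drives the classification.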

  17. Radiometric characterization of hyperspectral imagers using multispectral sensors

    NASA Astrophysics Data System (ADS)

    McCorkel, Joel; Thome, Kurt; Leisso, Nathan; Anderson, Nikolaus; Czapla-Myers, Jeff

    2009-08-01

    The Remote Sensing Group (RSG) at the University of Arizona has a long history of using ground-based test sites for the calibration of airborne and satellite based sensors. Ground-truth measurements at these test sites are not always possible, however, due to weather and funding availability. Therefore, RSG has also employed automated ground instrument approaches and cross-calibration methods to verify the radiometric calibration of a sensor. The goal in the cross-calibration method is to transfer the calibration of a well-known sensor to that of a different sensor. This work studies the feasibility of determining the radiometric calibration of a hyperspectral imager using multispectral imagery. The work relies on the Moderate Resolution Imaging Spectroradiometer (MODIS) as a reference for the hyperspectral sensor Hyperion. Test sites used for comparisons are Railroad Valley in Nevada and a portion of the Libyan Desert in North Africa. Hyperion bands are compared to MODIS by band averaging Hyperion's high spectral resolution data with the relative spectral response of MODIS. The results compare cross-calibration scenarios that differ in image acquisition coincidence, test site used for the calibration, and reference sensor. Cross-calibration results are presented that show agreement between the use of coincident and non-coincident image pairs within 2% in most bands, as well as similar agreement between results that employ the different MODIS sensors as a reference.

  18. Radiometric Characterization of Hyperspectral Imagers using Multispectral Sensors

    NASA Technical Reports Server (NTRS)

    McCorkel, Joel; Kurt, Thome; Leisso, Nathan; Anderson, Nikolaus; Czapla-Myers, Jeff

    2009-01-01

    The Remote Sensing Group (RSG) at the University of Arizona has a long history of using ground-based test sites for the calibration of airborne and satellite based sensors. Ground-truth measurements at these test sites are not always possible, however, due to weather and funding availability. Therefore, RSG has also employed automated ground instrument approaches and cross-calibration methods to verify the radiometric calibration of a sensor. The goal in the cross-calibration method is to transfer the calibration of a well-known sensor to that of a different sensor. This work studies the feasibility of determining the radiometric calibration of a hyperspectral imager using multispectral imagery. The work relies on the Moderate Resolution Imaging Spectroradiometer (MODIS) as a reference for the hyperspectral sensor Hyperion. Test sites used for comparisons are Railroad Valley in Nevada and a portion of the Libyan Desert in North Africa. Hyperion bands are compared to MODIS by band averaging Hyperion's high spectral resolution data with the relative spectral response of MODIS. The results compare cross-calibration scenarios that differ in image acquisition coincidence, test site used for the calibration, and reference sensor. Cross-calibration results are presented that show agreement between the use of coincident and non-coincident image pairs within 2% in most bands, as well as similar agreement between results that employ the different MODIS sensors as a reference.

  19. On the Challenge of Observing Pelagic Sargassum in Coastal Oceans: A Multi-sensor Assessment

    NASA Astrophysics Data System (ADS)

    Hu, C.; Feng, L.; Hardy, R.; Hochberg, E. J.

    2016-02-01

    Remote detection of pelagic Sargassum is often hindered by its spectral similarity to other floating materials and by inadequate spatial resolution. Using measurements from multi-spectral satellite sensors (Moderate Resolution Imaging Spectroradiometer or MODIS, Landsat, WorldView-2 or WV-2) as well as hyperspectral sensors (Hyperspectral Imager for the Coastal Ocean or HICO, Airborne Visible-InfraRed Imaging Spectrometer or AVIRIS) and airborne digital photos, we analyze and compare their ability (in terms of spectral and spatial resolution) to detect Sargassum and to differentiate it from other floating materials such as Trichodesmium, Syringodium, Ulva, garbage, and emulsified oil. Field measurements suggest that Sargassum has a distinctive reflectance curvature around 630 nm due to its chlorophyll c pigments, which provides a unique spectral signature when combined with the reflectance ratio between brown (~650 nm) and green (~555 nm) wavelengths. For a 10-nm resolution sensor on the hyperspectral HyspIRI mission currently being planned by NASA, a stepwise rule examining several indexes established from 6 bands (centered at 555, 605, 625, 645, 685, and 755 nm) is shown to be effective in unambiguously differentiating Sargassum from all other floating materials. Numerical simulations using spectral endmembers and noise in the satellite-derived reflectance suggest that spectral discrimination is degraded when a pixel is mixed between Sargassum and water. A minimum of 20-30% Sargassum coverage within a pixel is required to retain such ability, while the partial coverage can be as low as 1-2% when detecting floating materials without spectral discrimination. With its expected signal-to-noise ratios (SNRs ~200:1), the hyperspectral HyspIRI mission may provide a compromise between spatial resolution and spatial coverage to improve our capacity to detect, discriminate, and quantify Sargassum.
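The curvature and ratio features described above can be sketched numerically (band centers follow the text, but the reflectance values are invented for illustration and the functions below are not the authors' stepwise rule):

```python
def curvature_625(r605, r625, r645):
    """Curvature near the chlorophyll-c absorption dip: negative for Sargassum-like spectra."""
    return r625 - (r605 + r645) / 2.0

def brown_green_ratio(r650, r555):
    """Reflectance ratio between brown (~650 nm) and green (~555 nm) wavelengths."""
    return r650 / r555

# a Sargassum-like spectrum: a dip near 625 nm and brown > green reflectance
dip = curvature_625(0.10, 0.07, 0.11)     # negative at the absorption feature
ratio = brown_green_ratio(0.12, 0.08)     # > 1 for a brownish target
```

A stepwise rule would threshold several such indexes in sequence to separate Sargassum from spectrally flat or green floating materials.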

  20. Analysis of In-Situ Spectral Reflectance of Sago and Other Palms: Implications for Their Detection in Optical Satellite Images

    NASA Astrophysics Data System (ADS)

    Rendon Santillan, Jojene; Makinano-Santillan, Meriam

    2018-04-01

    We present a characterization, comparison and analysis of the in-situ spectral reflectance of Sago and other palms (coconut, oil palm and nipa) to ascertain in which part of the electromagnetic spectrum these palms are distinguishable from each other. The analysis also aims to reveal information that will assist in selecting which band to use when mapping Sago palms with images acquired by optical satellite sensors. The datasets used in the analysis consisted of averaged spectral reflectance curves of each palm species measured within the 345-1045 nm wavelength range using an Ocean Optics USB4000-VIS-NIR Miniature Fiber Optic Spectrometer. These in-situ reflectance data were also resampled to match the spectral response of the 4 bands of ALOS AVNIR-2, 3 bands of ASTER VNIR, 4 bands of Landsat 7 ETM+, 5 bands of Landsat 8, and 8 bands of Worldview-2 (WV2). Examination of the spectral reflectance curves showed that the near-infrared region, specifically at 770, 800 and 875 nm, provides the best wavelengths at which Sago palms can be distinguished from other palms. Resampling the in-situ reflectance spectra to match the spectral response of optical sensors made it possible to analyze the differences in reflectance values of Sago and other palms in the different bands of each sensor. Overall, the knowledge gained from the analysis can be useful in the actual analysis of optical satellite images, specifically in determining which bands to include or exclude, or whether to use all bands of a sensor, when discriminating and mapping Sago palms.

  1. Atmospheric correction for hyperspectral ocean color sensors

    NASA Astrophysics Data System (ADS)

    Ibrahim, A.; Ahmad, Z.; Franz, B. A.; Knobelspiesse, K. D.

    2017-12-01

    NASA's heritage Atmospheric Correction (AC) algorithm for multi-spectral ocean color sensors is inadequate for the new generation of spaceborne hyperspectral sensors, such as NASA's first hyperspectral Ocean Color Instrument (OCI) onboard the anticipated Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) satellite mission. The AC process must estimate and remove the atmospheric path radiance contribution due to the Rayleigh scattering by air molecules and by aerosols from the measured top-of-atmosphere (TOA) radiance. Further, it must also compensate for the absorption by atmospheric gases and correct for reflection and refraction of the air-sea interface. We present and evaluate an improved AC for hyperspectral sensors beyond the heritage approach by utilizing the additional spectral information of the hyperspectral sensor. The study encompasses a theoretical radiative transfer sensitivity analysis as well as a practical application of the Hyperspectral Imager for the Coastal Ocean (HICO) and the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensors.
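The bookkeeping of the AC process described above can be sketched as simple per-band arithmetic (all terms below are toy placeholders; a real AC estimates each term from radiative transfer and the hyperspectral measurements themselves):

```python
import numpy as np

def water_leaving(toa, rayleigh, aerosol, gas_transmittance):
    """Residual water-leaving signal after removing atmospheric path terms."""
    return (toa - rayleigh - aerosol) / gas_transmittance

toa = np.array([0.12, 0.10, 0.08])         # per-band TOA reflectance
lw = water_leaving(toa, 0.05, 0.02, 0.9)   # subtract path terms, correct gases
```

The difficulty lies not in this subtraction but in estimating the Rayleigh, aerosol, and gas terms accurately, which is where the extra spectral information of a hyperspectral sensor helps.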

  2. Integrative Multi-Spectral Sensor Device for Far-Infrared and Visible Light Fusion

    NASA Astrophysics Data System (ADS)

    Qiao, Tiezhu; Chen, Lulu; Pang, Yusong; Yan, Gaowei

    2018-06-01

    Infrared and visible light image fusion technology has been a hot spot in multi-sensor fusion research in recent years. Existing infrared and visible light fusion systems must register the images before fusion because they use two separate cameras, and the accuracy of such registration still leaves room for improvement. Hence, a novel integrative multi-spectral sensor device is proposed for infrared and visible light fusion: using a beam splitter prism, the coaxial light entering a single lens is projected onto an infrared charge coupled device (CCD) and a visible light CCD, respectively. In this paper, the imaging mechanism of the proposed sensor device is studied together with the signal acquisition and fusion process. A simulation experiment covering the entire chain of the optical system, signal acquisition, and signal fusion is constructed based on an imaging effect model, and a quality evaluation index is adopted to analyze the simulation result. The experimental results demonstrate that the proposed sensor device is effective and feasible.

  3. Detection of Special Operations Forces Using Night Vision Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, C.M.

    2001-10-22

    Night vision devices, such as image intensifiers and infrared imagers, are readily available to a host of nations, organizations, and individuals through international commerce. Once the trademark of special operations units, these devices are widely advertised to ''turn night into day''. In truth, they cannot accomplish this formidable task, but they do offer impressive enhancement of vision in limited light scenarios through electronically generated images. Image intensifiers and infrared imagers are both electronic devices for enhancing vision in the dark. However, each is based upon a totally different physical phenomenon. Image intensifiers amplify the available light energy, whereas infrared imagers detect the thermal energy radiated from all objects. Because of this, each device operates from energy present in a different portion of the electromagnetic spectrum. This leads to differences in the ability of each device to detect and/or identify objects. This report is a compilation of the available information on both state-of-the-art image intensifiers and infrared imagers. Image intensifiers developed in the United States, as well as some foreign-made image intensifiers, are discussed. Image intensifiers are categorized according to their spectral response and sensitivity using the nomenclature GEN I, GEN II, and GEN III. Because the first generation of image intensifiers, GEN I, were large and of limited performance, this report deals only with GEN II and GEN III equipment. Infrared imagers are generally categorized according to their spectral response, sensor materials, and related sensor operating temperature using the nomenclature Medium Wavelength Infrared (MWIR) Cooled and Long Wavelength Infrared (LWIR) Uncooled. MWIR Cooled refers to infrared imagers which operate in the 3 to 5 μm wavelength region of the electromagnetic spectrum and require either mechanical or thermoelectric coolers to keep the sensors operating at 77 K. LWIR Uncooled refers to infrared imagers which operate in the 8 to 12 μm wavelength region and do not require cooling below room temperature. Both commercial and military infrared sensors of these two types are discussed.

  4. Assessment of spectral, misregistration, and spatial uncertainties inherent in the cross-calibration study

    USGS Publications Warehouse

    Chander, G.; Helder, D.L.; Aaron, David; Mishra, N.; Shrestha, A.K.

    2013-01-01

    Cross-calibration of satellite sensors permits the quantitative comparison of measurements obtained from different Earth Observing (EO) systems. Cross-calibration studies usually use simultaneous or near-simultaneous observations from several spaceborne sensors to develop band-by-band relationships through regression analysis. The investigation described in this paper focuses on evaluation of the uncertainties inherent in the cross-calibration process, including contributions due to different spectral responses, spectral resolution, spectral filter shift, geometric misregistration, and spatial resolution. Hyperspectral data from the Environmental Satellite SCanning Imaging Absorption SpectroMeter for Atmospheric CartograpHY and the EO-1 Hyperion, along with the relative spectral responses (RSRs) from the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and the Terra Moderate Resolution Imaging Spectroradiometer sensors, were used for the spectral uncertainty study. Data from the Landsat 5 Thematic Mapper (TM) over five representative land cover types (desert, rangeland, grassland, deciduous forest, and coniferous forest) were used for the geometric misregistration and spatial-resolution study. The spectral resolution uncertainty was found to be within 0.25%, the spectral filter shift within 2.5%, geometric misregistration within 0.35%, and spatial-resolution effects within 0.1% for the Libya 4 site. The one-sigma uncertainties presented in this paper are uncorrelated, and therefore they can be summed orthogonally; from them, an overall total uncertainty was developed. In general, the results suggest that the spectral uncertainty dominates the other uncertainties presented in this paper. Therefore, the effect of the sensor RSR differences needs to be quantified and compensated for to avoid large uncertainties in cross-calibration results.
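The orthogonal (root-sum-square) combination of uncorrelated one-sigma uncertainties described in the abstract can be sketched in a few lines; the numeric values are the per-source percentages quoted for the Libya 4 site:

```python
import math

# One-sigma uncertainties (in percent) reported for the Libya 4 site.
# Because the sources are uncorrelated, they combine in quadrature.
uncertainties = {
    "spectral_resolution": 0.25,
    "spectral_filter_shift": 2.5,
    "geometric_misregistration": 0.35,
    "spatial_resolution": 0.1,
}

total = math.sqrt(sum(u ** 2 for u in uncertainties.values()))
print(f"Total cross-calibration uncertainty: {total:.2f}%")
```

The filter-shift term dominates the quadrature sum, consistent with the abstract's conclusion that spectral uncertainty is the dominant contributor.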

  5. Functional Form of the Radiometric Equation for the SNPP VIIRS Reflective Solar Bands: An Initial Study

    NASA Technical Reports Server (NTRS)

    Lei, Ning; Xiong, Xiaoxiong

    2016-01-01

    The Visible Infrared Imaging Radiometer Suite (VIIRS) aboard the Suomi National Polar-orbiting Partnership (SNPP) satellite is a passive scanning radiometer and imager, observing radiative energy from the Earth in 22 spectral bands from 0.41 to 12 microns, which include 14 reflective solar bands (RSBs). Extending the formula used by the Moderate Resolution Imaging Spectroradiometer instruments, the VIIRS currently determines the sensor aperture spectral radiance through a quadratic polynomial of its detector digital count. It is known that for the RSBs the quadratic polynomial is not adequate over the design-specified spectral radiance range, and using a quadratic polynomial could drastically increase the errors in the polynomial coefficients, leading to possibly large errors in the determined aperture spectral radiance. In addition, it is very desirable to be able to extend the radiance calculation formula to correctly retrieve the aperture spectral radiance at levels beyond the design-specified range. In order to more accurately determine the aperture spectral radiance from the observed digital count, we examine a few polynomials of the detector digital count for calculating the sensor aperture spectral radiance.
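A minimal sketch of why a higher-order count-to-radiance polynomial can outperform the quadratic one, using a synthetic detector response (the coefficients below are hypothetical illustrations, not actual VIIRS calibration values):

```python
import numpy as np

# Synthetic detector response: radiance L vs digital number dn with a mild
# cubic nonlinearity (hypothetical values, not actual VIIRS coefficients).
dn = np.linspace(0, 4000, 50)
true_L = 1e-3 * dn + 2e-8 * dn**2 + 3e-12 * dn**3

# Fit L(dn) = c0 + c1*dn + c2*dn^2 (+ c3*dn^3); coefficients low-to-high order.
quad = np.polynomial.polynomial.polyfit(dn, true_L, 2)
cubic = np.polynomial.polynomial.polyfit(dn, true_L, 3)

resid_quad = np.max(np.abs(np.polynomial.polynomial.polyval(dn, quad) - true_L))
resid_cubic = np.max(np.abs(np.polynomial.polynomial.polyval(dn, cubic) - true_L))
# The cubic fit recovers the response far more closely than the quadratic.
print(resid_quad, resid_cubic)
```

When the true response has a cubic component, forcing a quadratic fit leaves a systematic residual that grows toward the top of the dynamic range, which mirrors the abstract's concern about radiance levels beyond the design-specified region.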

  6. Intelligent image processing for vegetation classification using multispectral LANDSAT data

    NASA Astrophysics Data System (ADS)

    Santos, Stewart R.; Flores, Jorge L.; Garcia-Torales, G.

    2015-09-01

    We propose an intelligent computational technique for the analysis of vegetation images acquired with a multispectral scanner (MSS) sensor. This work focuses on intelligent and adaptive artificial neural network (ANN) methodologies that allow segmentation and classification of spectral remote sensing (RS) signatures in order to obtain a high-resolution map in which we can delimit wooded areas and quantify the amount of combustible material present in these areas. This could provide important information to prevent fires and deforestation of wooded areas. The spectral RS input data acquired by the MSS sensor are treated as a random-propagation remotely sensed scene with unknown statistics for each Thematic Mapper (TM) band. By performing high-resolution reconstruction and augmenting these spectral values with neighboring-pixel information from each TM band, we can include contextual information in the ANN. The biggest challenge for conventional classifiers is how to reduce the number of components in the feature vector while preserving the major information contained in the data, especially when the dimensionality of the feature space is high. Preliminary results show that the Adaptive Modified Neural Network method is a promising and effective spectral method for segmentation and classification of RS images acquired with the MSS sensor.

  7. Quantification of Water Quality Parameters for the Wabash River Using Hyperspectral Remote Sensing

    NASA Astrophysics Data System (ADS)

    Tan, J.; Cherkauer, K. A.; Chaubey, I.

    2011-12-01

    Increasingly impaired water bodies in the agriculturally dominated Midwestern United States pose a risk to water supplies, aquatic ecology and contribute to the eutrophication of the Gulf of Mexico. Improving regional water quality calls for new techniques for monitoring and managing water quality over large river systems. Optical indicators of water quality enable a timely and cost-effective method for observing and quantifying water quality conditions by remote sensing. Compared to broad spectral sensors such as Landsat, which observe reflectance over limited spectral bands, hyperspectral sensors should have significant advantages in their ability to estimate water quality parameters because they are designed to split the spectral signature into hundreds of very narrow spectral bands increasing their ability to resolve optically sensitive water quality indicators. Two airborne hyperspectral images were acquired over the Wabash River using a ProSpecTIR-VS2 sensor system on May 15th, 2010. These images were analyzed together with concurrent in-stream water quality data collected to assess our ability to extract optically sensitive constituents. Utilizing the correlation between in-stream data and reflectance from the hyperspectral images, models were developed to estimate the concentrations of chlorophyll a, dissolved organic carbon and total suspended solids. Models were developed using the full array of hyperspectral bands, as well as Landsat bands synthesized by averaging hyperspectral bands within the Landsat spectral range. Higher R2 and lower RMSE values were found for the models taking full advantage of the hyperspectral sensor, supporting the conclusion that the hyperspectral sensor was better at predicting the in-stream concentrations of chlorophyll a, dissolved organic carbon and total suspended solids in the Wabash River. Results also suggest that predictive models may not be the same for the Wabash River as for its tributaries.
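The abstract's comparison hinges on synthesizing Landsat-like bands by averaging the hyperspectral bands that fall within each Landsat band's spectral range; a minimal sketch (the cube, band centers, and band limits below are hypothetical, not ProSpecTIR-VS2 values):

```python
import numpy as np

# Hypothetical hyperspectral cube: contiguous 5-nm bands from 400-900 nm.
rng = np.random.default_rng(0)
wavelengths = np.arange(400, 900, 5)           # band centers, nm
cube = rng.random((20, 20, wavelengths.size))  # rows x cols x bands (reflectance)

def synthesize_band(cube, wavelengths, lo, hi):
    """Average the narrow hyperspectral bands falling inside [lo, hi] nm."""
    mask = (wavelengths >= lo) & (wavelengths <= hi)
    return cube[:, :, mask].mean(axis=2)

# Landsat TM/ETM+ band 2 (green) spans roughly 520-600 nm.
green = synthesize_band(cube, wavelengths, 520, 600)
print(green.shape)  # one broadband image per synthesized Landsat band
```

Regression models built on the full set of narrow bands versus the few synthesized broad bands can then be compared by R² and RMSE, as done in the study.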

  8. Methods for gas detection using stationary hyperspectral imaging sensors

    DOEpatents

    Conger, James L [San Ramon, CA; Henderson, John R [Castro Valley, CA

    2012-04-24

    According to one embodiment, a method comprises producing a first hyperspectral imaging (HSI) data cube of a location at a first time using data from a HSI sensor; producing a second HSI data cube of the same location at a second time using data from the HSI sensor; subtracting on a pixel-by-pixel basis the second HSI data cube from the first HSI data cube to produce a raw difference cube; calibrating the raw difference cube to produce a calibrated raw difference cube; selecting at least one desired spectral band based on a gas of interest; producing a detection image based on the at least one selected spectral band and the calibrated raw difference cube; examining the detection image to determine presence of the gas of interest; and outputting a result of the examination. Other methods, systems, and computer program products for detecting the presence of a gas are also described.
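The core steps of the claimed method, differencing two co-located HSI cubes pixel by pixel and forming a detection image from a selected spectral band, can be sketched numerically (the cube sizes, band index, absorption depth, and threshold are hypothetical, and the calibration step is omitted):

```python
import numpy as np

# Two hypothetical HSI data cubes (rows x cols x bands) of the same location
# at two times; a synthetic "plume" absorbs in band 42 in the second cube.
rng = np.random.default_rng(1)
cube_t1 = rng.normal(1.0, 0.01, (64, 64, 100))
cube_t2 = cube_t1 + rng.normal(0.0, 0.01, (64, 64, 100))
cube_t2[20:30, 20:30, 42] -= 0.5  # gas absorption feature in a 10x10 region

# Pixel-by-pixel difference cube, then a detection image from the
# spectral band chosen for the gas of interest.
diff = cube_t1 - cube_t2
detection = diff[:, :, 42]

# Simple threshold to flag pixels where the gas signature appears.
present = detection > 0.25
print(present.sum())  # number of flagged pixels
```

Differencing against an earlier cube of the same scene cancels the static background, so only changes, such as an appearing gas plume, survive in the selected band.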

  9. Imaging Science Panel. Multispectral Imaging Science Working Group joint meeting with Information Science Panel: Introduction

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The state of the art of multispectral sensing is reviewed and recommendations for future research and development are proposed. Specifically, two generic sensor concepts were discussed. One is the multispectral pushbroom sensor utilizing linear array technology, which operates in six spectral bands, including two in the SWIR region, and incorporates capabilities for stereo and cross-track pointing. The second concept is the imaging spectrometer (IS), which incorporates a dispersive element and area arrays to provide both spectral and spatial information simultaneously. Other key technology areas included very large scale integration and the computer-aided design of these devices.

  10. Radiometric cross-calibration of EO-1 ALI with L7 ETM+ and Terra MODIS sensors using near-simultaneous desert observations

    USGS Publications Warehouse

    Chander, Gyanesh; Angal, Amit; Choi, Taeyoung; Xiong, Xiaoxiong

    2013-01-01

    The Earth Observing-1 (EO-1) satellite was launched on November 21, 2000, as part of a one-year technology demonstration mission. The mission was extended because of the value it continued to add to the scientific community. EO-1 has now been operational for more than a decade, providing both multispectral and hyperspectral measurements. As part of the EO-1 mission, the Advanced Land Imager (ALI) sensor demonstrates a potential technological direction for the next generation of Landsat sensors. To evaluate the ALI sensor capabilities as a precursor to the Operational Land Imager (OLI) onboard the Landsat Data Continuity Mission (LDCM, or Landsat 8 after launch), its measured top-of-atmosphere (TOA) reflectances were compared to the well-calibrated Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) and the Terra Moderate Resolution Imaging Spectroradiometer (MODIS) sensors in the reflective solar bands (RSB). These three satellites operate in a near-polar, sun-synchronous orbit 705 km above the Earth's surface. EO-1 was designed to fly one minute behind L7 and approximately 30 minutes in front of Terra. In this configuration, all the three sensors can view near-identical ground targets with similar atmospheric, solar, and viewing conditions. However, because of the differences in the relative spectral response (RSR), the measured physical quantities can be significantly different while observing the same target. The cross-calibration of ALI with ETM+ and MODIS was performed using near-simultaneous surface observations based on image statistics from areas observed by these sensors over four desert sites (Libya 4, Mauritania 2, Arabia 1, and Sudan 1). The differences in the measured TOA reflectances due to RSR mismatches were compensated by using a spectral band adjustment factor (SBAF), which takes into account the spectral profile of the target and the RSR of each sensor. 
For this study, the spectral profile of the target comes from the near-simultaneous EO-1 Hyperion data over these sites. The results indicate that the TOA reflectance measurements for ALI agree with those of ETM+ and MODIS to within 5% after the application of SBAF.

  11. Synthesis of Multispectral Bands from Hyperspectral Data: Validation Based on Images Acquired by AVIRIS, Hyperion, ALI, and ETM+

    NASA Technical Reports Server (NTRS)

    Blonski, Slawomir; Glasser, Gerald; Russell, Jeffrey; Ryan, Robert; Terrie, Greg; Zanoni, Vicki

    2003-01-01

    Spectral band synthesis is a key step in the process of creating a simulated multispectral image from hyperspectral data. In this step, narrow hyperspectral bands are combined into broader multispectral bands. Such an approach has been used quite often, but to the best of our knowledge accuracy of the band synthesis simulations has not been evaluated thus far. Therefore, the main goal of this paper is to provide validation of the spectral band synthesis algorithm used in the ART software. The next section contains a description of the algorithm and an example of its application. Using spectral responses of AVIRIS, Hyperion, ALI, and ETM+, the following section shows how the synthesized spectral bands compare with actual bands, and it presents an evaluation of the simulation accuracy based on results of MODTRAN modeling. In the final sections of the paper, simulated images are compared with data acquired by actual satellite sensors. First, a Landsat 7 ETM+ image is simulated using an AVIRIS hyperspectral data cube. Then, two datasets collected with the Hyperion instrument from the EO-1 satellite are used to simulate multispectral images from the ALI and ETM+ sensors.
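At its core, spectral band synthesis is a response-weighted average of narrow hyperspectral bands; a minimal sketch using a hypothetical Gaussian spectral response rather than actual AVIRIS/ETM+ response curves:

```python
import numpy as np

# Hypothetical narrow-band hyperspectral spectrum sampled every 10 nm.
wavelengths = np.arange(400.0, 701.0, 10.0)    # nm
radiance = 0.2 + 0.001 * (wavelengths - 400.0)  # toy linearly increasing spectrum

# Gaussian spectral response centered at 560 nm (stand-in for a green
# multispectral band; not a real sensor response).
rsr = np.exp(-0.5 * ((wavelengths - 560.0) / 30.0) ** 2)

# Broad band = response-weighted average of the narrow bands.
band_radiance = np.sum(radiance * rsr) / np.sum(rsr)
print(round(band_radiance, 4))
```

For a linear spectrum the band-averaged value equals the radiance at the response centroid (here 560 nm, giving 0.36), which is a quick sanity check on the weighting.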

  12. Role of Imaging Spectrometer Data for Model-based Cross-calibration of Imaging Sensors

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis John

    2014-01-01

    Site characterization benefits from imaging spectrometry to determine the spectral bi-directional reflectance of a well-understood surface. Topics covered include cross-calibration approaches and their uncertainties, the role of imaging spectrometry, model-based site characterization, and application to product validation.

  13. Evaluation of Algorithms for Compressing Hyperspectral Data

    NASA Technical Reports Server (NTRS)

    Cook, Sid; Harsanyi, Joseph; Faber, Vance

    2003-01-01

    With EO-1 Hyperion in orbit, NASA is showing its continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost-effective communication and data handling systems. Lockheed Martin, with considerable experience in spacecraft design and developing special-purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), which has an extensive heritage in HSI spectral compression, and Mapping Science (MSI), for JPEG 2000 spatial compression expertise, to develop a real-time and intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor > 100, while retaining the spectral and spatial fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our compression algorithms leverage commercial-off-the-shelf (COTS) spectral and spatial exploitation algorithms. We are currently evaluating these compression algorithms using statistical analysis and assessments by NASA scientists. We are also developing special-purpose processors for executing these algorithms onboard a spacecraft.
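The abstract does not disclose the specific compression algorithms, but spectral decorrelation via PCA is a standard ingredient of HSI compression; a generic numpy sketch on a synthetic low-rank cube (illustrative only, not the authors' method):

```python
import numpy as np

# Toy HSI cube: 32x32 pixels x 50 bands with strong spectral correlation
# (three underlying "endmembers" plus a little noise).
rng = np.random.default_rng(2)
base = rng.random((32 * 32, 3))
mixing = rng.random((3, 50))
cube = base @ mixing + rng.normal(0, 0.01, (32 * 32, 50))

# PCA via SVD: keep only the leading spectral components.
mean = cube.mean(axis=0)
u, s, vt = np.linalg.svd(cube - mean, full_matrices=False)
k = 3
compressed = u[:, :k] * s[:k]          # per-pixel scores: 3 values instead of 50
restored = compressed @ vt[:k] + mean  # approximate reconstruction

rel_err = np.linalg.norm(restored - cube) / np.linalg.norm(cube)
print(f"{50 // k}x spectral reduction, relative error {rel_err:.3f}")
```

Because natural HSI bands are highly correlated, a few principal components retain most of the spectral fidelity; spatial compression (e.g., JPEG 2000) can then be applied to the component images.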

  14. Evaluating sensor linearity of chosen infrared sensors

    NASA Astrophysics Data System (ADS)

    Walczykowski, P.; Orych, A.; Jenerowicz, A.; Karcz, P.

    2014-11-01

    The paper describes a series of experiments conducted as part of the IRAMSWater Project, the aim of which is to establish methodologies for detecting and identifying pollutants in water bodies using aerial imagery data. The main idea is based on the hypothesis that it is possible to identify certain types of physical, biological, and chemical pollutants based on their spectral reflectance characteristics. Knowledge of these spectral curves is then used to determine very narrow spectral bands in which the greatest reflectance variations between these pollutants occur. A frame camera is then equipped with a band-pass filter, which allows only the selected bandwidth to be registered. In order to obtain reliable reflectance data straight from the images, the team at the Military University of Technology developed a methodology for determining the necessary acquisition parameters for the sensor (integration time and f-stop, depending on the distance from the scene and its illumination). This methodology, however, is based on the assumption that the imaging sensors have a linear response. This paper shows the results of experiments used to evaluate this linearity.

  15. Comparative assessment of astigmatism-corrected Czerny-Turner imaging spectrometer using off-the-shelf optics

    NASA Astrophysics Data System (ADS)

    Yuan, Qun; Zhu, Dan; Chen, Yueyang; Guo, Zhenyan; Zuo, Chao; Gao, Zhishan

    2017-04-01

    We present the optical design of a Czerny-Turner imaging spectrometer for which astigmatism is corrected using off-the-shelf optics resulting in spectral resolution of 0.1 nm. The classic Czerny-Turner imaging spectrometer, consisting of a plane grating, two spherical mirrors, and a sensor with 10-μm pixels, was used as the benchmark. We comparatively assessed three configurations of the spectrometer that corrected astigmatism with divergent illumination of the grating, by adding a cylindrical lens, or by adding a cylindrical mirror. When configured with the added cylindrical lens, the imaging spectrometer with a point field of view (FOV) and a linear sensor achieved diffraction-limited performance over a broadband width of 400 nm centered at 800 nm, while the maximum allowable bandwidth was only 200 nm for the other two configurations. When configured with the added cylindrical mirror, the imaging spectrometer with a one-dimensional field of view (1D FOV) and an area sensor showed its superiority on imaging quality, spectral nonlinearity, as well as keystone over 100 nm bandwidth and 10 mm spatial extent along the entrance slit.

  16. Geologist's Field Assistant: Developing Image and Spectral Analyses Algorithms for Remote Science Exploration

    NASA Technical Reports Server (NTRS)

    Gulick, V. C.; Morris, R. L.; Bishop, J.; Gazis, P.; Alena, R.; Sierhuis, M.

    2002-01-01

    We are developing science analyses algorithms to interface with a Geologist's Field Assistant device to allow robotic or human remote explorers to better sense their surroundings during limited surface excursions. Our algorithms will interpret spectral and imaging data obtained by various sensors. Additional information is contained in the original extended abstract.

  17. Hyperspectral Image Analysis for Skin Tumor Detection

    NASA Astrophysics Data System (ADS)

    Kong, Seong G.; Park, Lae-Jeong

    This chapter presents hyperspectral imaging of fluorescence for noninvasive detection of tumorous tissue on mouse skin. Hyperspectral imaging sensors collect two-dimensional (2D) image data of an object in a number of narrow, adjacent spectral bands. This high-resolution measurement of spectral information reveals a continuous emission spectrum for each image pixel useful for skin tumor detection. The hyperspectral image data used in this study are fluorescence intensities of a mouse sample consisting of 21 spectral bands in the visible spectrum of wavelengths ranging from 440 to 640 nm. Fluorescence signals are measured using a laser excitation source with the center wavelength of 337 nm. An acousto-optic tunable filter is used to capture individual spectral band images at a 10-nm resolution. All spectral band images are spatially registered with the reference band image at 490 nm to obtain exact pixel correspondences by compensating the offsets caused during the image capture procedure. The support vector machines with polynomial kernel functions provide decision boundaries with a maximum separation margin to classify malignant tumor and normal tissue from the observed fluorescence spectral signatures for skin tumor detection.

  18. An approach to estimate spatial distribution of analyte within cells using spectrally-resolved fluorescence microscopy.

    PubMed

    Sharma, Dharmendar Kumar; Irfanullah, Mir; Basu, Santanu Kumar; Madhu, Sheri; De, Suman; Jadhav, Sameer; Ravikanth, Mangalampalli; Chowdhury, Arindam

    2017-01-18

    While fluorescence microscopy has become an essential tool amongst chemists and biologists for the detection of various analyte within cellular environments, non-uniform spatial distribution of sensors within cells often restricts extraction of reliable information on relative abundance of analytes in different subcellular regions. As an alternative to existing sensing methodologies such as ratiometric or FRET imaging, where relative proportion of analyte with respect to the sensor can be obtained within cells, we propose a methodology using spectrally-resolved fluorescence microscopy, via which both the relative abundance of sensor as well as their relative proportion with respect to the analyte can be simultaneously extracted for local subcellular regions. This method is exemplified using a BODIPY sensor, capable of detecting mercury ions within cellular environments, characterized by spectral blue-shift and concurrent enhancement of emission intensity. Spectral emission envelopes collected from sub-microscopic regions allowed us to compare the shift in transition energies as well as integrated emission intensities within various intracellular regions. Construction of a 2D scatter plot using spectral shifts and emission intensities, which depend on the relative amount of analyte with respect to sensor and the approximate local amounts of the probe, respectively, enabled qualitative extraction of relative abundance of analyte in various local regions within a single cell as well as amongst different cells. Although the comparisons remain semi-quantitative, this approach involving analysis of multiple spectral parameters opens up an alternative way to extract spatial distribution of analyte in heterogeneous systems. 
The proposed method would be especially relevant for fluorescent probes that undergo relatively nominal shift in transition energies compared to their emission bandwidths, which often restricts their usage for quantitative ratiometric imaging in cellular media due to strong cross-talk between energetically separated detection channels.
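The two spectral parameters behind the proposed 2D scatter plot, the intensity-weighted centroid shift and the integrated emission intensity, can be sketched on toy spectra (the wavelengths, widths, and amplitudes below are hypothetical, not the BODIPY sensor's actual values):

```python
import numpy as np

# Toy spectrally-resolved data: emission spectra from two subcellular regions.
# The analyte-bound sensor is blue-shifted and brighter (hypothetical numbers).
wavelengths = np.linspace(500.0, 600.0, 101)  # nm, uniform 1-nm grid

def gaussian(center, amplitude):
    return amplitude * np.exp(-0.5 * ((wavelengths - center) / 10.0) ** 2)

region_free = gaussian(560.0, 1.0)    # sensor with little analyte
region_bound = gaussian(545.0, 2.5)   # blue-shifted, enhanced emission

def spectral_parameters(spectrum):
    """Intensity-weighted centroid (nm) and integrated emission intensity."""
    intensity = spectrum.sum()                              # uniform grid
    centroid = (spectrum * wavelengths).sum() / intensity
    return centroid, intensity

# Each region becomes one point in the 2D (shift, intensity) scatter plot.
c_free, i_free = spectral_parameters(region_free)
c_bound, i_bound = spectral_parameters(region_bound)
print(c_bound - c_free, i_bound / i_free)
```

Plotting (centroid shift, integrated intensity) pairs for many subregions reproduces the kind of 2D scatter plot the authors use to compare relative analyte abundance across cells.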

  19. An approach to estimate spatial distribution of analyte within cells using spectrally-resolved fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Sharma, Dharmendar Kumar; Irfanullah, Mir; Basu, Santanu Kumar; Madhu, Sheri; De, Suman; Jadhav, Sameer; Ravikanth, Mangalampalli; Chowdhury, Arindam

    2017-03-01

    While fluorescence microscopy has become an essential tool amongst chemists and biologists for the detection of various analyte within cellular environments, non-uniform spatial distribution of sensors within cells often restricts extraction of reliable information on relative abundance of analytes in different subcellular regions. As an alternative to existing sensing methodologies such as ratiometric or FRET imaging, where relative proportion of analyte with respect to the sensor can be obtained within cells, we propose a methodology using spectrally-resolved fluorescence microscopy, via which both the relative abundance of sensor as well as their relative proportion with respect to the analyte can be simultaneously extracted for local subcellular regions. This method is exemplified using a BODIPY sensor, capable of detecting mercury ions within cellular environments, characterized by spectral blue-shift and concurrent enhancement of emission intensity. Spectral emission envelopes collected from sub-microscopic regions allowed us to compare the shift in transition energies as well as integrated emission intensities within various intracellular regions. Construction of a 2D scatter plot using spectral shifts and emission intensities, which depend on the relative amount of analyte with respect to sensor and the approximate local amounts of the probe, respectively, enabled qualitative extraction of relative abundance of analyte in various local regions within a single cell as well as amongst different cells. Although the comparisons remain semi-quantitative, this approach involving analysis of multiple spectral parameters opens up an alternative way to extract spatial distribution of analyte in heterogeneous systems. 
The proposed method would be especially relevant for fluorescent probes that undergo relatively nominal shift in transition energies compared to their emission bandwidths, which often restricts their usage for quantitative ratiometric imaging in cellular media due to strong cross-talk between energetically separated detection channels. Dedicated to Professor Kankan Bhattacharyya.

  20. Use of EO-1 Hyperion data to calculate spectral band adjustment factors (SBAF) between the L7 ETM+ and Terra MODIS sensors

    USGS Publications Warehouse

    Chander, Gyanesh; Mishra, N.; Helder, Dennis L.; Aaron, David; Choi, T.; Angal, A.; Xiong, X.

    2010-01-01

    Different applications and technology developments in Earth observations necessarily require different spectral coverage. Thus, even for the spectral bands designed to look at the same region of the electromagnetic spectrum, the relative spectral responses (RSR) of different sensors may be different. In this study, spectral band adjustment factors (SBAF) are derived using hyperspectral Earth Observing-1 (EO-1) Hyperion measurements to adjust for the spectral band differences between the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) and the Terra Moderate Resolution Imaging Spectroradiometer (MODIS) top-of-atmosphere (TOA) reflectance measurements from 2000 to 2009 over the pseudo-invariant Libya 4 reference standard test site.
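An SBAF is the ratio of a target's band-averaged reflectance computed with one sensor's RSR to that computed with the other's; a minimal sketch with a hypothetical linear reflectance profile and Gaussian RSRs (stand-ins, not the real ETM+/MODIS responses or a Hyperion spectrum):

```python
import numpy as np

wavelengths = np.arange(400.0, 701.0, 1.0)  # nm

# Hypothetical target TOA reflectance profile (in practice: from Hyperion).
rho = 0.1 + 0.0005 * (wavelengths - 400.0)

def band_average(rho, rsr):
    """RSR-weighted band-averaged reflectance."""
    return np.sum(rho * rsr) / np.sum(rsr)

# Hypothetical Gaussian RSRs for two similar bands with slightly different
# centers and widths (stand-ins for the ETM+ and MODIS responses).
rsr_a = np.exp(-0.5 * ((wavelengths - 620.0) / 25.0) ** 2)
rsr_b = np.exp(-0.5 * ((wavelengths - 640.0) / 15.0) ** 2)

# SBAF adjusts sensor B's measurement to sensor A's band definition.
sbaf = band_average(rho, rsr_a) / band_average(rho, rsr_b)
print(round(sbaf, 4))
```

Multiplying sensor B's TOA reflectance by this factor compensates for the RSR mismatch over the given target spectrum, which is why the SBAF is target-specific.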

  1. Compressive Coded-Aperture Multimodal Imaging Systems

    NASA Astrophysics Data System (ADS)

    Rueda-Chacon, Hoover F.

    Multimodal imaging refers to the framework of capturing images that span different physical domains such as space, spectrum, depth, time, polarization, and others. For instance, spectral images are modeled as 3D cubes with two spatial and one spectral coordinate. Three-dimensional cubes spanning just the space domain are referred to as depth volumes. Imaging cubes varying in time, spectra, or depth are referred to as 4D images. Nature itself spans different physical domains; thus, imaging our real world demands capturing information in at least six different domains simultaneously, giving rise to 3D-spatial+spectral+polarized dynamic sequences. Conventional imaging devices, however, can capture dynamic sequences with up to three spectral channels in real time through the use of color sensors. Capturing multiple spectral channels requires scanning methodologies, which demand long acquisition times. In general, to-date multimodal imaging requires a sequence of different imaging sensors, placed in tandem, to simultaneously capture the different physical properties of a scene; different fusion techniques are then employed to merge all the individual information into a single image. Therefore, new ways to efficiently capture more than three spectral channels of 3D time-varying spatial information, in a single sensor or a few sensors, are of high interest. Compressive spectral imaging (CSI) is an imaging framework that seeks to optimally capture spectral imagery (tens of spectral channels of 2D spatial information) using fewer measurements than required by traditional sensing procedures that follow Shannon-Nyquist sampling. Instead of capturing direct one-to-one representations of natural scenes, CSI systems acquire linear random projections of the scene and then solve an optimization algorithm to estimate the 3D spatio-spectral data cube by exploiting the theory of compressive sensing (CS).
To date, the coding procedure in CSI has been realized through the use of "block-unblock" coded apertures, commonly implemented as chrome-on-quartz photomasks. These apertures either block or transmit the entire spectrum from the scene at given spatial locations, thus modulating the spatial characteristics of the scene. In its first part, this thesis aims to expand the framework of CSI by replacing the traditional block-unblock coded apertures with patterned optical filter arrays, referred to as "color" coded apertures. These apertures are formed by tiny pixelated optical filters, which in turn allow the input image to be modulated not only spatially but also spectrally, enabling more powerful coding strategies. The proposed colored coded apertures are either synthesized through linear combinations of low-pass, high-pass, and band-pass filters, paired with binary pattern ensembles realized by a digital micromirror device (DMD), or experimentally realized through thin-film color-patterned filter arrays. The optical forward model of the proposed CSI architectures is presented along with the design and proof-of-concept implementations, which achieve noticeable improvements in reconstruction quality compared with conventional block-unblock coded-aperture-based CSI architectures. On another front, because of the rich information contained in the infrared spectrum as well as the depth domain, this thesis explores multimodal imaging by extending the sensitivity range of current CSI systems to a dual-band visible+near-infrared spectral domain, and it also proposes, for the first time, a new imaging device that captures 4D data cubes (2D spatial + 1D spectral + depth) in as few as a single snapshot. Owing to the snapshot advantage of this camera, video sequences are possible, enabling the joint capture of 5D imagery. The aim is to create super-human sensing that will enable the perception of our world in new and exciting ways.
With this, we intend to advance the state of the art in compressive sensing systems that extract depth while accurately capturing spatial and spectral material properties. The applications of such a sensor are self-evident in fields such as computer and robotic vision, because it would allow an artificial intelligence to make informed decisions not only about the location of objects within a scene but also about their material properties.
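    The CSI acquisition described above can be illustrated numerically. The following is a minimal, hypothetical NumPy sketch (synthetic data, my own variable names, and no disperser or reconstruction step, so it is not the thesis's actual optical model): a ``color" coded aperture applies a per-pixel binary spectral code to a toy data cube, and a single detector frame integrates the coded spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, L = 8, 8, 16               # tiny spatio-spectral cube: 8x8 pixels, 16 bands
X = rng.random((H, W, L))        # synthetic scene (stand-in data)

# "Color" coded aperture: each spatial location carries its own binary
# on/off code across the L spectral bands.
C = rng.integers(0, 2, size=(H, W, L)).astype(float)

# Snapshot measurement: the detector integrates the coded spectrum at each
# pixel, so a single 2D frame (H*W values) encodes the H*W*L-sample cube.
y = (C * X).sum(axis=2)
```

    The single frame holds H*W values, far fewer than the H*W*L samples that Nyquist-rate acquisition of the cube would require; in a real CSI system the cube is then estimated from y by a sparse-recovery algorithm.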

  2. A compact bio-inspired visible/NIR imager for image-guided surgery (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Gao, Shengkui; Garcia, Missael; Edmiston, Chris; York, Timothy; Marinov, Radoslav; Mondal, Suman B.; Zhu, Nan; Sudlow, Gail P.; Akers, Walter J.; Margenthaler, Julie A.; Liang, Rongguang; Pepino, Marta; Achilefu, Samuel; Gruev, Viktor

    2016-03-01

    Inspired by the visual system of the morpho butterfly, we have designed, fabricated, tested and clinically translated an ultra-sensitive, lightweight and compact imaging sensor capable of simultaneously capturing near infrared (NIR) and visible spectrum information. The visual system of the morpho butterfly combines photosensitive cells with spectral filters at the receptor level. The spectral filters are realized by alternating layers of high and low dielectric constant, such as air and cytoplasm. We have successfully mimicked this concept by integrating pixelated spectral filters, realized by alternating silicon dioxide and silicon nitride layers, with an array of CCD detectors. There are four different types of pixelated spectral filters in the imaging plane: red, green, blue and NIR. The high optical density (OD > 4) of all spectral filters allows efficient rejection of photons from unwanted bands. The single imaging chip weighs 20 grams with a form factor of 5 mm by 5 mm. The imaging camera is integrated with a goggle display system. A tumor-targeted agent, LS301, is used to identify all spontaneous tumors in a transgenic PyMT murine model of breast cancer. The imaging system achieved a sensitivity of 98% and a selectivity of 95%. We also used our imaging sensor to locate sentinel lymph nodes (SLNs) in patients with breast cancer using an indocyanine green tracer. The surgeon was able to identify 100% of SLNs when using our bio-inspired imaging system, compared to 93% when using information from the lymphotropic dye and 96% when using information from the radioactive tracer.

  3. Satellite image fusion based on principal component analysis and high-pass filtering.

    PubMed

    Metwalli, Mohamed R; Nasr, Ayman H; Allah, Osama S Farag; El-Rabaie, S; Abd El-Samie, Fathi E

    2010-06-01

    This paper presents an integrated method for the fusion of satellite images. Several commercial earth observation satellites carry dual-resolution sensors, which provide high spatial resolution or simply high-resolution (HR) panchromatic (pan) images and low-resolution (LR) multi-spectral (MS) images. Image fusion methods are therefore required to integrate a high-spectral-resolution MS image with a high-spatial-resolution pan image to produce a pan-sharpened image with high spectral and spatial resolutions. Some image fusion methods such as the intensity, hue, and saturation (IHS) method, the principal component analysis (PCA) method, and the Brovey transform (BT) method provide HR MS images, but with low spectral quality. Another family of image fusion methods, such as the high-pass-filtering (HPF) method, operates on the basis of the injection of high frequency components from the HR pan image into the MS image. This family of methods provides less spectral distortion. In this paper, we propose the integration of the PCA method and the HPF method to provide a pan-sharpened MS image with superior spatial resolution and less spectral distortion. The experimental results show that the proposed fusion method retains the spectral characteristics of the MS image and, at the same time, improves the spatial resolution of the pan-sharpened image.
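    The HPF injection idea in the abstract above can be sketched in a few lines. This is a schematic illustration under my own assumptions (a simple box filter as the low-pass kernel, nearest-neighbor upsampling, hypothetical function names), not the authors' exact method:

```python
import numpy as np

def box_blur(img, k=5):
    """Mean filter via a padded sliding-window sum (odd k)."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def hpf_pansharpen(ms, pan, ratio):
    """Inject the pan image's high-frequency detail into each MS band."""
    detail = pan - box_blur(pan)              # high-pass component of pan
    bands = []
    for band in ms:                           # ms: (bands, h, w) at low resolution
        up = np.kron(band, np.ones((ratio, ratio)))  # nearest-neighbor upsampling
        bands.append(up + detail)             # spatial detail injection
    return np.stack(bands)

# toy example: 4-band MS image at 4x lower resolution than the pan image
pan = np.random.default_rng(0).random((16, 16))
ms = np.random.default_rng(1).random((4, 4, 4))
sharp = hpf_pansharpen(ms, pan, ratio=4)
```

    Because only the high frequencies of the pan image are injected, the low-frequency (spectral) content of each MS band is preserved, which is why this family of methods shows less spectral distortion than component-substitution methods such as IHS or PCA alone.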

  4. Study on Remote Sensing Image Characteristics of Ecological Land: Case Study of Original Ecological Land in the Yellow River Delta

    NASA Astrophysics Data System (ADS)

    An, G. Q.

    2018-04-01

    Taking the Yellow River Delta as an example, this paper studies the remote sensing image characteristics of the dominant ecological land use types, compares the advantages and disadvantages of different images for interpreting ecological land use, and uses the results to analyse the changing trend of ecological land in the study area over the past 30 years. The main methods include the comparison of multi-period images from different sensors and seasons, spectral curves, vegetation indices, GIS, and data analysis. The results show that the main ecological land types in the Yellow River Delta are coastal beaches, saline-alkali land, and water bodies, which have relatively distinct spectral and texture features. The spectral signature of coastal beaches shows absorption in the green band and reflection in the red band, a feature little affected by acquisition year, season, or sensor type. Because saline-alkali land supports salt-tolerant vegetation such as alkali tent and Tamarix, its spectral characteristics vary seasonally: the NDVI in winter and spring is lower than in summer and autumn. The spectral reflectance of a water body generally decreases rapidly with increasing wavelength, while reflectance in the red band increases with sediment concentration. In conclusion, exploiting the spectral characteristics and image texture features of the ecological land in the Yellow River Delta can improve the accuracy of image interpretation for such land.

  5. A practical approach to spectral calibration of short wavelength infrared hyper-spectral imaging systems

    NASA Astrophysics Data System (ADS)

    Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan

    2010-02-01

    Near-infrared spectroscopy is a promising, rapidly developing, reliable and noninvasive technique, used extensively in biomedicine and in the pharmaceutical industry. With the introduction of acousto-optic tunable filters (AOTF) and highly sensitive InGaAs focal plane sensor arrays, real-time high-resolution hyper-spectral imaging has become feasible for a number of new biomedical in vivo applications. However, due to the specificity of AOTF technology and the lack of spectral calibration standardization, maintaining long-term stability and compatibility of the acquired hyper-spectral images across different systems is still a challenging problem. Solving both efficiently is essential, as the majority of methods for the analysis of hyper-spectral images rely on a priori knowledge extracted from large spectral databases, which serve as the basis for reliable qualitative or quantitative analysis of various biological samples. In this study, we propose and evaluate fast and reliable spectral calibration of hyper-spectral imaging systems in the short wavelength infrared spectral region. The proposed spectral calibration method is based on light sources or materials exhibiting distinct spectral features, which enable robust non-rigid registration of the acquired spectra. The calibration accounts for all components of a typical hyper-spectral imaging system, such as the AOTF, light source, lens and optical fibers. The obtained results indicate that practical, fast and reliable spectral calibration of hyper-spectral imaging systems is possible, thereby assuring long-term stability and inter-system compatibility of the acquired hyper-spectral images.

  6. Electron-bombarded CCD detectors for ultraviolet atmospheric remote sensing

    NASA Technical Reports Server (NTRS)

    Carruthers, G. R.; Opal, C. B.

    1983-01-01

    Electronic image sensors based on charge coupled devices operated in electron-bombarded mode, yielding real-time, remote-readout, photon-limited UV imaging capability, are being developed. The sensors also incorporate fast-focal-ratio Schmidt optics and opaque photocathodes, giving nearly the ultimate possible diffuse-source sensitivity. They can be used for direct imagery of atmospheric emission phenomena and for imaging spectrography with moderate spatial and spectral resolution. The current state of instrument development, laboratory results, planned future developments, and proposed applications of the sensors in space flight instrumentation are described.

  7. Comparison of NDVI fields obtained from different remote sensors

    NASA Astrophysics Data System (ADS)

    Escribano Rodriguez, Juan; Alonso, Carmelo; Tarquis, Ana Maria; Benito, Rosa Maria; Hernandez Díaz-Ambrona, Carlos

    2013-04-01

    Satellite image data have become an important source of information for monitoring vegetation and mapping land cover at several scales. Besides this, the distribution and phenology of vegetation are largely associated with climate, terrain characteristics and human activity. Various vegetation indices have been developed for the qualitative and quantitative assessment of vegetation using remote spectral measurements. In particular, sensors with spectral bands in the red (RED) and near-infrared (NIR) lend themselves well to vegetation monitoring, and the Normalized Difference Vegetation Index, NDVI = (NIR - RED) / (NIR + RED), has been widely used. Given that the characteristics of the RED and NIR spectral bands vary distinctly from sensor to sensor, NDVI values based on data from different instruments are not directly comparable. The spatial resolution also varies significantly between sensors, as well as within a given scene in the case of wide-angle and oblique sensors. As a result, NDVI values vary according to the combination of the heterogeneity and scale of terrestrial surfaces and the pixel footprint size. Therefore, the question arises as to the impact of differences in spectral and spatial resolution on vegetation indices like the NDVI and on their interpretation as a drought index. During 2012, three locations (at Salamanca, Granada and Córdoba) were selected, and periodic pasture monitoring and botanical composition surveys were carried out. Daily precipitation, temperature and monthly soil water content were measured, as well as fresh and dry pasture weight. At the same time, remote sensing images of the chosen sites were captured by DEIMOS-1 and MODIS. DEIMOS-1 is based on the Microsat-100 concept from Surrey. It is conceived for obtaining Earth images with a resolution good enough to study the terrestrial vegetation cover (20x20 m), yet with a wide field of view (600 km) so as to obtain those images with high temporal resolution and at a reduced cost. By contrast, MODIS images have a much lower spatial resolution (500x500 m). The aim of this study is to compare the NDVI values of these two sensors at their different spatial resolutions. Acknowledgements. This work was partially supported by ENESA under project P10 0220C-823. Funding provided by the Spanish Ministerio de Ciencia e Innovación (MICINN) through project no. MTM2009-14621 and i-MATH no. CSD2006-00032 is greatly appreciated.
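    The NDVI itself is a simple per-pixel band ratio. A minimal NumPy version (the `eps` guard is my addition, to avoid division by zero on dark pixels):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - RED) / (NIR + RED), computed per pixel."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# dense vegetation reflects strongly in NIR -> NDVI ~ 0.667
v_dense = ndvi([0.50], [0.10])
# sparse or stressed vegetation -> NDVI near zero (~ 0.077 here)
v_sparse = ndvi([0.35], [0.30])
```

    Because the index depends on the exact RED and NIR band definitions, the same function applied to different sensors' bands gives values that are not directly comparable, which is the point the abstract raises.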

  8. Measurement of wave-front aberration in a small telescope remote imaging system using scene-based wave-front sensing

    DOEpatents

    Poyneer, Lisa A; Bauman, Brian J

    2015-03-31

    Reference-free compensated imaging makes an estimation of the Fourier phase of a series of images of a target. The Fourier magnitude of the series of images is obtained by dividing the power spectral density of the series of images by an estimate of the power spectral density of atmospheric turbulence from a series of scene based wave front sensor (SBWFS) measurements of the target. A high-resolution image of the target is recovered from the Fourier phase and the Fourier magnitude.

  9. Characterisation methods for the hyperspectral sensor HySpex at DLR's calibration home base

    NASA Astrophysics Data System (ADS)

    Baumgartner, Andreas; Gege, Peter; Köhler, Claas; Lenhard, Karim; Schwarzmaier, Thomas

    2012-09-01

    The German Aerospace Center's (DLR) Remote Sensing Technology Institute (IMF) operates a laboratory for the characterisation of imaging spectrometers. Originally designed as the Calibration Home Base (CHB) for the imaging spectrometer APEX, the laboratory can be used to characterise nearly any airborne hyperspectral system. Characterisation methods are demonstrated exemplarily with HySpex, an airborne imaging spectrometer system from Norsk Elektro Optikk A/S (NEO). Consisting of two separate devices (VNIR-1600 and SWIR-320me), the setup covers the spectral range from 400 nm to 2500 nm. Both airborne sensors have been characterised at NEO, including measurement of spectral and spatial resolution and misregistration, polarisation sensitivity, signal-to-noise ratios and the radiometric response. The same parameters have been examined at the CHB and were used to validate the NEO measurements. Additionally, the line spread functions (LSF) in the across- and along-track directions and the spectral response functions (SRF) for certain detector pixels were measured. The high degree of lab automation allows the determination of the SRFs and LSFs for a large number of sampling points. Even so, measuring these functions for every detector element would be too time-consuming, as typical detectors have on the order of 10^5 elements; with enough sampling points, however, it is possible to interpolate the attributes of the remaining pixels. Knowledge of these properties for every detector element allows the quantification of spectral and spatial misregistration (smile and keystone) and a better calibration of airborne data. Further laboratory measurements are used to validate the models for the spectral and spatial properties of the imaging spectrometers. Compared to the future German spaceborne hyperspectral imager EnMAP, the HySpex sensors have the same or higher spectral and spatial resolution. Therefore, airborne data will be used to prepare for and validate the spaceborne system's data.

  10. Higher resolution satellite remote sensing and the impact on image mapping

    USGS Publications Warehouse

    Watkins, Allen H.; Thormodsgard, June M.

    1987-01-01

    Recent advances in spatial, spectral, and temporal resolution of civil land remote sensing satellite data are presenting new opportunities for image mapping applications. The U.S. Geological Survey's experimental satellite image mapping program is evolving toward larger scale image map products with increased information content as a result of improved image processing techniques and increased resolution. Thematic mapper data are being used to produce experimental image maps at 1:100,000 scale that meet established U.S. and European map accuracy standards. The availability of high quality, cloud-free, 30-meter ground resolution multispectral data from the Landsat thematic mapper sensor, along with 10-meter ground resolution panchromatic and 20-meter ground resolution multispectral data from the recently launched French SPOT satellite, presents new cartographic and image processing challenges. The need to fully exploit these higher resolution data increases the complexity of processing the images into large-scale image maps. The removal of radiometric artifacts and noise prior to geometric correction can be accomplished by using a variety of image processing filters and transforms. Sensor modeling and image restoration techniques allow maximum retention of spatial and radiometric information. An optimum combination of spectral information and spatial resolution can be obtained by merging data from different sensor types. These processing techniques are discussed and examples are presented.

  11. Results of ACTIM: an EDA study on spectral laser imaging

    NASA Astrophysics Data System (ADS)

    Hamoir, Dominique; Hespel, Laurent; Déliot, Philippe; Boucher, Yannick; Steinvall, Ove; Ahlberg, Jörgen; Larsson, Hakan; Letalick, Dietmar; Lutzmann, Peter; Repasi, Endre; Ritt, Gunnar

    2011-11-01

    The European Defence Agency (EDA) launched the Active Imaging (ACTIM) study to investigate the potential of active imaging, especially that of spectral laser imaging. The work included a literature survey, the identification of promising military applications, system analyses, a roadmap and recommendations. Passive multi- and hyper-spectral imaging allows discriminating between materials, but the radiance measured at the sensor is difficult to relate to spectral reflectance because of its dependence on, e.g., solar angle, clouds and shadows. Active spectral imaging, in turn, offers complete control of the illumination, thus eliminating these effects. In addition, it allows observing details at long ranges, seeing through degraded atmospheric conditions, penetrating obscurants (foliage, camouflage, etc.) and retrieving polarization information. When 3D, it is suited to producing numerical terrain models and to performing geometry-based identification. Hence, fusing the strengths of ladar and passive spectral imaging will result in new capabilities. We have identified three main application areas for active imaging, and for spectral active imaging in particular: (1) long range observation for identification, (2) mid-range mapping for reconnaissance, and (3) shorter range perception for threat detection. We present the system analyses performed to confirm the interest, limitations and requirements of spectral active imaging in these three prioritized applications.

  12. Demonstration of Airborne Wide Area Assessment Technologies at Pueblo Precision Bombing Ranges, Colorado. Hyperspectral Imaging, Version 2.0

    DTIC Science & Technology

    2007-09-27

    the spatial and spectral resolution ...variety of geological and vegetation mapping efforts, the Hymap sensor offered the best available combination of spectral and spatial resolution, signal... The limitations of the technology currently relate to spatial and spectral resolution and geo-correction accuracy. Secondly, HSI datasets

  13. Wedge imaging spectrometer: application to drug and pollution law enforcement

    NASA Astrophysics Data System (ADS)

    Elerding, George T.; Thunen, John G.; Woody, Loren M.

    1991-08-01

    The Wedge Imaging Spectrometer (WIS) represents a novel implementation of an imaging spectrometer sensor that is compact and rugged and, therefore, suitable for use in drug interdiction and pollution monitoring activities. With performance characteristics equal to those of comparable conventional imaging spectrometers, it would be capable of detecting and identifying primary and secondary indicators of drug activities and pollution events. In the design, a linear wedge filter is mated to an area array of detectors to achieve two-dimensional sampling of the combined spatial/spectral information passed by the filter. As a result, the need for complex and delicate fore optics is avoided, and the size and weight of the instrument are approximately 50% those of comparable sensors. Spectral bandwidths can be controlled to provide relatively narrow individual bandwidths over a broad spectrum, including all visible and infrared wavelengths. This sensor concept has been under development at the Hughes Aircraft Co. Santa Barbara Research Center (SBRC), and hardware exists in the form of a brassboard prototype that provides 64 spectral bands over the visible and near infrared region (0.4 to 1.0 micrometers). Implementation issues have been examined, and plans have been formulated for packaging the sensor into a test-bed aircraft for demonstration of capabilities. Two specific areas of utility to the drug interdiction problem are identified: (1) detection and classification of narcotic crop growth areas and (2) identification of coca processing sites, cued by the results of broad-area survey and collateral information. Vegetation stress and change-detection processing may also be useful in distinguishing active from dormant airfields. For pollution monitoring, a WIS sensor could provide data with fine spectral and spatial resolution over suspect areas.
On-board or ground processing of the data would isolate the presence of polluting effluents, effects on vegetation caused by airborne or other pollutants, or anomalous ground conditions indicative of buried or dumped toxic materials.

  14. Multispectral simulation environment for modeling low-light-level sensor systems

    NASA Astrophysics Data System (ADS)

    Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.

    1998-11-01

    Image intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios including military night vision and civilian rescue operations. These sensors utilize the available visible region photons and an amplification process to produce high contrast imagery. It has been demonstrated that processing techniques can further enhance the quality of this imagery. For example, fusion with matching thermal IR imagery can improve image content when very little visible region contrast is available. To aid in the improvement of current algorithms and the development of new ones, a high fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low-light-level conditions is desired. This paper describes a modeling environment attempting to meet these criteria by addressing the task as two individual components: (1) prediction of a low-light-level radiance field from an arbitrary scene, and (2) simulation of the output from a low-light-level sensor for a given radiance field. The radiance prediction engine utilized in this environment is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, a first-principles-based multi-spectral synthetic image generation model capable of producing an arbitrary number of bands in the 0.28 to 20 micrometer region. The DIRSIG model is utilized to produce high spatial and spectral resolution radiance field images. These images are then processed by a user-configurable multi-stage low-light-level sensor model that applies the appropriate noise and modulation transfer function (MTF) at each stage in the image processing chain. This includes the ability to reproduce common intensifying sensor artifacts such as saturation and 'blooming.' Additionally, co-registered imagery in other spectral bands may be simultaneously generated for testing fusion and exploitation algorithms. This paper discusses specific aspects of the DIRSIG radiance prediction for low-light-level conditions, including the incorporation of natural and man-made sources, which emphasizes the importance of accurate BRDF modeling. A description of the implementation of each stage in the image processing and capture chain for the LLL model is also presented. Finally, simulated images are presented and qualitatively compared to lab-acquired imagery from a commercial system.
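    The multi-stage sensor chain described above (an MTF and the appropriate noise applied at each stage) can be caricatured in a few lines of NumPy. All constants here are arbitrary stand-ins of my own, not DIRSIG or LLL-model parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
radiance = rng.random((32, 32)) * 1000.0     # stand-in radiance field

# Stage 1: optics/intensifier MTF applied in the frequency domain
f = np.fft.fftfreq(32)
fx, fy = np.meshgrid(f, f)
mtf = np.exp(-(fx ** 2 + fy ** 2) / (2 * 0.1 ** 2))   # Gaussian MTF (assumed)
blurred = np.fft.ifft2(np.fft.fft2(radiance) * mtf).real

# Stage 2: photon (shot) noise, which dominates at low light levels
photons = rng.poisson(np.clip(blurred, 0.0, None) * 0.01)

# Stage 3: intensifier gain followed by detector saturation (clipping)
out = np.clip(photons * 50.0, 0.0, 255.0)
```

    A fuller model would chain additional stages (photocathode response, read noise, A/D quantization) in the same way, each with its own MTF and noise term.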

  15. Automatic Registration of GF4 Pms: a High Resolution Multi-Spectral Sensor on Board a Satellite on Geostationary Orbit

    NASA Astrophysics Data System (ADS)

    Gao, M.; Li, J.

    2018-04-01

    Geometric correction is an important preprocessing step in the application of GF4 PMS imagery. Geometric correction based on the manual selection of control points is time-consuming and laborious. The more common method, based on a reference image, is automatic image registration, which involves several steps and parameters. For the multi-spectral sensor GF4 PMS, it is necessary to identify the best combination of parameters and steps. This study focuses mainly on the following issues: the necessity of Rational Polynomial Coefficients (RPC) correction before automatic registration, the choice of base band for automatic registration, and the configuration of the GF4 PMS spatial resolution.

  16. Spatial, Temporal and Spectral Satellite Image Fusion via Sparse Representation

    NASA Astrophysics Data System (ADS)

    Song, Huihui

    Remote sensing provides good measurements for monitoring and analyzing climate change, ecosystem dynamics, and human activities at global or regional scales. Over the past two decades, the number of launched satellite sensors has been increasing with the development of aerospace technologies and the growing requirements for remote sensing data in a vast number of application fields. However, a key technological challenge confronting these sensors is that they trade off spatial resolution against other properties, including temporal resolution, spectral resolution, and swath width, due to the limitations of hardware technology and budget constraints. To increase the spatial resolution of data while retaining these other properties, one cost-effective solution is to explore data integration methods that fuse multi-resolution data from multiple sensors, thereby enhancing the application capabilities of available remote sensing data. In this thesis, we propose to fuse spatial resolution with temporal resolution and with spectral resolution, respectively, based on sparse representation theory. Taking the study case of Landsat ETM+ (spatial resolution of 30 m, temporal resolution of 16 days) and MODIS (spatial resolution of 250 m to 1 km, daily temporal resolution) reflectance, we propose two spatial-temporal fusion methods to combine the fine spatial information of the Landsat image with the daily temporal resolution of the MODIS image. Motivated by the fact that images from these two sensors are comparable in corresponding bands, we propose to link their spatial information on an available Landsat-MODIS image pair (captured on a prior date) and then predict the Landsat image from its MODIS counterpart on the prediction date. To learn the spatial details from the prior images well, we use a redundant dictionary to extract the basic representation atoms for both Landsat and MODIS images based on sparse representation.
In the scenario of two prior Landsat-MODIS image pairs, we build the correspondence between the difference images of MODIS and ETM+ by training a low- and high-resolution dictionary pair from the given prior image pairs. In the second scenario, i.e., with only one Landsat-MODIS image pair available, we directly correlate MODIS and ETM+ data through an image degradation model. The fusion stage is then achieved by super-resolving the MODIS image, combined with high-pass modulation, in a two-layer fusion framework. Remarkably, the proposed spatial-temporal fusion methods form a unified framework for blending remote sensing images exhibiting phenology change or land-cover-type change. Based on the proposed spatial-temporal fusion models, we propose to monitor land use/land cover changes in Shenzhen, China. As a fast-growing city, Shenzhen faces the problem of detecting rapid changes for both rational city planning and sustainable development. However, the cloudy and rainy weather of the region in which Shenzhen is located makes the acquisition cycle of high-quality satellite images longer than the sensors' normal revisit periods. Spatial-temporal fusion methods can tackle this problem by improving the spatial resolution of images with coarse spatial resolution but frequent temporal coverage, thereby making the detection of rapid changes possible. On two Landsat-MODIS datasets with annual and monthly changes, respectively, we apply the proposed spatial-temporal fusion methods to the task of multiple change detection. Afterward, we propose a novel spatial and spectral fusion method for satellite multispectral and hyperspectral (or high-spectral) images based on dictionary-pair learning and sparse non-negative matrix factorization.
By combining the spectral information from the hyperspectral image, which is characterized by low spatial resolution but high spectral resolution (abbreviated LSHS), with the spatial information from the multispectral image, which features high spatial resolution but low spectral resolution (abbreviated HSLS), this method aims to generate fused data with both high spatial and high spectral resolution. Motivated by the observation that each hyperspectral pixel can be represented by a linear combination of a few endmembers, the method first extracts the spectral bases of the LSHS and HSLS images, making full use of the rich spectral information in the LSHS data. The spectral bases of these two data categories then form a dictionary pair, owing to their correspondence in representing the pixel spectra of the LSHS and HSLS data, respectively. Subsequently, the HSLS image is spatially unmixed against the corresponding learned dictionary to derive its representation coefficients. Combining the spectral bases of the LSHS data with the representation coefficients of the HSLS data, we finally derive fused data characterized by the spectral resolution of the LSHS data and the spatial resolution of the HSLS data.
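    The linear-mixing rationale behind this dictionary-pair fusion can be sketched with synthetic matrices. Plain least squares stands in for the sparse non-negative matrix factorization described above, and the spectral response matrix folding hyperspectral bands into multispectral bands is assumed:

```python
import numpy as np

rng = np.random.default_rng(0)
k, L_hs, L_ms, N = 4, 30, 8, 100   # endmembers, HS bands, MS bands, HR pixels

# Spectral bases (endmembers) as learned from the LSHS image
E_hs = rng.random((L_hs, k))
# Assumed spectral response folding HS bands into MS bands
R = rng.random((L_ms, L_hs))
R /= R.sum(axis=1, keepdims=True)
E_ms = R @ E_hs                    # the same endmembers as seen by the MS sensor

# Synthetic abundances and the observed high-spatial-resolution MS pixels
A_true = rng.dirichlet(np.ones(k), size=N).T      # (k, N), columns sum to 1
Y_ms = E_ms @ A_true

# Unmix the HSLS observations against their dictionary (least squares here,
# in place of the sparse non-negative factorization)
A_est, *_ = np.linalg.lstsq(E_ms, Y_ms, rcond=None)

# Fused product: high spectral resolution (E_hs) x high spatial detail (A_est)
fused = E_hs @ A_est               # shape (L_hs, N)
```

    The fused matrix inherits its band count from the hyperspectral dictionary and its pixel grid from the multispectral abundances, which is exactly the combination the method targets.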

  17. Snapshot spectral and polarimetric imaging; target identification with multispectral video

    NASA Astrophysics Data System (ADS)

    Bartlett, Brent D.; Rodriguez, Mikel D.

    2013-05-01

    As the number of pixels continues to grow in consumer and scientific imaging devices, it has become feasible to collect the incident light field. In this paper, an imaging device built around light field imaging is used to collect multispectral and polarimetric imagery in snapshot fashion. The sensor is described, and a video data set is shown highlighting the advantage of snapshot spectral imaging. Several novel computer vision approaches are applied to the video cubes to perform scene characterization and target identification. It is shown how the addition of spectral and polarimetric data to the video stream allows multi-target identification and tracking not possible with traditional RGB video collection.

  18. High Dynamic Range Spectral Imaging Pipeline For Multispectral Filter Array Cameras.

    PubMed

    Lapray, Pierre-Jean; Thomas, Jean-Baptiste; Gouton, Pierre

    2017-06-03

    Spectral filter array imaging exhibits a strong similarity to color filter array imaging, which permits this technology to be embedded in practical vision systems with little adaptation of existing solutions. In this communication, we define an imaging pipeline, extended from color filter arrays, that permits high dynamic range (HDR) spectral imaging. We propose an implementation of this pipeline on a prototype sensor and evaluate the quality of our results on real data with objective metrics and visual examples. We demonstrate that we reduce noise and, in particular, solve the problem of noise generated by the lack of energy balance. The data are provided to the community in an image database for further research.
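    A common way to realize such an HDR merge is exposure-weighted averaging per band. The sketch below uses a hypothetical hat-shaped weighting and is an illustration of the general approach, not the paper's exact pipeline:

```python
import numpy as np

def hdr_merge(frames, exposures):
    """Estimate radiance from bracketed exposures of one spectral band.

    Mid-range pixel values get the highest weight; near-dark (noisy) and
    near-saturated pixels are down-weighted by a hat function.
    """
    frames = np.asarray(frames, dtype=float)           # (n, H, W), values in [0, 1]
    w = 1.0 - np.abs(2.0 * frames - 1.0)               # hat weight, peak at 0.5
    t = np.asarray(exposures, dtype=float)[:, None, None]
    return (w * frames / t).sum(axis=0) / (w.sum(axis=0) + 1e-9)

# the same scene point seen at two exposure times implies one radiance:
# 0.4 at t=1 and 0.8 at t=2 both correspond to radiance ~0.4
merged = hdr_merge([[[0.4]], [[0.8]]], [1.0, 2.0])
```

    Repeating the merge independently for each band of the spectral filter array addresses the energy-imbalance problem the abstract mentions, since bands that receive less light can lean on their longer exposures.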

  19. Direct Reflectance Measurements from Drones: Sensor Absolute Radiometric Calibration and System Tests for Forest Reflectance Characterization.

    PubMed

    Hakala, Teemu; Markelin, Lauri; Honkavaara, Eija; Scott, Barry; Theocharous, Theo; Nevalainen, Olli; Näsi, Roope; Suomalainen, Juha; Viljanen, Niko; Greenwell, Claire; Fox, Nigel

    2018-05-03

    Drone-based remote sensing has evolved rapidly in recent years. Miniaturized hyperspectral imaging sensors are becoming more common, as they provide richer information about the object than traditional cameras. Reflectance is a physically defined object property and is therefore often the preferred output of remote sensing data capture for use in further processing. Absolute calibration of the sensor enables physical modelling of the imaging process and efficient procedures for reflectance correction. Our objective is to develop a method for direct reflectance measurement in drone-based remote sensing, based on an imaging spectrometer paired with an irradiance spectrometer. This approach is highly attractive for many practical applications, as it does not require in situ reflectance panels for converting sensor radiance to ground reflectance factors. We performed an SI-traceable spectral and radiance calibration of a tuneable Fabry-Pérot interferometer (FPI)-based hyperspectral camera at the National Physical Laboratory (NPL; Teddington, UK). The camera represents novel technology, collecting 2D-format hyperspectral image cubes using a time-sequential spectral scanning principle. The radiance accuracy of the different channels was within ±4% when evaluated using independent test data, and the linearity of the camera response was on average 0.9994. The spectral response calibration showed side peaks on several channels caused by the multiple interference orders of the FPI. The drone-based direct reflectance measurement system showed promising results with imagery collected over Wytham Forest (Oxford, UK).
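    The panel-free radiance-to-reflectance conversion described above reduces, for a Lambertian target, to the standard ratio of upwelling radiance to simultaneously measured downwelling irradiance. A minimal sketch (function name and units are mine):

```python
import numpy as np

def reflectance_factor(radiance, irradiance):
    """Reflectance factor R = pi * L / E for a Lambertian target.

    radiance:   at-sensor radiance L per band    [W m^-2 sr^-1 nm^-1]
    irradiance: downwelling irradiance E per band [W m^-2 nm^-1]
    """
    return np.pi * np.asarray(radiance, float) / np.asarray(irradiance, float)

# one band: L = 0.02, E = 1.0 gives R = pi * 0.02, roughly 0.063
r = reflectance_factor([0.02], [1.0])
```

    This is why the absolute radiometric calibration of both spectrometers matters: any bias in L or E propagates directly into the reflectance factor.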

  20. Direct Reflectance Measurements from Drones: Sensor Absolute Radiometric Calibration and System Tests for Forest Reflectance Characterization

    PubMed Central

    Hakala, Teemu; Scott, Barry; Theocharous, Theo; Näsi, Roope; Suomalainen, Juha; Greenwell, Claire; Fox, Nigel

    2018-01-01

    Drone-based remote sensing has evolved rapidly in recent years. Miniaturized hyperspectral imaging sensors are becoming more common because they provide richer information about the object than traditional cameras. Reflectance is a physically defined object property and is therefore often the preferred output of remote sensing data capture for further processing. Absolute calibration of the sensor enables physical modelling of the imaging process and efficient procedures for reflectance correction. Our objective is to develop a method for direct reflectance measurement in drone-based remote sensing, based on an imaging spectrometer and an irradiance spectrometer. This approach is highly attractive for many practical applications because it does not require in situ reflectance panels for converting sensor radiance to ground reflectance factors. We performed SI-traceable spectral and radiance calibration of a tuneable Fabry-Pérot interferometer (FPI) based hyperspectral camera at the National Physical Laboratory (NPL, Teddington, UK). The camera represents novel technology, collecting 2D-format hyperspectral image cubes using a time-sequential spectral scanning principle. The radiance accuracy of the different channels was within ±4% when evaluated using independent test data, and the linearity of the camera response was on average 0.9994. The spectral response calibration showed side peaks on several channels due to the multiple orders of interference of the FPI. The drone-based direct reflectance measurement system showed promising results with imagery collected over Wytham Forest (Oxford, UK). PMID:29751560

  1. Low SWaP multispectral sensors using dichroic filter arrays

    NASA Astrophysics Data System (ADS)

    Dougherty, John; Varghese, Ron

    2015-06-01

    The benefits of multispectral imaging are well established in a variety of applications, including remote sensing, authentication, satellite and aerial surveillance, machine vision, biomedical, and other scientific and industrial uses. However, many of the potential solutions require more compact, robust, and cost-effective cameras to realize these benefits. The next generation of multispectral sensors and cameras needs to deliver improvements in size, weight, power, portability, and spectral band customization to support widespread deployment in a variety of purpose-built aerial, unmanned, and scientific applications. A novel implementation uses micro-patterning of dichroic filters into Bayer and custom mosaics, enabling true real-time multispectral imaging with simultaneous multi-band image acquisition. As in color image processing, individual spectral channels are de-mosaiced, with each channel providing an image of the field of view. This approach can be implemented across a variety of wavelength ranges and on a variety of detector types, including linear, area, silicon, and InGaAs. The dichroic filter array approach can also reduce payloads and increase range for unmanned systems, with the capability to support both handheld and autonomous systems. Recent examples and results of four-band RGB + NIR dichroic filter arrays in multispectral cameras are discussed. Benefits and tradeoffs of multispectral sensors using dichroic filter arrays are compared with alternative approaches, including their passivity, spectral range, customization options, and scalable production.
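    The mosaic principle above can be made concrete with a minimal sketch. This is not the vendors' de-mosaicing algorithm: it assumes a hypothetical 2x2 filter layout (R, G / NIR, B) and simply extracts each channel as a quarter-resolution image; a real camera would interpolate each channel back to full resolution.

```python
def split_mosaic(frame, layout=(("R", "G"), ("NIR", "B"))):
    """Extract each spectral channel of a 2x2 filter mosaic as a
    quarter-resolution image (list of rows of pixel values)."""
    h, w = len(frame), len(frame[0])
    return {name: [[frame[y + dy][x + dx] for x in range(0, w, 2)]
                   for y in range(0, h, 2)]
            for dy, row in enumerate(layout)
            for dx, name in enumerate(row)}

# Toy 2x4 raw frame covering two mosaic periods
mosaic = [[10, 20, 11, 21],
          [30, 40, 31, 41]]
ch = split_mosaic(mosaic)
# ch["R"] == [[10, 11]], ch["NIR"] == [[30, 31]]
```

All four bands are captured in a single exposure, which is what makes the approach "true real-time": no filter wheel or sequential scan is required.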

  2. TMA optics for HISUI HSS and MSS imagers

    NASA Astrophysics Data System (ADS)

    Rodolfo, J.; Geyl, R.; Leplan, H.; Ruch, E.

    2017-11-01

    Sagem is presently working on the Japanese HISUI instrument, which comprises a Hyperspectral Sensor (HSS) and a Multispectral Sensor (MSS), both built around three-mirror anastigmat (TMA) main optics. Mirrors are made from Schott Zerodur, but also from NTSIC, the new-technology silicon carbide developed in Japan. This report is also an opportunity to show the community Sagem's recent progress in the polishing and alignment of precision TMA optics.

  3. Efficient integration of spectral features for vehicle tracking utilizing an adaptive sensor

    NASA Astrophysics Data System (ADS)

    Uzkent, Burak; Hoffman, Matthew J.; Vodacek, Anthony

    2015-03-01

    Object tracking in urban environments is an important and challenging problem that is traditionally tackled using visible and near-infrared wavelengths. By adding extended data such as the spectral features of the objects, one can improve the reliability of the identification process. However, the huge increase in data created by hyperspectral imaging is usually prohibitive. To overcome this complexity, we propose a persistent air-to-ground target tracking system inspired by a state-of-the-art adaptive, multi-modal sensor. The adaptive sensor is capable of providing panchromatic images as well as the spectra of desired pixels, which addresses the data challenge of hyperspectral tracking by recording spectral data only as needed. Spectral likelihoods are integrated into a data association algorithm in a Bayesian fashion to minimize the likelihood of misidentification. A framework for controlling spectral data collection is developed by incorporating motion segmentation information and prior information from Gaussian sum filter (GSF) movement predictions drawn from a multi-model forecasting set. An intersection mask of the surveillance area is extracted from OpenStreetMap data and incorporated into the tracking algorithm to perform online refinement of the multiple-model set. The proposed system is tested using challenging and realistic scenarios generated in an adverse environment.
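    The Bayesian integration of spectral likelihoods into data association can be sketched as follows. This is a simplified illustration under an independence assumption (kinematic and spectral evidence conditionally independent given the association); the actual tracker's likelihood models are not reproduced here, and the numbers are hypothetical.

```python
def fuse_likelihoods(kinematic, spectral):
    """Combine per-candidate kinematic and spectral likelihoods
    (assumed conditionally independent) and normalize, giving
    posterior association probabilities for one track."""
    joint = [k * s for k, s in zip(kinematic, spectral)]
    total = sum(joint)
    return [j / total for j in joint]

# Hypothetical: three candidate detections competing for one track.
# Kinematics alone slightly favors candidate 0, but its spectrum is
# a poor match; candidate 1 wins once spectra are folded in.
probs = fuse_likelihoods([0.6, 0.3, 0.1], [0.2, 0.7, 0.1])
```

Because spectra are requested only for ambiguous candidates, the sensor pays the hyperspectral data cost only where the kinematic evidence is inconclusive.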

  4. Real-time detection of natural objects using AM-coded spectral matching imager

    NASA Astrophysics Data System (ADS)

    Kimachi, Akira

    2004-12-01

    This paper describes the application of the amplitude-modulation (AM)-coded spectral matching imager (SMI) to real-time detection of natural objects such as human beings, animals, vegetables, or geological objects or phenomena, which are much more liable to change with time than artificial products, while often exhibiting characteristic spectral functions associated with specific activity states. The AM-SMI produces the correlation between the spectral functions of the object and a reference at each pixel of the correlation image sensor (CIS) in every frame, based on orthogonal amplitude modulation of each spectral channel and simultaneous demodulation of all channels on the CIS. This principle makes the SMI suitable for monitoring the dynamic behavior of natural objects in real time by looking at a particular spectral reflectance or transmittance function. A twelve-channel multispectral light source was developed, with improved spatial uniformity of spectral irradiance compared to a previous one. Experimental results of spectral matching imaging of human skin and vegetable leaves are demonstrated, as well as a preliminary feasibility test of imaging a reflective object using a test color chart.
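    The orthogonal-AM idea can be demonstrated numerically. This is a toy sketch, not the CIS hardware model: three spectral channels are modulated with cosine carriers at distinct integer frequencies (orthogonal over one frame of N samples), superposed into a single per-pixel intensity signal, and then separated again by correlation with each carrier. The channel values are hypothetical.

```python
import math

N = 64              # temporal samples per frame
freqs = [2, 3, 5]   # distinct integer carrier frequencies (orthogonal over N)

def carrier(f, t):
    return math.cos(2 * math.pi * f * t / N)

s = [0.8, 0.1, 0.4]  # hypothetical per-pixel values in three spectral channels

# Light reaching the pixel: each channel amplitude-modulated by its carrier
signal = [sum(sk * carrier(f, t) for sk, f in zip(s, freqs))
          for t in range(N)]

# Demodulation: correlating with each carrier isolates that channel
# (factor 2/N because the mean of cos^2 over a full period is 1/2)
recovered = [2 / N * sum(x * carrier(f, t) for t, x in enumerate(signal))
             for f in freqs]
```

Dotting the recovered channel values with a reference spectrum then yields the per-pixel spectral correlation that the SMI computes in every frame.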

  5. Real-time detection of natural objects using AM-coded spectral matching imager

    NASA Astrophysics Data System (ADS)

    Kimachi, Akira

    2005-01-01

    This paper describes the application of the amplitude-modulation (AM)-coded spectral matching imager (SMI) to real-time detection of natural objects such as human beings, animals, vegetables, or geological objects or phenomena, which are much more liable to change with time than artificial products, while often exhibiting characteristic spectral functions associated with specific activity states. The AM-SMI produces the correlation between the spectral functions of the object and a reference at each pixel of the correlation image sensor (CIS) in every frame, based on orthogonal amplitude modulation of each spectral channel and simultaneous demodulation of all channels on the CIS. This principle makes the SMI suitable for monitoring the dynamic behavior of natural objects in real time by looking at a particular spectral reflectance or transmittance function. A twelve-channel multispectral light source was developed, with improved spatial uniformity of spectral irradiance compared to a previous one. Experimental results of spectral matching imaging of human skin and vegetable leaves are demonstrated, as well as a preliminary feasibility test of imaging a reflective object using a test color chart.

  6. Solid state image sensing arrays

    NASA Technical Reports Server (NTRS)

    Sadasiv, G.

    1972-01-01

    The fabrication of a photodiode-transistor image sensor array in silicon and tests on individual elements of the array are described, along with the design of a scanning system for an image sensor array. The spectral response of p-n junctions was used as a technique for studying the optical absorption edge in silicon. Heterojunction structures of Sb2S3-Si were fabricated, and a system for measuring C-V curves on MOS structures was built.

  7. Onboard Processor for Compressing HSI Data

    NASA Technical Reports Server (NTRS)

    Cook, Sid; Harsanyi, Joe; Day, John H. (Technical Monitor)

    2002-01-01

    With EO-1 Hyperion and MightySat in orbit, NASA and the DoD are showing their continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost-effective communication and data handling systems. Lockheed Martin, with considerable experience in spacecraft design and special-purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), which has an extensive heritage in HSI, to develop a real-time, intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor greater than 100 while retaining the spectral fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our initial spectral compression experiments leverage commercial off-the-shelf (COTS) spectral exploitation algorithms for segmentation, material identification, and spectral compression that ASIT has developed. ASIT will also support the modification and integration of this COTS software into the OBP. Other commercially available COTS software for spatial compression will be employed as part of the overall compression processing sequence. Over the next year, elements of a high-performance reconfigurable OBP will be developed to implement proven preprocessing steps that distill the HSI data stream in both the spectral and spatial dimensions. The system will intelligently reduce the volume of data that must be stored, transmitted to the ground, and processed while minimizing the loss of information.
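    To see how a >100x reduction is plausible, consider one common segmentation-based scheme (an assumed illustration, not the ASIT algorithms): classify each pixel's spectrum against a small set of class-mean spectra, then downlink only the per-pixel class index plus one mean spectrum per class.

```python
def classify(spectrum, means):
    """Assign a pixel spectrum to the nearest class mean
    (squared Euclidean distance)."""
    return min(range(len(means)),
               key=lambda c: sum((a - b) ** 2
                                 for a, b in zip(spectrum, means[c])))

def compression_ratio(n_pixels, n_bands, n_classes,
                      bits_sample=16, bits_index=8):
    """Raw cube size vs. (index map + class spectra) size, in bits."""
    raw = n_pixels * n_bands * bits_sample
    coded = n_pixels * bits_index + n_classes * n_bands * bits_sample
    return raw / coded

# Hypothetical Hyperion-like cube: 220 bands, 1 Mpixel, 64 spectral classes
ratio = compression_ratio(1_000_000, 220, 64)   # well over 100x
```

The scheme is lossy in exactly the controlled sense the abstract describes: spatial detail is preserved pixel by pixel, while each spectrum is replaced by its class representative.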

  8. Enhancing Spatial Resolution of Remotely Sensed Imagery Using Deep Learning

    NASA Astrophysics Data System (ADS)

    Beck, J. M.; Bridges, S.; Collins, C.; Rushing, J.; Graves, S. J.

    2017-12-01

    Researchers at the Information Technology and Systems Center at the University of Alabama in Huntsville are using Deep Learning with Convolutional Neural Networks (CNNs) to develop a method for enhancing the spatial resolutions of moderate resolution (10-60m) multispectral satellite imagery. This enhancement will effectively match the resolutions of imagery from multiple sensors to provide increased global temporal-spatial coverage for a variety of Earth science products. Our research is centered on using Deep Learning for automatically generating transformations for increasing the spatial resolution of remotely sensed images with different spatial, spectral, and temporal resolutions. One of the most important steps in using images from multiple sensors is to transform the different image layers into the same spatial resolution, preferably the highest spatial resolution, without compromising the spectral information. Recent advances in Deep Learning have shown that CNNs can be used to effectively and efficiently upscale or enhance the spatial resolution of multispectral images with the use of an auxiliary data source such as a high spatial resolution panchromatic image. In contrast, we are using both the spatial and spectral details inherent in low spatial resolution multispectral images for image enhancement without the use of a panchromatic image. This presentation will discuss how this technology will benefit many Earth Science applications that use remotely sensed images with moderate spatial resolutions.

  9. Image quality measures to assess hyperspectral compression techniques

    NASA Astrophysics Data System (ADS)

    Lurie, Joan B.; Evans, Bruce W.; Ringer, Brian; Yeates, Mathew

    1994-12-01

    The term 'multispectral' is used to describe imagery with anywhere from three to about 20 bands of data. The images acquired by Landsat and similar earth-sensing satellites, including the French SPOT platform, are typical examples of multispectral data sets. Applications range from crop observation and yield estimation, to forestry, to sensing of the environment. The wave bands typically range from the visible to the thermal infrared and are fractions of a micron wide; they may or may not be contiguous. Thus each pixel has several spectral intensities associated with it, but detailed spectra are not obtained. The term 'hyperspectral' is typically used for spectral data encompassing hundreds of samples of a spectrum. Hyperspectral electro-optical sensors typically operate in the visible and near-infrared bands. Their characteristic property is the ability to resolve a large number (typically hundreds) of contiguous spectral bands, thus producing a detailed profile of the electromagnetic spectrum. Like multispectral sensors, recently developed hyperspectral sensors are often also imaging sensors, measuring spectra over a two-dimensional spatial array of picture elements, or pixels. The resulting data is thus inherently three dimensional: an array of samples in which two dimensions correspond to spatial position and the third to wavelength. The data sets, commonly referred to as image cubes or datacubes (although technically they are often rectangular solids), are very rich in information but quickly become unwieldy in size, generating formidable torrents of data. Both spaceborne and airborne hyperspectral cameras exist and are in use today. The data is unique in its ability to provide high spatial and spectral resolution simultaneously, and shows great promise in both military and civilian applications. A data analysis system has been built at TRW under a series of Internal Research and Development projects. This development has been prompted by business opportunities, by the series of instruments built at TRW, and by the availability of data from other instruments. The products of the processing system have been used to process data produced by TRW sensors and other instruments. Figure 1 provides an overview of the TRW hyperspectral collection, data handling, and exploitation capability. The Analysis and Exploitation functions deal with the digitized image cubes. The analysis system was designed to handle various types of data, but the emphasis was on the data acquired by the TRW instruments.
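    The image-cube structure described above is easy to make concrete: a cube is a 3-D array indexed by (row, column, band), a pixel's spectrum is a slice along the band axis, and a band image is a slice across the two spatial axes. A toy example with made-up values:

```python
rows, cols, bands = 2, 2, 4   # toy sizes; real cubes have hundreds of bands
# Synthetic cube: value at (r, c, b) is just r + c + b
cube = [[[r + c + b for b in range(bands)]
         for c in range(cols)]
        for r in range(rows)]

spectrum = cube[1][0]                        # all bands at pixel (1, 0)
band_img = [[cube[r][c][2] for c in range(cols)]
            for r in range(rows)]            # spatial image of band 2
```

The "torrent of data" follows directly from the indexing: a cube's size is rows x cols x bands samples, so adding hundreds of bands multiplies the data volume of an ordinary image by hundreds.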

  10. Variational optical flow estimation for images with spectral and photometric sensor diversity

    NASA Astrophysics Data System (ADS)

    Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin

    2015-03-01

    Motion estimation of objects in image sequences is an essential computer vision task. To this end, optical flow methods compute pixel-level motion, with the purpose of providing low-level input to higher-level algorithms and applications. Robust flow estimation is crucial for the success of applications, which in turn depends on the quality of the captured image data. This work explores the use of sensor diversity in the image data within a framework for variational optical flow. In particular, a custom image sensor setup intended for vehicle applications is tested. Experimental results demonstrate the improved flow estimation performance when IR sensitivity or flash illumination is added to the system.

  11. Multispectral atmospheric mapping sensor of mesoscale water vapor features

    NASA Technical Reports Server (NTRS)

    Menzel, P.; Jedlovec, G.; Wilson, G.; Atkinson, R.; Smith, W.

    1985-01-01

    The Multispectral Atmospheric Mapping Sensor was checked out for specified spectral response and detector noise performance in the eight visible and three infrared (6.7, 11.2, 12.7 micron) spectral bands. A calibration algorithm was implemented for the infrared detectors. Engineering checkout flights on board the ER-2 produced imagery at 50 m resolution in which water vapor features in the 6.7 micron spectral band are most striking. These images were analyzed on the Man-computer Interactive Data Access System (McIDAS). Ground truth and ancillary data were accessed to verify the calibration.

  12. The Portable Remote Imaging Spectrometer (PRISM) Coastal Ocean Sensor

    NASA Technical Reports Server (NTRS)

    Mouroulis, Pantazis; VanGorp, Byron E.; Green, Robert O.; Eastwood, Michael; Wilson, Daniel W.; Richardson, Brandon; Dierssen, Heidi

    2012-01-01

    PRISM is an airborne pushbroom imaging spectrometer intended to address the needs of airborne coastal ocean science research. Its critical characteristics are high throughput and signal-to-noise ratio, high uniformity of response to reduce spectral artifacts, and low polarization sensitivity. We give a brief overview of the instrument and results from laboratory calibration measurements regarding the spatial, spectral, radiometric and polarization characteristics.

  13. Time Series of Images to Improve Tree Species Classification

    NASA Astrophysics Data System (ADS)

    Miyoshi, G. T.; Imai, N. N.; de Moraes, M. V. A.; Tommaselli, A. M. G.; Näsi, R.

    2017-10-01

    Tree species classification provides valuable information for forest monitoring and management. The high floristic variation of tree species is a challenging issue in tree species classification because vegetation characteristics change with the seasons. To help monitor this complex environment, imaging spectroscopy has been widely applied since the development of miniaturized sensors carried on unmanned aerial vehicles (UAVs). Considering the seasonal changes in forests and the higher spectral and spatial resolution acquired with UAV-borne sensors, we present the use of a time series of images to classify four tree species. The study area is an Atlantic Forest area located in the western part of São Paulo State. Images were acquired in August 2015 and August 2016, generating three data sets: the image spectra of 2015 only; the image spectra of 2016 only; and the layer stacking of the images from 2015 and 2016. Four tree species were classified using the spectral angle mapper (SAM), spectral information divergence (SID), and random forest (RF). The results showed that SAM and SID caused an overfitting of the data, whereas RF showed better results, and the use of the layer stacking improved the classification, achieving a kappa coefficient of 18.26 %.
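    The spectral angle mapper used above has a simple definition: the angle between a pixel spectrum and a reference spectrum, treated as vectors. Because the angle ignores overall scaling, SAM is insensitive to illumination differences, which is part of its appeal for field spectra. A minimal sketch (toy spectra, not the study's data):

```python
import math

def spectral_angle(a, b):
    """Angle in radians between spectra a and b; a small angle means a
    good match, independent of any multiplicative illumination factor."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    # Clamp guards against floating-point values just outside [-1, 1]
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

# A scaled copy of the same spectrum maps to (near) zero angle
angle_same = spectral_angle([1, 2, 3], [2, 4, 6])
```

Classification then assigns each pixel to the reference species spectrum with the smallest angle.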

  14. Multispectral Filter Arrays: Recent Advances and Practical Implementation

    PubMed Central

    Lapray, Pierre-Jean; Wang, Xingbo; Thomas, Jean-Baptiste; Gouton, Pierre

    2014-01-01

    Thanks to technical progress in interference-filter design based on different technologies, we can finally implement the concept of multispectral filter-array-based sensors. This article provides the relevant state of the art for multispectral imaging systems and presents the characteristics of the elements of our multispectral sensor as a case study. The spectral characteristics are based on two different spatial arrangements that distribute eight different bandpass filters in the visible and near-infrared areas of the spectrum. We demonstrate that the system is viable and evaluate its performance through sensor spectral simulation. PMID:25407904

  15. Nanohole-array-based device for 2D snapshot multispectral imaging

    PubMed Central

    Najiminaini, Mohamadreza; Vasefi, Fartash; Kaminska, Bozena; Carson, Jeffrey J. L.

    2013-01-01

    We present a two-dimensional (2D) snapshot multispectral imager that utilizes the optical transmission characteristics of nanohole arrays (NHAs) in a gold film to resolve a mixture of input colors into multiple spectral bands. The multispectral device consists of blocks of NHAs, wherein each NHA has a unique periodicity that results in transmission resonances and minima in the visible and near-infrared regions. The multispectral device was illuminated over a wide spectral range, and the transmission was spectrally unmixed using a least-squares estimation algorithm. An NHA-based multispectral imaging system was built and tested in both reflection and transmission modes. The NHA-based multispectral imager was capable of extracting 2D multispectral images representative of four independent bands within the spectral range of 662 nm to 832 nm for a variety of targets. The multispectral device can potentially be integrated into a variety of imaging sensor systems. PMID:24005065
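    The least-squares unmixing step can be illustrated for the simplest case of two spectral components. This sketch (hypothetical spectra, explicit 2x2 normal equations rather than the paper's full estimator) recovers the abundances of two known transmission spectra from a measured mixture:

```python
def unmix_two(endmember_a, endmember_b, measured):
    """Least-squares abundances (alpha, beta) for the linear model
    measured ~ alpha * endmember_a + beta * endmember_b,
    solved via the 2x2 normal equations."""
    aa = sum(x * x for x in endmember_a)
    bb = sum(x * x for x in endmember_b)
    ab = sum(x * y for x, y in zip(endmember_a, endmember_b))
    am = sum(x * y for x, y in zip(endmember_a, measured))
    bm = sum(x * y for x, y in zip(endmember_b, measured))
    det = aa * bb - ab * ab
    return (bb * am - ab * bm) / det, (aa * bm - ab * am) / det

# Hypothetical transmission spectra of two NHA bands over three wavelengths
a, b = [1.0, 0.2, 0.0], [0.0, 0.3, 1.0]
mixed = [0.6 * x + 0.4 * y for x, y in zip(a, b)]
alpha, beta = unmix_two(a, b, mixed)   # recovers 0.6 and 0.4
```

With more bands, the same normal-equations approach generalizes to solving an overdetermined linear system for one abundance per NHA periodicity.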

  16. Spectrum slicer for snapshot spectral imaging

    NASA Astrophysics Data System (ADS)

    Tamamitsu, Miu; Kitagawa, Yutaro; Nakagawa, Keiichi; Horisaki, Ryoichi; Oishi, Yu; Morita, Shin-ya; Yamagata, Yutaka; Motohara, Kentaro; Goda, Keisuke

    2015-12-01

    We propose and demonstrate an optical component that overcomes critical limitations of our previously demonstrated high-speed multispectral videography, a method in which an array of periscopes placed in a prism-based spectral shaper is used to achieve snapshot multispectral imaging with a frame rate limited only by that of the image-recording sensor. The demonstrated optical component consists of a slicing mirror incorporated into a 4f-relaying lens system, which we refer to as a spectrum slicer (SS). With its simple design, we can easily increase the number of spectral channels without adding fabrication complexity while preserving the capability for high-speed multispectral videography. We present a theoretical framework for the SS and demonstrate its utility for spectral imaging by showing real-time monitoring of a dynamic colorful event through five different visible windows.

  17. A new approach for fast indexing of hyperspectral image data for knowledge retrieval and mining

    NASA Astrophysics Data System (ADS)

    Clowers, Robert; Dua, Sumeet

    2005-11-01

    Multispectral sensors produce images with a few relatively broad wavelength bands. Hyperspectral remote sensors, on the other hand, collect image data simultaneously in dozens or hundreds of narrow, adjacent spectral bands. These measurements make it possible to derive a continuous spectrum for each image cell, generating an image cube across multiple spectral components. Hyperspectral imaging has sound applications in a variety of areas such as mineral exploration, hazardous waste remediation, habitat mapping, invasive vegetation monitoring, ecosystem monitoring, hazardous gas detection, mineral detection, soil degradation, and climate change. It also has strong potential for transforming the imaging paradigms associated with several design and manufacturing processes. In this paper, we describe a novel approach for fast indexing of multi-dimensional hyperspectral image data, especially for data mining applications. The index exploits the spectral and spatial relationships embedded in these image sets. The index will be employed in knowledge retrieval applications that require fast information interpretation, and it can also be deployed in real-time, mission-critical domains, as it is shown to remain fast at the high dimensionality associated with the data. The strength of this index in terms of false dismissal and false alarm rates is also demonstrated. The paper highlights some common applications of this imaging computational paradigm and concludes with directions for future improvement and investigation.

  18. Operational calibration and validation of landsat data continuity mission (LDCM) sensors using the image assessment system (IAS)

    USGS Publications Warehouse

    Micijevic, Esad; Morfitt, Ron

    2010-01-01

    Systematic characterization and calibration of the Landsat sensors and the assessment of image data quality are performed using the Image Assessment System (IAS). The IAS was first introduced as an element of the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) ground segment and was recently extended to the Landsat 4 (L4) and 5 (L5) Thematic Mappers (TM) and the Multispectral Scanners (MSS) on board the Landsat 1-5 satellites. In preparation for the Landsat Data Continuity Mission (LDCM), the IAS was developed for the Earth Observing-1 (EO-1) Advanced Land Imager (ALI) with the capability to assess pushbroom sensors. This paper describes the LDCM version of the IAS and how it relates to the unique calibration and validation attributes of the mission's on-board imaging sensors. The LDCM IAS will have to handle a significantly larger number of detectors, and an associated larger database, than previous IAS versions. An additional challenge is that the LDCM IAS must handle data from two sensors, as LDCM products will combine the Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) spectral bands.

  19. Spatial Metadata for Global Change Investigations Using Remote Sensing

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Quattrochi, Dale A.; Lam, Nina Siu-Ngan; Arnold, James E. (Technical Monitor)

    2002-01-01

    Satellite and aircraft-borne remote sensors have gathered petabytes of data over the past 30+ years. These images are an important resource for establishing cause and effect relationships between human-induced land cover changes and alterations in climate and other biophysical patterns at local to global scales. However, the spatial, temporal, and spectral characteristics of these datasets vary, thus complicating long-term studies involving several types of imagery. As the geographical and temporal coverage, the spectral and spatial resolution, and the number of individual sensors increase, the sheer volume and complexity of available data sets will complicate management and use of the rapidly growing archive of earth imagery. Mining this vast data resource for images that provide the necessary information for climate change studies becomes more difficult as more sensors are launched and more imagery is obtained.

  20. Characterization techniques for incorporating backgrounds into DIRSIG

    NASA Astrophysics Data System (ADS)

    Brown, Scott D.; Schott, John R.

    2000-07-01

    The appearance of operational hyperspectral imaging spectrometers in both the solar and thermal regions has led to the development of a variety of spectral detection algorithms. The development and testing of these algorithms require well-characterized field collection campaigns that can be time- and cost-prohibitive. Radiometrically robust synthetic image generation (SIG) environments that can generate appropriate images under a variety of atmospheric conditions and with a variety of sensors offer an excellent supplement that reduces the scope of expensive field collections. In addition, SIG image products provide the algorithm developer with per-pixel truth, allowing improved characterization of algorithm performance. To meet the needs of the algorithm development community, the image modeling community needs to supply synthetic image products that contain all the spatial and spectral variability present in real-world scenes and that provide the large-area coverage typically acquired with actual sensors. This places a heavy burden on synthetic scene builders to construct well-characterized scenes that span large areas. Several SIG models have demonstrated the ability to accurately model targets (vehicles, buildings, etc.) using well-constructed target geometry (from CAD packages) and robust thermal and radiometry models. However, background objects (vegetation, infrastructure, etc.) dominate the percentage of real-world scene pixels, and applying target-building techniques to them is time- and resource-prohibitive. This paper discusses new methods that have been integrated into the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model to characterize backgrounds. The new suite of scene construct types allows the user to incorporate both terrain and surface properties to obtain wide-area coverage. The terrain can be incorporated using a triangular irregular network (TIN) derived from elevation data or digital elevation model (DEM) data, together with temperature maps, spectral reflectance cubes (possibly derived from actual sensors), and/or material and mixture maps. Descriptions and examples of each new technique are presented, as well as hybrid methods that demonstrate target embedding in real-world imagery.

  1. Cross-Calibration of Earth Observing System Terra Satellite Sensors MODIS and ASTER

    NASA Technical Reports Server (NTRS)

    McCorkel, J.

    2014-01-01

    The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and the Moderate Resolution Imaging Spectroradiometer (MODIS) are two of the five sensors onboard the Earth Observing System's Terra satellite. These sensors share many similar spectral channels while having much different spatial and operational parameters. ASTER is a tasked sensor, sometimes described as a zoom camera for MODIS, which collects a full-Earth image every one to two days. It is important that these sensors have a consistent characterization and calibration for the continued development and use of their data products. This work uses a variety of test sites to retrieve and validate intercalibration results. The refined calibration of Collection 6 of the Terra MODIS data set is leveraged to provide an up-to-date reference for trending and validation of ASTER. Special attention is given to spatially matching radiance measurements using the prelaunch spatial response characterization of MODIS. Despite differences in spectral band properties and spatial scales, ASTER-MODIS is an ideal case for intercomparison, since the sensors have nearly identical views and acquisition times and can therefore serve as a baseline for the intercalibration performance of other satellite sensor pairs.
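    The core of cross-calibration over test sites can be sketched as a one-parameter fit. This is an assumed simplification of the workflow, not the paper's method: given matched, spatially aggregated radiances from the reference sensor (MODIS) and the test sensor (ASTER) in a shared band, fit the gain g that minimizes the sum of squared residuals of test - g * reference. The radiance values are hypothetical.

```python
def cross_cal_gain(ref_radiance, test_radiance):
    """Least-squares gain g minimizing sum((test - g * ref)^2) over
    matched site measurements: g = sum(ref*test) / sum(ref^2)."""
    return (sum(r * t for r, t in zip(ref_radiance, test_radiance))
            / sum(r * r for r in ref_radiance))

# Hypothetical matched MODIS (reference) vs ASTER radiances at three sites
g = cross_cal_gain([50.0, 80.0, 120.0], [51.5, 82.4, 123.6])
# g of about 1.03 would indicate a 3 % relative bias in this band
```

Trending g over time then reveals calibration drift of the test sensor relative to the refined MODIS Collection 6 reference.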

  2. Practical considerations in experimental computational sensing

    NASA Astrophysics Data System (ADS)

    Poon, Phillip K.

    Computational sensing has demonstrated the ability to ameliorate or eliminate many trade-offs in traditional sensors. Rather than attempting to form a perfect image, then sampling at the Nyquist rate, and reconstructing the signal of interest prior to post-processing, the computational sensor attempts to utilize a priori knowledge, active or passive coding of the signal-of-interest combined with a variety of algorithms to overcome the trade-offs or to improve various task-specific metrics. While it is a powerful approach to radically new sensor architectures, published research tends to focus on architecture concepts and positive results. Little attention is given towards the practical issues when faced with implementing computational sensing prototypes. I will discuss the various practical challenges that I encountered while developing three separate applications of computational sensors. The first is a compressive sensing based object tracking camera, the SCOUT, which exploits the sparsity of motion between consecutive frames while using no moving parts to create a psuedo-random shift variant point-spread function. The second is a spectral imaging camera, the AFSSI-C, which uses a modified version of Principal Component Analysis with a Bayesian strategy to adaptively design spectral filters for direct spectral classification using a digital micro-mirror device (DMD) based architecture. The third demonstrates two separate architectures to perform spectral unmixing by using an adaptive algorithm or a hybrid techniques of using Maximum Noise Fraction and random filter selection from a liquid crystal on silicon based computational spectral imager, the LCSI. All of these applications demonstrate a variety of challenges that have been addressed or continue to challenge the computational sensing community. 
One issue is calibration, since many computational sensors require an inversion step and, in the case of compressive sensing, lack redundancy in the measurement data. Another issue is over-multiplexing: as more light is collected per sample, the finite dynamic range and quantization resolution can begin to degrade the recovery of the relevant information. A priori knowledge of the sparsity and/or other statistics of the signal or noise is often used by computational sensors to outperform their isomorphic counterparts; this is demonstrated in all three of the sensors I have developed. These challenges and others will be discussed using a case-study approach through these three applications.
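
    The sparsity-prior point above can be made concrete with a minimal compressive-sensing sketch: far fewer measurements than unknowns, a pseudo-random sensing matrix, and an iterative soft-thresholding (ISTA) solver standing in for the actual SCOUT reconstruction, which is not described here. All sizes and parameters are illustrative assumptions:

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=300):
    """Iterative shrinkage-thresholding: recover a sparse x from y = A @ x."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-term gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + A.T @ (y - A @ x) / L      # gradient step on 0.5*||y - Ax||^2
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold (L1 prior)
    return x

rng = np.random.default_rng(0)
n, m, k = 128, 48, 4                       # signal length, measurements, sparsity level
A = rng.standard_normal((m, n)) / np.sqrt(m)   # pseudo-random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                             # m < n: no redundancy in the measurements
x_hat = ista(A, y)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

    With the sparsity prior, the k-sparse signal is recovered well even though the system is underdetermined; without it, the inversion has no unique solution.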

  3. Development and Operation of a Material Identification and Discrimination Imaging Spectroradiometer

    NASA Technical Reports Server (NTRS)

    Dombrowski, Mark; Willson, Paul; LaBaw, Clayton

    1997-01-01

    Many imaging applications require quantitative determination of a scene's spectral radiance. This paper describes a new system capable of real-time spectroradiometric imagery. Operating at a full-spectrum update rate of 30 Hz, this imager is capable of collecting a 30-point spectrum from each of three imaging heads: the first operates from 400 nm to 950 nm, with a 2% bandwidth; the second operates from 1.5 μm to 5.5 μm with a 1.5% bandwidth; the third operates from 5 μm to 12 μm, also at a 1.5% bandwidth. Standard image format is 256 x 256, with 512 x 512 possible in the VIS/NIR head. Spectra of up to 256 points are available at proportionately lower frame rates. In order to make such a tremendous amount of data more manageable, internal processing electronics perform four important operations on the spectral imagery data in real time. First, all data in the spatial/spectral data cube are spectroradiometrically calibrated as they are collected. Second, to allow the imager to simulate sensors with arbitrary spectral response, any set of three spectral response functions may be loaded into the imager, including delta functions to allow single-wavelength viewing; the instrument then evaluates the integral of the product of the scene spectral radiances and the response function. Third, more powerful exploitation of the gathered spectral radiances can be effected by applying various spectral matched-filtering algorithms to identify pixels whose relative spectral radiance distribution matches a sought-after spectral radiance distribution, allowing materials-based identification and discrimination. Fourth, the instrument allows determination of spectral reflectance, surface temperature, and spectral emissivity, also in real time. 
The spectral imaging technique used in the instrument allows tailoring of the frame rate and/or the spectral bandwidth to suit the scene radiance levels; i.e., the frame rate can be reduced, or the bandwidth increased, to improve SNR when viewing low-radiance scenes. The unique challenges of design and calibration are described. Pixel readout rates of 160 MHz, for full-frame readout rates of 1000 Hz (512 x 512 image), present the first challenge; processing rates of nearly 600 million integer operations per second for sensor emulation, or over 2 billion per second for matched filtering, present the second. Spatial and spectral calibration of 65,536 pixels (262,144 for the 512 x 512 version) and up to 1,000 spectral positions mandate novel decoupling methods to keep the required calibration memory to a reasonable size. The large radiometric dynamic range also requires care to maintain precise operation with minimum memory size.

  4. Automatic Suppression of Intense Monochromatic Light in Electro-Optical Sensors

    PubMed Central

    Ritt, Gunnar; Eberle, Bernd

    2012-01-01

    Electro-optical imaging sensors are widely distributed and used for many different tasks. Due to technical improvements, their pixel size has been steadily decreasing, resulting in a reduced saturation capacity. As a consequence, this progress makes them susceptible to intense point light sources. Developments in laser technology have led to very compact and powerful laser sources at virtually any wavelength in the visible and near-infrared spectral region, offered as laser pointers. This multitude of wavelengths makes it difficult to counter sensor saturation over the complete operating waveband by conventional measures such as absorption or interference filters. We present a concept for electro-optical sensors to suppress overexposure in the visible spectral region. The key element of the concept is a spatial light modulator in combination with wavelength multiplexing. This approach allows spectral filtering within a localized area in the field of view of the sensor. The system offers the possibility of automatic reduction of overexposure caused by monochromatic laser radiation. PMID:23202039

  5. Assessment of spectral band impact on intercalibration over desert sites using simulation based on EO-1 Hyperion data

    USGS Publications Warehouse

    Henry, P.; Chander, G.; Fougnie, B.; Thomas, C.; Xiong, Xiaoxiong

    2013-01-01

    Since the beginning of the 1990s, stable desert sites have been used to monitor the calibration of many different sensors, and many sensor intercalibration attempts have been conducted using these sites. As a result, site characterization techniques and the quality of intercalibration techniques have gradually improved over the years. More recently, the Committee on Earth Observation Satellites has recommended a list of reference pseudo-invariant calibration sites for frequent image acquisition by multiple agencies. In general, intercalibration should use a well-known or spectrally flat reference. The reflectance profile of desert sites, however, might be neither flat nor well characterized (from a fine spectral point of view). The aim of this paper is to assess the accuracy that can be expected when using desert sites for intercalibration. In order to have a well-controlled estimation of the different errors and error sources, this study is performed with simulated data from a hyperspectral sensor; Earth Observing-1 Hyperion images are chosen to provide the simulation input data. Two different cases of intercalibration are considered, namely, Landsat 7 Enhanced Thematic Mapper Plus with Terra Moderate Resolution Imaging Spectroradiometer (MODIS) and Environmental Satellite MEdium Resolution Imaging Spectrometer (MERIS) with Aqua MODIS. The simulation results confirm that an intercalibration accuracy of 1% to 2% can be achieved between sensors, provided there is a sufficient number of available measurements. The simulated intercalibrations help explain results obtained during real intercalibration exercises and establish some recommendations for the use of desert sites for intercalibration.

  6. Web-based Data Exploration, Exploitation and Visualization Tools for Satellite Sensor VIS/IR Calibration Applications

    NASA Astrophysics Data System (ADS)

    Gopalan, A.; Doelling, D. R.; Scarino, B. R.; Chee, T.; Haney, C.; Bhatt, R.

    2016-12-01

    The CERES calibration group at NASA/LaRC has developed and deployed a suite of online data exploration and visualization tools targeted towards a range of spaceborne VIS/IR imager calibration applications for the Earth Science community. These web-based tools are driven by the open-source R language for statistical computing and visualization, with a web interface that lets the user customize the results according to their application. The tool contains a library of geostationary and sun-synchronous imager spectral response functions (SRF), incoming solar spectra, SCIAMACHY and Hyperion Earth-reflected visible hyperspectral data, and IASI IR hyperspectral data. The suite of six specific web-based tools was designed to provide critical information necessary for sensor cross-calibration. One of the challenges of sensor cross-calibration is accounting for spectral band differences, which may introduce biases if not handled properly. The spectral band adjustment factors (SBAF) are a function of the Earth target, the atmospheric and cloud conditions or scene type, and the angular conditions under which the sensor radiance pairs are obtained; the SBAF therefore needs to be customized for each intercalibration target and sensor pair. The advantages of having a community open-source tool are: 1) only one archive of the SCIAMACHY, Hyperion, and IASI datasets, on the order of 50 TB, needs to be maintained; 2) the framework allows easy incorporation of new satellite SRFs and hyperspectral datasets with associated coincident atmospheric and cloud properties, such as PW; 3) web tool or SBAF algorithm improvements and suggestions, once incorporated, benefit the community at large; and 4) the customization effort rests on the user rather than on the host. In this paper we discuss each of these tools in detail and explore the variety of advanced options that can be used to constrain the results, along with specific use cases that highlight the value added by these datasets.
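
    The SBAF described above is, at heart, a ratio of band-averaged radiances simulated from a hyperspectral reference spectrum. A minimal sketch under stated assumptions (synthetic Gaussian SRFs and a made-up vegetation-like spectrum stand in for real SCIAMACHY/Hyperion data and library SRFs):

```python
import numpy as np

def band_radiance(wl, spectrum, srf):
    """SRF-weighted band-average radiance on a uniform wavelength grid."""
    return np.sum(spectrum * srf) / np.sum(srf)

def sbaf(wl, spectrum, srf_target, srf_ref):
    """Spectral band adjustment factor: target-band over reference-band radiance."""
    return band_radiance(wl, spectrum, srf_target) / band_radiance(wl, spectrum, srf_ref)

wl = np.linspace(400.0, 1000.0, 601)                  # nm, uniform grid
# made-up vegetation-like spectrum: dark in the red, bright on the NIR plateau
spectrum = 0.05 + 0.45 / (1.0 + np.exp(-(wl - 715.0) / 10.0))
gauss = lambda c, fwhm: np.exp(-4.0 * np.log(2.0) * ((wl - c) / fwhm) ** 2)
srf_a = gauss(865.0, 30.0)                            # target sensor NIR band
srf_b = gauss(860.0, 20.0)                            # reference sensor NIR band
f = sbaf(wl, spectrum, srf_a, srf_b)                  # near 1: both bands sit on the plateau
```

    In practice the reference spectrum would be drawn from the hyperspectral archive for the scene type and conditions of the radiance pair, which is why the SBAF must be recomputed per target and sensor pair.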

  7. Low noise WDR ROIC for InGaAs SWIR image sensor

    NASA Astrophysics Data System (ADS)

    Ni, Yang

    2017-11-01

    Hybridized image sensors are currently the only solution for image sensing beyond the spectral response of silicon devices. By hybridization, we can combine the best sensing material and photo-detector design with high-performance CMOS readout circuitry. In the infrared band, we typically face two configurations: high-background and low-background situations. The performance of high-background sensors, as in mid-wave and long-wave infrared detectors, is conditioned mainly by the integration capacity of each pixel. In the low-background situation, the detector's performance is limited mainly by the pixel's noise, which is conditioned by dark signal and readout noise. Under reflection-based imaging conditions, the pixel's dynamic range is also an important parameter. This is the case for SWIR-band imaging. We are particularly interested in InGaAs-based SWIR image sensors.
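
    The integration-capacity limit mentioned above puts a hard ceiling on SNR: a shot-noise-limited pixel filled to capacity cannot exceed an SNR of the square root of its well depth. A back-of-the-envelope sketch (the well depths are illustrative, not those of any particular ROIC):

```python
import math

def max_snr_db(full_well_e, read_noise_e=0.0, dark_e=0.0):
    """Peak SNR (dB) of a pixel integrated to full well, shot-noise dominated."""
    signal = full_well_e - dark_e
    # shot noise + dark-signal shot noise + read noise, all in electrons rms
    noise = math.sqrt(signal + dark_e + read_noise_e ** 2)
    return 20.0 * math.log10(signal / noise)

snr_small = max_snr_db(10_000)     # shallow well of a small pixel: 40 dB
snr_large = max_snr_db(1_000_000)  # large integration capacity: 60 dB
```

    Each 100x increase in well capacity buys only 20 dB of peak SNR, which is why high-background detectors are dominated by integration capacity while low-background ones are dominated by dark signal and read noise.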

  8. Parallel-multiplexed excitation light-sheet microscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Xu, Dongli; Zhou, Weibin; Peng, Leilei

    2017-02-01

    Laser-scanning light-sheet imaging allows fast 3D imaging of live samples with minimal photobleaching and photo-toxicity. Existing light-sheet techniques have very limited capability for multi-label imaging. Hyper-spectral imaging is needed to unmix commonly used fluorescent proteins with large spectral overlaps. The challenge, however, is how to perform hyper-spectral imaging without sacrificing imaging speed, so that dynamic and complex events can be captured live. We report wavelength-encoded structured illumination light-sheet imaging (λ-SIM light-sheet), a novel light-sheet technique that is capable of parallel multiplexing over multiple excitation-emission spectral channels. λ-SIM light-sheet captures images of all possible excitation-emission channels in true parallel. It does not compromise imaging speed and is capable of distinguishing labels by both excitation and emission spectral properties, which facilitates unmixing fluorescent labels with overlapping spectral peaks and will allow more labels to be used together. We built a hyper-spectral light-sheet microscope that combines λ-SIM with an extended field of view through Bessel beam illumination. The system has a 250-micron-wide field of view and confocal-level resolution. The microscope, equipped with multiple laser lines and an unlimited number of spectral channels, can potentially image up to 6 commonly used fluorescent proteins from blue to red. Results from in vivo imaging of live zebrafish embryos expressing various genetic markers and sensors will be shown. Hyper-spectral images from the λ-SIM light-sheet will allow multiplexed and dynamic functional imaging in live tissue and animals.

  9. Advanced processing for high-bandwidth sensor systems

    NASA Astrophysics Data System (ADS)

    Szymanski, John J.; Blain, Phil C.; Bloch, Jeffrey J.; Brislawn, Christopher M.; Brumby, Steven P.; Cafferty, Maureen M.; Dunham, Mark E.; Frigo, Janette R.; Gokhale, Maya; Harvey, Neal R.; Kenyon, Garrett; Kim, Won-Ha; Layne, J.; Lavenier, Dominique D.; McCabe, Kevin P.; Mitchell, Melanie; Moore, Kurt R.; Perkins, Simon J.; Porter, Reid B.; Robinson, S.; Salazar, Alfonso; Theiler, James P.; Young, Aaron C.

    2000-11-01

    Compute performance and algorithm design are key problems of image processing and scientific computing in general. For example, imaging spectrometers are capable of producing data in hundreds of spectral bands with millions of pixels. These data sets show great promise for remote sensing applications, but require new and computationally intensive processing. The goal of the Deployable Adaptive Processing Systems (DAPS) project at Los Alamos National Laboratory is to develop advanced processing hardware and algorithms for high-bandwidth sensor applications. The project has produced electronics for processing multi- and hyper-spectral sensor data, as well as LIDAR data, while employing processing elements using a variety of technologies. The project team is currently working on reconfigurable computing technology and advanced feature extraction techniques, with an emphasis on their application to image and RF signal processing. This paper presents reconfigurable computing technology and advanced feature extraction algorithm work and their application to multi- and hyperspectral image processing. Related projects on genetic algorithms as applied to image processing will be introduced, as will the collaboration between the DAPS project and the DARPA Adaptive Computing Systems program. Further details are presented in other talks during this conference and in other conferences taking place during this symposium.

  10. Modification of measurement methods for evaluation of tissue-engineered cartilage function and biochemical properties using nanosecond pulsed laser

    NASA Astrophysics Data System (ADS)

    Ishihara, Miya; Sato, Masato; Kutsuna, Toshiharu; Ishihara, Masayuki; Mochida, Joji; Kikuchi, Makoto

    2008-02-01

    There is a demand in the field of regenerative medicine for measurement technology that enables determination of the functions and components of engineered tissue. To meet this demand, we developed a method for extracellular matrix characterization using time-resolved autofluorescence spectroscopy, which enabled simultaneous measurement of mechanical properties via the relaxation of a laser-induced stress wave. In this study, in addition to time-resolved fluorescence spectroscopy, a hyperspectral sensor, which captures both spectral and spatial information, was used to evaluate the biochemical characteristics of tissue-engineered cartilage. The hyperspectral imaging system provides a spectral resolution of 1.2 nm and an image rate of 100 images/sec. The imaging system consisted of the hyperspectral sensor, a scanner for x-y plane imaging, magnifying optics and a xenon lamp for transmissive lighting. Cellular imaging with the hyperspectral system was achieved by improving the spatial resolution to 9 micrometers, and spectroscopic cellular imaging was demonstrated using cultured chondrocytes as samples. At an early stage of culture, hyperspectral imaging provided information about cellular function associated with endogenous fluorescent biomolecules.

  11. Imaging Spectroscopy Enables Novel Applications and Continuity with the Landsat Record to Sustain Legacy Applications: An Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Landsat 8 OLI Case Study

    NASA Astrophysics Data System (ADS)

    Stavros, E. N.; Seidel, F.; Cable, M. L.; Green, R. O.; Freeman, A.

    2017-12-01

    While imaging spectrometers offer additional information that provides value-added products for otherwise underserved applications, there is a need to demonstrate their ability to augment the multi-spectral (e.g., Landsat) optical record by both providing more frequent temporal revisit and lengthening the existing record. Here we test the hypothesis that imaging spectroscopic optical data are compatible with multi-spectral data to within ±5% radiometric accuracy, as desired to continue the long-term Landsat data record. We use an Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) flight coincident with over-passing Operational Land Imager (OLI) data on Landsat 8 to document a procedure for simulating OLI multi-spectral bands from AVIRIS, evaluate factors influencing the observed radiance, and assess AVIRIS radiometric accuracy compared to OLI. The procedure for simulating OLI data includes spectral convolution, accounting for atmospheric effects introduced by the different sensor altitudes and viewing geometries, and spatial resampling. After accounting for these influences, we expect the remaining differences between the simulated and the real OLI data to result from differences in sensor calibration, surface bi-directional reflectance under the different viewing geometries, and spatial sampling. The median radiometric percent differences for each band in the data used range from 0.6% to 8.3%. After bias correction to minimize potential calibration discrepancies, we find no more than 1.2% radiometric percent difference for any OLI band. This analysis therefore successfully demonstrates that imaging spectrometer data can not only address novel applications, but also contribute to Landsat-type or other multi-spectral data records to sustain legacy applications.
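
    The band-simulation and bias-correction steps can be sketched as follows; the SRF, radiance cube, and the constant 3% bias are synthetic stand-ins, not the AVIRIS/OLI data of the study:

```python
import numpy as np

def simulate_band(radiance_cube, srf):
    """Spectrally convolve a hyperspectral cube (pixels x wavelengths) with one SRF."""
    w = srf / srf.sum()                                # normalized band weights
    return radiance_cube @ w

def median_percent_diff(sim, obs):
    """Median radiometric percent difference between simulated and observed bands."""
    return 100.0 * np.median((sim - obs) / obs)

rng = np.random.default_rng(1)
wl = np.linspace(430.0, 900.0, 471)                    # nm grid of the hyperspectral cube
cube = 50.0 + 30.0 * rng.random((500, wl.size))        # synthetic at-sensor radiance
srf = np.exp(-4.0 * np.log(2.0) * ((wl - 655.0) / 40.0) ** 2)  # hypothetical red band
sim = simulate_band(cube, srf)
obs = sim * 1.03                                       # pretend a constant 3% calibration bias
d0 = median_percent_diff(sim, obs)                     # about -2.9% before correction
gain = np.median(obs / sim)                            # bias-correction gain
d1 = median_percent_diff(sim * gain, obs)              # essentially zero after correction
```

    The real procedure additionally corrects atmospheric path differences and spatially resamples before comparing, so the residual differences reflect calibration and BRDF rather than band shape.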

  12. BOREAS RSS-2 Level-1B ASAS Image Data: At-Sensor Radiance in BSQ Format

    NASA Technical Reports Server (NTRS)

    Russell, C.; Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Dabney, P. W.; Kovalick, W.; Graham, D.; Bur, Michael; Irons, James R.; Tierney, M.

    2000-01-01

    The BOREAS RSS-2 team used the ASAS instrument, mounted on the NASA C-130 aircraft, to create at-sensor radiance images of various sites as a function of spectral wavelength, view geometry (combinations of view zenith angle, view azimuth angle, solar zenith angle, and solar azimuth angle), and altitude. The level-1b ASAS images of the BOREAS study areas were collected from April to September 1994 and March to July 1996.

  13. Radiometric and geometric assessment of data from the RapidEye constellation of satellites

    USGS Publications Warehouse

    Chander, Gyanesh; Haque, Md. Obaidul; Sampath, Aparajithan; Brunn, A.; Trosset, G.; Hoffmann, D.; Roloff, S.; Thiele, M.; Anderson, C.

    2013-01-01

    To monitor land surface processes over a wide range of temporal and spatial scales, it is critical to have coordinated observations of the Earth's surface using imagery acquired from multiple spaceborne imaging sensors. The RapidEye (RE) satellite constellation acquires high-resolution satellite images covering the entire globe within a very short period of time, using sensors identical in construction and cross-calibrated to each other. To evaluate the RE high-resolution Multi-spectral Imager (MSI) sensor capabilities, a cross-comparison between the RE constellation of sensors was performed, first using image statistics based on large common areas observed over pseudo-invariant calibration sites (PICS) by the sensors and, second, by comparing the on-orbit radiometric calibration temporal trending over a large number of calibration sites. For any spectral band, the individual responses measured by the five satellites of the RE constellation were found to differ by less than 2–3% from the average constellation response, depending on the method used for evaluation. A geometric assessment was also performed to study the positional accuracy and relative band-to-band (B2B) alignment of the image data sets. The positional accuracy was assessed by comparing the RE imagery against high-resolution aerial imagery, while the B2B characterization was performed by registering each band against every other band to ensure that proper band alignment is provided in an image product. The B2B results indicate that the internal alignments of the five RE bands are in agreement, with bands typically registered to within 0.25 pixels of each other or better.

  14. Efficient single-pixel multispectral imaging via non-mechanical spatio-spectral modulation.

    PubMed

    Li, Ziwei; Suo, Jinli; Hu, Xuemei; Deng, Chao; Fan, Jingtao; Dai, Qionghai

    2017-01-27

    Combining spectral imaging with compressive sensing (CS) enables efficient data acquisition by fully utilizing the intrinsic redundancies in natural images. Current compressive multispectral imagers, which are mostly based on array sensors (e.g., CCD or CMOS), suffer from limited spectral range and relatively low photon efficiency. To address these issues, this paper reports a multispectral imaging scheme with a single-pixel detector. Inspired by the spatial-resolution redundancy of current spatial light modulators (SLMs) relative to the target reconstruction, we design an all-optical spectral splitting device that spatially splits the light emitted from the object into several counterparts with different spectra. The separated spectral channels are spatially modulated simultaneously with individual codes by an SLM. This no-moving-part modulation ensures a stable and fast system, and the spatial multiplexing ensures efficient acquisition. A proof-of-concept setup is built and validated for 8-channel multispectral imaging within the 420–720 nm wavelength range on both macro and micro objects, showing the potential for efficient multispectral imagers in macroscopic and biomedical applications.

  15. Compressive spectral testbed imaging system based on thin-film color-patterned filter arrays.

    PubMed

    Rueda, Hoover; Arguello, Henry; Arce, Gonzalo R

    2016-11-20

    Compressive spectral imaging systems can reliably capture multispectral data using far fewer measurements than traditional scanning techniques. In this paper, a thin-film patterned-filter-array-based compressive spectral imager is demonstrated, including its optical design and implementation. The use of a patterned filter array entails a single-step three-dimensional spatial-spectral coding of the input data cube, which provides higher flexibility in the selection of voxels being multiplexed on the sensor. The patterned filter array is designed and fabricated with micrometer-pitch thin films, referred to as pixelated filters, at three different wavelengths. The performance of the system is evaluated against reference measurements from a commercially available spectrometer and in terms of the visual quality of the reconstructed images. Different distributions of the pixelated filters, including random and optimized structures, are explored.

  16. Study of sensor spectral responses and data processing algorithms and architectures for onboard feature identification

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Davis, R. E.; Fales, C. L.; Aherron, R. M.

    1982-01-01

    A computational model of the deterministic and stochastic processes involved in remote sensing is used to study spectral feature identification techniques for real-time onboard processing of data acquired with advanced earth-resources sensors. Preliminary results indicate that narrow spectral responses are advantageous; that signal normalization improves mean-square distance (MSD) classification accuracy but tends to degrade maximum-likelihood (MLH) classification accuracy; and that MSD classification of normalized signals performs better than the computationally more complex MLH classification when imaging conditions change appreciably from the conditions under which the reference data were acquired. The results also indicate that autonomous categorization of TM signals into vegetation, bare land, water, snow and clouds can be accomplished with adequate reliability for many applications over a reasonably wide range of imaging conditions. However, further analysis is required to develop computationally efficient boundary-approximation algorithms for such categorization.
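
    The normalization and MSD classification discussed above can be sketched as follows; the four-band class signatures and the multiplicative illumination-gain model are illustrative assumptions, not the study's TM data:

```python
import numpy as np

def normalize(sig):
    """Normalize each signal vector to unit energy, suppressing illumination gain."""
    return sig / np.linalg.norm(sig, axis=-1, keepdims=True)

def msd_classify(signals, references):
    """Assign each signal to the reference class with minimum mean-square distance."""
    d = ((signals[:, None, :] - references[None, :, :]) ** 2).mean(axis=2)
    return d.argmin(axis=1)

# hypothetical 4-band signatures: vegetation, water, bare land (not real TM data)
refs = np.array([[0.04, 0.06, 0.05, 0.45],
                 [0.08, 0.06, 0.03, 0.01],
                 [0.20, 0.25, 0.30, 0.35]])
rng = np.random.default_rng(2)
truth = rng.integers(0, 3, 200)
# observed signals: class signature times a random illumination gain, plus noise
signals = refs[truth] * rng.uniform(0.5, 1.5, (200, 1)) + rng.normal(0.0, 0.01, (200, 4))
pred = msd_classify(normalize(signals), normalize(refs))
accuracy = (pred == truth).mean()
```

    Because normalization removes the unknown gain before the distance computation, MSD stays robust when illumination drifts away from the conditions under which the references were acquired.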

  17. Preliminary Analysis of the Performance of the Landsat 8/OLI Land Surface Reflectance Product

    NASA Technical Reports Server (NTRS)

    Vermote, Eric; Justice, Chris; Claverie, Martin; Franch, Belen

    2016-01-01

    The surface reflectance, i.e., satellite-derived top-of-atmosphere (TOA) reflectance corrected for the temporally, spatially and spectrally varying scattering and absorbing effects of atmospheric gases and aerosols, is needed to monitor the land surface reliably. For this reason, surface reflectance, and not TOA reflectance, is used to generate the great majority of global land products, for example, from the Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS) sensors. Even when atmospheric effects are minimized by sensor design, they remain challenging to correct. In particular, the strong impact of aerosols in the visible and near-infrared spectral range can be difficult to correct, because aerosols can be highly discrete in space and time (e.g., smoke plumes) and because of their complex scattering and absorbing properties, which vary spectrally and with aerosol size, shape, chemistry and density.

  18. Research on hyperspectral dynamic scene and image sequence simulation

    NASA Astrophysics Data System (ADS)

    Sun, Dandan; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei

    2016-10-01

    This paper presents a simulation method for hyper-spectral dynamic scenes and image sequences for hyper-spectral equipment evaluation and target detection algorithms. Because of its high spectral resolution, strong band continuity, anti-interference and other advantages, hyper-spectral imaging technology has developed rapidly in recent years and is widely used in many areas such as optoelectronic target detection, military defense and remote sensing systems. Digital imaging simulation, as a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyper-spectral imaging equipment with lower development cost and a shorter development period. Meanwhile, visual simulation can produce a large amount of original image data under various conditions for hyper-spectral image feature extraction and classification algorithms. Based on a radiation physics model and material characteristic parameters, this paper proposes a generation method for digital scenes. By building multiple sensor models for different bands and different bandwidths, hyper-spectral scenes in the visible, MWIR and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm and 0.1 μm, have been simulated. The final dynamic scenes are realistic and render in real time, at frequencies up to 100 Hz. By saving all the scene gray data from the same viewpoint, an image sequence is obtained. The analysis results show that, whether in the infrared band or the visible band, the grayscale variations of the simulated hyper-spectral images are consistent with the theoretical analysis results.

  19. Analysis of the boreal forest-tundra ecotone: A test of AVIRIS capabilities in the Eastern Canadian subarctic

    NASA Technical Reports Server (NTRS)

    Goward, Samuel N.; Petzold, Donald E.

    1989-01-01

    A comparison was conducted between ground reflectance spectra collected in Schefferville, Canada and imaging spectrometer observations acquired by the AVIRIS sensor during a flight of the ER-2 aircraft over the same region. The high spectral contrasts present in the Canadian Subarctic appeared to provide an effective test of the operational readiness of the AVIRIS sensor. Previous studies show that in this location various land cover materials possess a wide variety of visible/near-infrared reflectance properties. Thus, this landscape served as an excellent test of the sensing capabilities of the newly developed AVIRIS sensor. An underlying hypothesis was that the unique visible/near-infrared spectral reflectance patterns of Subarctic lichens could be detected from high altitude by this advanced imaging spectrometer. The relation between lichen occurrence and boreal forest-tundra ecotone dynamics was investigated.

  20. Simulation of the hyperspectral data from multispectral data using Python programming language

    NASA Astrophysics Data System (ADS)

    Tiwari, Varun; Kumar, Vinay; Pandey, Kamal; Ranade, Rigved; Agarwal, Shefali

    2016-04-01

    Multispectral remote sensing (MRS) sensors have proved their potential for acquiring and retrieving information on Land Use Land Cover (LULC) features over the past few decades. These MRS sensors generally acquire data within a limited number of broad spectral bands, typically ranging from 3 to 10 bands. The limited number of bands and broad spectral bandwidth of MRS sensors become a limitation in detailed LULC studies, as they cannot distinguish spectrally similar LULC features. In contrast, the detailed information available in hyperspectral (HRS) data is spectrally overdetermined and able to distinguish spectrally similar materials on the earth surface. At present, however, the availability of HRS sensors is limited because of the requirement for sensitive detectors and large storage capacity, which makes acquisition and processing cumbersome and expensive. There thus arises a need to utilize the available MRS data for detailed LULC studies. The spectral reconstruction approach is one technique used for simulating hyperspectral data from available multispectral data. In the present study, the spectral reconstruction approach is used to simulate hyperspectral data from EO-1 ALI multispectral data. The technique is implemented in the Python programming language, which is open source and has support for advanced image processing libraries and utilities. Overall, 70 bands have been simulated and validated using visual interpretation, statistical measures and a classification approach.
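
    A linear spectral-reconstruction sketch in the spirit of the approach (the basis shapes, band count, and least-squares transform are illustrative assumptions, not the study's actual method):

```python
import numpy as np

rng = np.random.default_rng(3)
wl = np.linspace(400.0, 1000.0, 70)                     # 70 narrow output bands (nm)
# hypothetical spectral library: smooth random mixtures of a few basis shapes,
# i.e. spectra that live in a low-dimensional subspace
basis = np.stack([np.exp(-0.5 * ((wl - c) / 120.0) ** 2) for c in (450.0, 650.0, 850.0)])
library = rng.random((300, 3)) @ basis                  # 300 library spectra
# six broad Gaussian multispectral SRFs (ALI-like band count, made-up shapes)
centers = np.linspace(450.0, 950.0, 6)
srfs = np.exp(-4.0 * np.log(2.0) * ((wl[None, :] - centers[:, None]) / 90.0) ** 2)
srfs /= srfs.sum(axis=1, keepdims=True)
ms = library @ srfs.T                                   # simulated multispectral bands
# least-squares transform mapping 6 broad bands to 70 narrow bands
T, *_ = np.linalg.lstsq(ms, library, rcond=None)
recon = ms @ T                                          # reconstructed hyperspectral bands
rmse = np.sqrt(((recon - library) ** 2).mean())
```

    Because the library spectra occupy a low-dimensional subspace, a handful of broad bands suffices to recover the narrow bands almost exactly; real surface spectra are only approximately low-dimensional, so the practical reconstruction error is larger and must be validated as the study does.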

  1. Airborne measurements in the infrared using FTIR-based imaging hyperspectral sensors

    NASA Astrophysics Data System (ADS)

    Puckrin, E.; Turcotte, C. S.; Lahaie, P.; Dubé, D.; Lagueux, P.; Farley, V.; Marcotte, F.; Chamberland, M.

    2009-09-01

    Hyperspectral ground mapping is being used to an ever-increasing extent for numerous applications in the military, geology and environmental fields. The different regions of the electromagnetic spectrum produce information of differing nature. The visible, near-infrared and short-wave infrared radiation (400 nm to 2.5 μm) has been used mostly to analyze reflected solar light, while the mid-wave (3 to 5 μm) and long-wave (8 to 12 μm, or thermal) infrared senses the self-emission of molecules directly, enabling the acquisition of data during nighttime. Push-broom dispersive sensors have typically been used for airborne hyperspectral mapping. However, extending the spectral range towards the mid-wave and long-wave infrared brings performance limitations due to the self-emission of the sensor itself. The Fourier-transform spectrometer technology has been used extensively in the infrared spectral range due to its high transmittance as well as its throughput and multiplex advantages, thereby reducing the sensor self-emission problem. Telops has developed the Hyper-Cam, a rugged and compact infrared hyperspectral imager. The Hyper-Cam is based on Fourier-transform technology, yielding high spectral resolution and enabling high-accuracy radiometric calibration. It provides passive signature measurement capability, with up to 320x256 pixels at spectral resolutions of up to 0.25 cm-1. The Hyper-Cam has been used on the ground in several field campaigns, including the demonstration of standoff chemical agent detection. More recently, the Hyper-Cam has been integrated into an airplane to provide airborne measurement capabilities. A special pointing module was designed to compensate for airplane attitude and forward motion. To our knowledge, the Hyper-Cam is the first commercial airborne hyperspectral imaging sensor based on Fourier-transform infrared technology. 
The first airborne measurements and some preliminary performance criteria for the Hyper-Cam are presented in this paper.

  2. Airborne measurements in the infrared using FTIR-based imaging hyperspectral sensors

    NASA Astrophysics Data System (ADS)

    Puckrin, E.; Turcotte, C. S.; Lahaie, P.; Dubé, D.; Farley, V.; Lagueux, P.; Marcotte, F.; Chamberland, M.

    2009-05-01

Hyperspectral ground mapping is being used to an ever-increasing extent for numerous applications in the military, geology and environmental fields. The different regions of the electromagnetic spectrum yield information of a differing nature. The visible, near-infrared and short-wave infrared radiation (400 nm to 2.5 μm) has mostly been used to analyze reflected solar light, while the mid-wave (3 to 5 μm) and long-wave (8 to 12 μm, or thermal) infrared senses the self-emission of molecules directly, enabling the acquisition of data during night time. Push-broom dispersive sensors have typically been used for airborne hyperspectral mapping. However, extending the spectral range towards the mid-wave and long-wave infrared brings performance limitations due to the self-emission of the sensor itself. Fourier-transform spectrometer technology has been used extensively in the infrared spectral range due to its high transmittance as well as its throughput and multiplex advantages, thereby reducing the sensor self-emission problem. Telops has developed the Hyper-Cam, a rugged and compact infrared hyperspectral imager. The Hyper-Cam is based on Fourier-transform technology, yielding high spectral resolution and enabling high-accuracy radiometric calibration. It provides passive signature measurement capability, with up to 320x256 pixels at spectral resolutions of up to 0.25 cm-1. The Hyper-Cam has been used on the ground in several field campaigns, including the demonstration of standoff chemical agent detection. More recently, the Hyper-Cam has been integrated into an airplane to provide airborne measurement capabilities. A special pointing module was designed to compensate for airplane attitude and forward motion. To our knowledge, the Hyper-Cam is the first commercial airborne hyperspectral imaging sensor based on Fourier-transform infrared technology.
The first airborne measurements and some preliminary performance criteria for the Hyper-Cam are presented in this paper.

  3. iCATSI: multi-pixel imaging differential spectroradiometer for standoff detection and quantification of chemical threats

    NASA Astrophysics Data System (ADS)

    Prel, Florent; Moreau, Louis; Lavoie, Hugo; Bouffard, François; Thériault, Jean-Marc; Vallieres, Christian; Roy, Claude; Dubé, Denis

    2011-11-01

Homeland security personnel and first responders are often faced with safety situations involving the identification of unknown volatile chemicals. Examples include industrial fires, chemical warfare and industrial leaks. The Improved Compact ATmospheric Sounding Interferometer (iCATSI) sensor has been developed to investigate the standoff detection and identification of toxic industrial chemicals (TICs), chemical warfare agents (CWAs) and other chemicals. iCATSI is a combination of the CATSI instrument, a standoff differential FTIR optimised for the characterization of chemicals, and the MR-i, the hyperspectral imaging spectroradiometer of ABB Bomem based on the proven MR spectroradiometers. The instrument is equipped with a dual-input telescope to perform optical background subtraction. The resulting signal is the difference between the spectral radiance entering each input port. With that method, the signal from the background is automatically removed from the signal of the target of interest. The iCATSI sensor is able to detect, spectrally resolve and identify 5-meter plumes at ranges of up to 5 km. The instrument is capable of sensing in the VLWIR (cut-off near 14 μm) to support research related to standoff chemical detection. In one of its configurations, iCATSI produces three 24 × 16 spectral images per second from 5.5 to 14 μm at a spectral resolution of 16 cm-1. In another configuration, iCATSI produces from two to four spectral images per second of 256 × 256 pixels from 8 to 13 μm at the same spectral resolution. An overview of the capabilities of the instrument and results from tests and field trials will be presented.

  4. Summary of current radiometric calibration coefficients for Landsat MSS, TM, ETM+, and EO-1 ALI sensors

    USGS Publications Warehouse

    Chander, G.; Markham, B.L.; Helder, D.L.

    2009-01-01

    This paper provides a summary of the current equations and rescaling factors for converting calibrated Digital Numbers (DNs) to absolute units of at-sensor spectral radiance, Top-Of-Atmosphere (TOA) reflectance, and at-sensor brightness temperature. It tabulates the necessary constants for the Multispectral Scanner (MSS), Thematic Mapper (TM), Enhanced Thematic Mapper Plus (ETM+), and Advanced Land Imager (ALI) sensors. These conversions provide a basis for standardized comparison of data in a single scene or between images acquired on different dates or by different sensors. This paper forms a needed guide for Landsat data users who now have access to the entire Landsat archive at no cost.
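The conversion chain summarized above (calibrated DN to at-sensor radiance, then to TOA reflectance or brightness temperature) can be sketched as follows. The gain/bias, ESUN, Earth-Sun distance and K1/K2 values below are placeholders of the kind tabulated in the paper, shown for illustration only; operational values must be taken from the paper's tables or the scene metadata.

```python
import math

def dn_to_radiance(q_cal, g_rescale, b_rescale):
    """At-sensor spectral radiance from a calibrated digital number (DN)."""
    return g_rescale * q_cal + b_rescale

def radiance_to_toa_reflectance(rad, esun, d_au, sun_elev_deg):
    """Top-of-atmosphere (TOA) reflectance from at-sensor radiance."""
    theta_s = math.radians(90.0 - sun_elev_deg)  # solar zenith angle
    return (math.pi * rad * d_au ** 2) / (esun * math.cos(theta_s))

def radiance_to_brightness_temp(rad, k1, k2):
    """At-sensor brightness temperature (K) for a thermal band."""
    return k2 / math.log(k1 / rad + 1.0)

# Placeholder coefficients for illustration only.
rad = dn_to_radiance(150, 0.7756, -6.2)
rho = radiance_to_toa_reflectance(rad, 1536.0, 1.0, 45.0)
t_b = radiance_to_brightness_temp(10.0, 607.76, 1260.56)
```

The same three formulas apply to every sensor covered by the paper; only the tabulated constants change from instrument to instrument and band to band.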

  5. Summary of Current Radiometric Calibration Coefficients for Landsat MSS, TM, ETM+, and EO-1 ALI Sensors

    NASA Technical Reports Server (NTRS)

    Chander, Gyanesh; Markham, Brian L.; Helder, Dennis L.

    2009-01-01

This paper provides a summary of the current equations and rescaling factors for converting calibrated Digital Numbers (DNs) to absolute units of at-sensor spectral radiance, Top-Of-Atmosphere (TOA) reflectance, and at-sensor brightness temperature. It tabulates the necessary constants for the Multispectral Scanner (MSS), Thematic Mapper (TM), Enhanced Thematic Mapper Plus (ETM+), and Advanced Land Imager (ALI) sensors. These conversions provide a basis for standardized comparison of data in a single scene or between images acquired on different dates or by different sensors. This paper forms a needed guide for Landsat data users who now have access to the entire Landsat archive at no cost.

  6. Compressive hyperspectral and multispectral imaging fusion

    NASA Astrophysics Data System (ADS)

    Espitia, Óscar; Castillo, Sergio; Arguello, Henry

    2016-05-01

Image fusion is a valuable framework which combines two or more images of the same scene from one or multiple sensors, making it possible to improve image resolution and increase the interpretable content. In remote sensing, a common fusion problem consists of merging hyperspectral (HS) and multispectral (MS) images that involve large amounts of redundant data and ignore the highly correlated structure of the datacube along the spatial and spectral dimensions. Compressive HS and MS systems compress the spectral data in the acquisition step, reducing data redundancy by using different sampling patterns. This work presents a compressed HS and MS image fusion approach which uses a high-dimensional joint sparse model. The joint sparse model is formulated by combining the HS and MS compressive acquisition models. The high spectral and spatial resolution image is reconstructed by using sparse optimization algorithms. Different fusion spectral image scenarios are used to explore the performance of the proposed scheme. Several simulations with synthetic and real datacubes show promising results, as reliable reconstruction of a high spectral and spatial resolution image can be achieved by using as little as 50% of the datacube.
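A toy version of such a joint compressive model can be sketched in a few lines: stack hypothetical HS and MS sampling matrices into one acquisition operator, then recover the sparse unknown with iterative soft-thresholding (ISTA). All dimensions, sampling patterns and sparsity levels here are invented for illustration and are not the paper's actual operators or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy joint model: stack hypothetical HS and MS compressive samplings of one
# sparse unknown (e.g. a vectorized datacube patch). All sizes are invented.
n = 64
theta = np.zeros(n)
theta[[5, 20, 41]] = [1.0, -0.5, 0.8]                # sparse ground truth
A_hs = rng.standard_normal((24, n)) / np.sqrt(24.0)  # HS sampling pattern
A_ms = rng.standard_normal((24, n)) / np.sqrt(24.0)  # MS sampling pattern
A = np.vstack([A_hs, A_ms])                          # joint acquisition model
y = A @ theta                                        # stacked measurements

# ISTA: iterative soft-thresholding for min 0.5*||y - Ax||^2 + lam*||x||_1
lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(1000):
    g = x + step * (A.T @ (y - A @ x))                        # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage

err = np.linalg.norm(x - theta) / np.linalg.norm(theta)
```

Even with only 48 measurements for 64 unknowns, the sparse structure lets the solver recover the signal, which is the essential point of compressive fusion.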

  7. Performance analysis of improved methodology for incorporation of spatial/spectral variability in synthetic hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Scanlan, Neil W.; Schott, John R.; Brown, Scott D.

    2004-01-01

Synthetic imagery has traditionally been used to support sensor design by enabling design engineers to pre-evaluate image products during the design and development stages. Increasingly, exploitation analysts are looking to synthetic imagery as a way to develop and test exploitation algorithms before image data are available from new sensors. Even when sensors are available, synthetic imagery can significantly aid in algorithm development by providing a wide range of "ground truthed" images with varying illumination, atmospheric, viewing and scene conditions. One limitation of synthetic data is that the background variability is often too bland. It does not exhibit the spatial and spectral variability present in real data. In this work, four fundamentally different texture modeling algorithms will first be implemented as necessary into the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model environment. Two of the models to be tested are variants of a statistical Z-Score selection model, while the remaining two involve a texture synthesis and a spectral end-member fractional abundance map approach, respectively. A detailed comparative performance analysis of each model will then be carried out on several texturally significant regions of the resultant synthetic hyperspectral imagery. The quantitative assessment of each model will utilize a set of three performance metrics that have been derived from spatial Gray Level Co-Occurrence Matrix (GLCM) analysis, hyperspectral Signal-to-Clutter Ratio (SCR) measures, and a new concept termed the Spectral Co-Occurrence Matrix (SCM) metric which permits the simultaneous measurement of spatial and spectral texture. Previous research efforts on the validation and performance analysis of texture characterization models have been largely qualitative in nature, based on conducting visual inspections of synthetic textures in order to judge the degree of similarity to the original sample texture imagery. 
The quantitative measures used in this study will, in combination, attempt to determine which texture characterization models best capture the correct statistical and radiometric attributes of the corresponding real image textures in both the spatial and spectral domains. The motivation for this work is to refine our understanding of the complexities of texture phenomena so that an optimal texture characterization model that can accurately account for these complexities can eventually be implemented into a synthetic image generation (SIG) model. Further, conclusions will be drawn regarding which of the candidate texture models are able to achieve realistic levels of spatial and spectral clutter, thereby permitting more effective and robust testing of hyperspectral algorithms in synthetic imagery.
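The GLCM statistics mentioned above are straightforward to compute. The sketch below builds a normalized co-occurrence matrix for a single horizontal pixel offset on a tiny quantized image and derives the classic contrast measure; it is a minimal illustration, not the study's full metric suite.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for yy in range(h - dy):
        for xx in range(w - dx):
            m[img[yy, xx], img[yy + dy, xx + dx]] += 1.0
    return m / m.sum()

def glcm_contrast(p):
    """Classic GLCM contrast: (i - j)^2 weighted by co-occurrence frequency."""
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, levels=4)
c = glcm_contrast(p)
```

Other standard GLCM statistics (energy, homogeneity, correlation) follow the same pattern of weighting `p` by a function of the index pair (i, j).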

  8. Performance analysis of improved methodology for incorporation of spatial/spectral variability in synthetic hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Scanlan, Neil W.; Schott, John R.; Brown, Scott D.

    2003-12-01

Synthetic imagery has traditionally been used to support sensor design by enabling design engineers to pre-evaluate image products during the design and development stages. Increasingly, exploitation analysts are looking to synthetic imagery as a way to develop and test exploitation algorithms before image data are available from new sensors. Even when sensors are available, synthetic imagery can significantly aid in algorithm development by providing a wide range of "ground truthed" images with varying illumination, atmospheric, viewing and scene conditions. One limitation of synthetic data is that the background variability is often too bland. It does not exhibit the spatial and spectral variability present in real data. In this work, four fundamentally different texture modeling algorithms will first be implemented as necessary into the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model environment. Two of the models to be tested are variants of a statistical Z-Score selection model, while the remaining two involve a texture synthesis and a spectral end-member fractional abundance map approach, respectively. A detailed comparative performance analysis of each model will then be carried out on several texturally significant regions of the resultant synthetic hyperspectral imagery. The quantitative assessment of each model will utilize a set of three performance metrics that have been derived from spatial Gray Level Co-Occurrence Matrix (GLCM) analysis, hyperspectral Signal-to-Clutter Ratio (SCR) measures, and a new concept termed the Spectral Co-Occurrence Matrix (SCM) metric which permits the simultaneous measurement of spatial and spectral texture. Previous research efforts on the validation and performance analysis of texture characterization models have been largely qualitative in nature, based on conducting visual inspections of synthetic textures in order to judge the degree of similarity to the original sample texture imagery. 
The quantitative measures used in this study will, in combination, attempt to determine which texture characterization models best capture the correct statistical and radiometric attributes of the corresponding real image textures in both the spatial and spectral domains. The motivation for this work is to refine our understanding of the complexities of texture phenomena so that an optimal texture characterization model that can accurately account for these complexities can eventually be implemented into a synthetic image generation (SIG) model. Further, conclusions will be drawn regarding which of the candidate texture models are able to achieve realistic levels of spatial and spectral clutter, thereby permitting more effective and robust testing of hyperspectral algorithms in synthetic imagery.

  9. The Measurement of the Solar Spectral Irradiance Variability at 782 nm during the Solar Cycle 24 using the SES on-board PICARD

    NASA Astrophysics Data System (ADS)

    Meftah, Mustapha; Hauchecorne, Alain; Irbah, Abdanour; Bekki, Slimane

    2016-04-01

A Sun Ecartometry Sensor (SES) was developed to meet the stringent pointing requirements of the PICARD satellite. The SES produced an image of the Sun at 782 ± 5 nm. From the SES data, we obtained a new time series of the solar spectral irradiance at 782 nm from 2010 to 2014. SES observations provided a qualitatively consistent evolution of the solar spectral irradiance variability at 782 nm during solar cycle 24. Comparisons will be made with the Spectral And Total Irradiance REconstruction for the Satellite era (SATIRE-S) semi-empirical model and with the Spectral Irradiance Monitor (SIM) instrument on-board the Solar Radiation and Climate Experiment (SORCE) satellite. These data will help to improve the representation of the solar forcing in the IPSL Global Circulation Model.

  10. Imager-to-Radiometer In-flight Cross Calibration: RSP Radiometric Comparison with Airborne and Satellite Sensors

    NASA Technical Reports Server (NTRS)

    McCorkel, Joel; Cairns, Brian; Wasilewski, Andrzej

    2016-01-01

This work develops a method to compare the radiometric calibration between a radiometer and imagers hosted on aircraft and satellites. The radiometer is the airborne Research Scanning Polarimeter (RSP), which takes multi-angle, photo-polarimetric measurements in several spectral channels. The RSP measurements used in this work were coincident with measurements made by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), which was on the same aircraft. These airborne measurements were also coincident with an overpass of the Landsat 8 Operational Land Imager (OLI). First, we compare the RSP and OLI radiance measurements to AVIRIS, since the spectral response of the multispectral instruments can be used to synthesize a spectrally equivalent signal from the imaging spectrometer data. We then explore a method that uses AVIRIS as a transfer between RSP and OLI to show that the radiometric traceability of a satellite-based imager can be used to calibrate a radiometer despite differences in spectral channel sensitivities. This calibration transfer shows agreement within the combined uncertainties of the various instruments for most spectral channels.
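The band-synthesis step described above, weighting imaging-spectrometer radiance by a multispectral band's relative spectral response (RSR), can be sketched as follows. The wavelength grid, spectrum and RSR shape are all invented for illustration; real RSR curves come from the instrument characterization.

```python
import numpy as np

def synthesize_band(wl, radiance, rsr_wl, rsr):
    """Band-equivalent radiance: weight a hyperspectral spectrum by a sensor's
    relative spectral response, then normalize (rectangle rule on a uniform
    wavelength grid)."""
    r = np.interp(wl, rsr_wl, rsr, left=0.0, right=0.0)  # RSR on the HS grid
    return float(np.sum(r * radiance) / np.sum(r))

wl = np.arange(400.0, 701.0, 10.0)               # hypothetical 10 nm HS grid
radiance = 100.0 + 0.1 * (wl - 400.0)            # synthetic sloped spectrum
rsr_wl = np.array([530.0, 540.0, 580.0, 590.0])  # boxcar-like band near 560 nm
rsr = np.array([0.0, 1.0, 1.0, 0.0])
l_band = synthesize_band(wl, radiance, rsr_wl, rsr)
```

Applying this to every band of the multispectral instrument yields the "spectrally equivalent signal" against which its measured radiances can be compared.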

  11. CMOS image sensors as an efficient platform for glucose monitoring.

    PubMed

    Devadhasan, Jasmine Pramila; Kim, Sanghyo; Choi, Cheol Soo

    2013-10-07

Complementary metal oxide semiconductor (CMOS) image sensors have been used previously in the analysis of biological samples. In the present study, a CMOS image sensor was used to monitor the concentration of oxidized mouse plasma glucose (86-322 mg dL(-1)) based on photon count variation. Measurement of the concentration of oxidized glucose was dependent on changes in color intensity; color intensity increased with increasing glucose concentration. The high color density of glucose strongly prevented photons from passing through the polydimethylsiloxane (PDMS) chip, which suggests that the photon count was governed by color intensity. Photons were detected by a photodiode in the CMOS image sensor and converted to digital numbers by an analog-to-digital converter (ADC). Additionally, UV-spectral analysis and time-dependent photon analysis proved the efficiency of the detection system. This simple, effective, and consistent method for glucose measurement shows that CMOS image sensors are efficient devices for monitoring glucose in point-of-care applications.

  12. Land mine detection using multispectral image fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, G.A.; Sengupta, S.K.; Aimonetti, W.D.

    1995-03-29

Our system fuses information contained in registered images from multiple sensors to reduce the effects of clutter and improve the ability to detect surface and buried land mines. The sensor suite currently consists of a camera that acquires images in six bands (400 nm, 500 nm, 600 nm, 700 nm, 800 nm and 900 nm). Past research has shown that it is extremely difficult to distinguish land mines from background clutter in images obtained from a single sensor. It is hypothesized, however, that information fused from a suite of various sensors is likely to provide better detection reliability, because the suite of sensors detects a variety of physical properties that are more separable in feature space. The materials surrounding the mines can include natural materials (soil, rocks, foliage, water, etc.) and some artifacts. We use a supervised learning pattern recognition approach to detecting the metal and plastic land mines. The overall process consists of four main parts: preprocessing, feature extraction, feature selection, and classification. These parts are used in a two-step process to classify a subimage. We extract features from the images, and use feature selection algorithms to select only the most important features according to their contribution to correct detections. This allows us to reduce computational complexity and determine which of the spectral bands add value to the detection system. The most important features from the various sensors are fused using a supervised learning pattern classifier (the probabilistic neural network). We present results of experiments to detect land mines from real data collected from an airborne platform, and evaluate the usefulness of fusing feature information from multiple spectral bands.
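The probabilistic neural network used as the fusion classifier is essentially a Gaussian Parzen-window density estimator per class: each training sample contributes a kernel, and the test sample is assigned to the class with the largest average kernel response. A minimal sketch, with made-up two-band features rather than the study's data:

```python
import numpy as np

def pnn_classify(x, train_x, train_y, sigma=0.5):
    """Probabilistic neural network: a Gaussian Parzen-window density per
    class; the winner is the class with the largest mean kernel response."""
    best_c, best_s = None, -1.0
    for c in np.unique(train_y):
        pts = train_x[train_y == c]
        d2 = np.sum((pts - x) ** 2, axis=1)          # squared distances
        s = float(np.mean(np.exp(-d2 / (2.0 * sigma ** 2))))
        if s > best_s:
            best_c, best_s = int(c), s
    return best_c

# Made-up two-band features: class 0 = background clutter, class 1 = mine-like
train_x = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 1.0], [1.0, 0.9]])
train_y = np.array([0, 0, 1, 1])
label = pnn_classify(np.array([0.95, 0.95]), train_x, train_y)
```

In the fused system, the feature vector would concatenate the selected features from the different spectral bands rather than raw pixel values.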

  13. Information-efficient spectral imaging sensor

    DOEpatents

    Sweatt, William C.; Gentry, Stephen M.; Boye, Clinton A.; Grotbeck, Carter L.; Stallard, Brian R.; Descour, Michael R.

    2003-01-01

A programmable optical filter for use in multispectral and hyperspectral imaging. The filter splits the light collected by an optical telescope into two channels for each of the pixels in a row in a scanned image, one channel to handle the positive elements of a spectral basis filter and one for the negative elements of the spectral basis filter. Each channel for each pixel disperses its light into n spectral bins, with the light in each bin being attenuated in accordance with the value of the associated positive or negative element of the spectral basis vector. The spectral basis vector is constructed so that its positive elements emphasize the presence of a target and its negative elements emphasize the presence of the constituents of the background of the imaged scene. The attenuated light in the channels is re-imaged onto separate detectors for each pixel, and the signals from the detectors are then combined to give an indication of the presence or absence of the target in each pixel of the scanned scene. This system provides for a very efficient optical determination of the presence of the target, as opposed to the data-intensive manipulations that are required in conventional hyperspectral imaging systems.
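The optical computation described here, splitting a spectral basis vector into its positive and negative parts, attenuating the binned light accordingly, and differencing two detectors, is numerically equivalent to a single inner product. A small sketch with an invented 6-bin basis vector and pixel spectrum:

```python
import numpy as np

# Hypothetical 6-bin spectral basis vector: positive entries emphasize the
# target, negative entries the background constituents.
b = np.array([0.8, -0.3, 0.5, -0.6, 0.2, -0.1])
b_pos = np.clip(b, 0.0, None)    # attenuation weights, channel 1
b_neg = np.clip(-b, 0.0, None)   # attenuation weights, channel 2

x = np.array([1.0, 0.4, 0.9, 0.2, 0.7, 0.5])  # one pixel's binned spectrum

# Each channel optically sums its attenuated bins onto one detector;
# differencing the two detector signals yields the projection b . x.
d1 = float(b_pos @ x)
d2 = float(b_neg @ x)
score = d1 - d2
```

The attraction of the optical version is that this projection is computed in light before detection, so only two values per pixel, rather than n spectral samples, ever need to be digitized.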

  14. ACTIM: an EDA initiated study on spectral active imaging

    NASA Astrophysics Data System (ADS)

    Steinvall, O.; Renhorn, I.; Ahlberg, J.; Larsson, H.; Letalick, D.; Repasi, E.; Lutzmann, P.; Anstett, G.; Hamoir, D.; Hespel, L.; Boucher, Y.

    2010-10-01

This paper will describe ongoing work from an EDA-initiated study on active imaging, with emphasis on using multi- or broadband spectral lasers and receivers. Present laser-based imaging and mapping systems are mostly based on fixed-frequency lasers. On the other hand, great progress has recently occurred in passive multi- and hyperspectral imaging, with applications ranging from environmental monitoring and geology to mapping, military surveillance, and reconnaissance. Databases of spectral signatures allow the possibility to discriminate between different materials in the scene. Present multi- and hyperspectral sensors mainly operate in the visible and short-wavelength region (0.4-2.5 μm) and rely on solar radiation, giving rise to shortcomings due to shadows, clouds, illumination angles and lack of night operation. Active spectral imaging, however, will largely overcome these difficulties through complete control of the illumination. Active illumination enables spectral night and low-light operation, besides being a robust way of obtaining polarization and high-resolution 2D/3D information. Recent development of broadband lasers and advanced imaging 3D focal plane arrays has led to new opportunities for advanced spectral and polarization imaging with high range resolution. Fusing the knowledge of ladar and passive spectral imaging will result in new capabilities in the field of EO sensing, to be shown in the study. We will present an overview of technology, systems and applications for active spectral imaging and propose future activities in connection with some prioritized applications.

  15. Sensing, Spectra and Scaling: What's in Store for Land Observations

    NASA Technical Reports Server (NTRS)

    Goetz, Alexander F. H.

    2001-01-01

Bill Pecora's 1960's vision of the future, using spacecraft-based sensors for mapping the environment and exploring for resources, is being implemented today. New technology has produced better sensors in space, such as the Landsat Thematic Mapper (TM) and SPOT, and creative researchers are continuing to find new applications. However, with existing sensors, and those intended for launch in this century, the potential for extracting information from the land surface is far from being exploited. The most recent technology development is imaging spectrometry, the acquisition of images in hundreds of contiguous spectral bands, such that for any pixel a complete reflectance spectrum can be acquired. Experience with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) has shown that, with proper attention paid to absolute calibration, it is possible to acquire apparent surface reflectance to 5% accuracy without any ground-based measurement. The data reduction incorporates an educated guess of the aerosol scattering, development of a precipitable water vapor map from the data and mapping of cirrus clouds in the 1.38 micrometer band. This is not possible with TM. The pixel size in images of the earth plays an important role in the type and quality of information that can be derived. Less understood is the coupling between spatial and spectral resolution in a sensor. Recent work has shown that in processing the data to derive the relative abundance of materials in a pixel, also known as unmixing, the pixel size is an important parameter. A variance in the relative abundance of materials among the pixels is necessary to be able to derive the endmembers, or pure material constituent spectra. In most cases, the 1 km pixel size for the Earth Observing System Moderate Resolution Imaging Spectroradiometer (MODIS) instrument is too large to meet the variance criterion. 
A pointable, high spatial and spectral resolution imaging spectrometer in orbit will be necessary to make the next major step in our understanding of the solid earth surface and its changing face.

  16. Contrast performance modeling of broadband reflective imaging systems with hypothetical tunable filter fore-optics

    NASA Astrophysics Data System (ADS)

    Hodgkin, Van A.

    2015-05-01

    Most mass-produced, commercially available and fielded military reflective imaging systems operate across broad swaths of the visible, near infrared (NIR), and shortwave infrared (SWIR) wavebands without any spectral selectivity within those wavebands. In applications that employ these systems, it is not uncommon to be imaging a scene in which the image contrasts between the objects of interest, i.e., the targets, and the objects of little or no interest, i.e., the backgrounds, are sufficiently low to make target discrimination difficult or uncertain. This can occur even when the spectral distribution of the target and background reflectivity across the given waveband differ significantly from each other, because the fundamental components of broadband image contrast are the spectral integrals of the target and background signatures. Spectral integration by the detectors tends to smooth out any differences. Hyperspectral imaging is one approach to preserving, and thus highlighting, spectral differences across the scene, even when the waveband integrated signatures would be about the same, but it is an expensive, complex, noncompact, and untimely solution. This paper documents a study of how the capability to selectively customize the spectral width and center wavelength with a hypothetical tunable fore-optic filter would allow a broadband reflective imaging sensor to optimize image contrast as a function of scene content and ambient illumination.
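The point about spectral integration washing out contrast can be made concrete with a toy calculation: two signatures that differ strongly only in a narrow region give near-zero broadband contrast but substantial contrast through a well-placed narrow filter. All spectra below are synthetic and purely illustrative.

```python
import numpy as np

wl = np.linspace(400.0, 900.0, 501)  # wavelength grid in nm (1 nm steps)
# Synthetic reflectance signatures: same baseline, peaks in different places.
target = 0.30 + 0.15 * np.exp(-(((wl - 650.0) / 30.0) ** 2))
backgd = 0.30 + 0.15 * np.exp(-(((wl - 550.0) / 30.0) ** 2))

def band_contrast(lo, hi):
    """Michelson contrast after integrating each signature over [lo, hi]."""
    m = (wl >= lo) & (wl <= hi)
    t, b = float(target[m].sum()), float(backgd[m].sum())
    return abs(t - b) / (t + b)

broadband = band_contrast(400.0, 900.0)  # whole waveband: the peaks wash out
tuned = band_contrast(620.0, 680.0)      # narrow filter on the target feature
```

A tunable fore-optic filter would, in effect, let the sensor search over (lo, hi) for the band that maximizes this contrast given the current scene and illumination.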

  17. Multimodal device for assessment of skin malformations

    NASA Astrophysics Data System (ADS)

    Bekina, A.; Garancis, V.; Rubins, U.; Spigulis, J.; Valeine, L.; Berzina, A.

    2013-11-01

A variety of multi-spectral imaging devices is commercially available and used for skin diagnostics and monitoring; however, an alternative cost-efficient device can provide an advanced spectral analysis of skin. A compact multimodal device for diagnosis of pigmented skin lesions was developed and tested. A polarized LED light source illuminates the skin surface at four different wavelengths: blue (450 nm), green (545 nm), red (660 nm) and infrared (940 nm). Spectra of the light reflected from a 25 mm wide skin spot are imaged by a CMOS sensor. Four spectral images are obtained for mapping of the main skin chromophores. The specific differences in chromophore distribution between different skin malformations were analyzed, and information on subcutaneous structures was subsequently extracted.
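Chromophore mapping from a few spectral images is typically posed as a per-pixel linear unmixing under a Beer-Lambert-type model: the measured optical density at each wavelength is a weighted sum of chromophore concentrations. The sketch below uses two wavelengths and two chromophores with made-up extinction coefficients; the actual device uses four wavelengths and published coefficients for the skin chromophores.

```python
import numpy as np

# Beer-Lambert-type sketch: optical density at wavelength i is
# OD_i = sum_k eps[i, k] * c[k]. The extinction values are invented.
eps = np.array([[2.0, 0.5],   # wavelength 1: [chromophore A, chromophore B]
                [0.8, 1.5]])  # wavelength 2
c_true = np.array([0.3, 0.7])
od = eps @ c_true             # "measured" optical densities for one pixel

c_est = np.linalg.solve(eps, od)  # per-pixel chromophore concentrations
```

With more wavelengths than chromophores, as in the four-wavelength device, the square solve would be replaced by a least-squares fit at each pixel.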

  18. Imaging spectrometry of the Earth and other solar system bodies

    NASA Technical Reports Server (NTRS)

    Vane, Gregg

    1993-01-01

Imaging spectrometry is a relatively new tool for remote sensing of the Earth and other bodies of the solar system. The technique dates back to the late 1970's and early 1980's. It is a natural extension of the earlier multi-spectral imagers developed for remote sensing that acquire images in a few, usually broad, spectral bands. Imaging spectrometers combine aspects of classical spectrometers and imaging systems, making it possible to acquire literally hundreds of images of an object, each image in a separate, narrow spectral band. It is thus possible to perform spectroscopy on a pixel-by-pixel basis with the data acquired with an imaging spectrometer. Two imaging spectrometers have flown in space and several others are planned for future Earth and planetary missions. The French-built Phobos Infrared Spectrometer (ISM) was part of the payload of the Soviet Mars mission in 1988, and the JPL-built Near Infrared Mapping Spectrometer (NIMS) is currently en route to Jupiter aboard the Galileo spacecraft. Several airborne imaging spectrometers have been built in the past decade, including the JPL-built Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), which is the only such sensor that covers the full solar reflected portion of the spectrum in narrow, contiguous spectral bands. NASA plans two imaging spectrometers for its Earth Observing System, the Moderate and the High Resolution Imaging Spectrometers (MODIS and HIRIS). A brief overview of the applications of imaging spectrometry to Earth science will be presented to illustrate the value of the tool to remote sensing and indicate the types of measurements that are required. The system design for AVIRIS and a planetary imaging spectrometer will be presented to illustrate the engineering considerations and challenges that must be met in building such instruments. 
Several key sensor technology areas will be discussed in which miniaturization and/or enhanced performance through micromachining and nanofabrication may allow smaller, more robust, and more capable imaging spectrometers to be built in the future.

  19. Application and evaluation of ISVR method in QuickBird image fusion

    NASA Astrophysics Data System (ADS)

    Cheng, Bo; Song, Xiaolu

    2014-05-01

QuickBird satellite images are widely used in many fields, and applications have put forward high requirements for the integration of the spatial and spectral information of the imagery. A fusion method for high-resolution remote sensing images based on ISVR is identified in this study. The core principle of ISVR is to take advantage of radiometric calibration to remove the effects of the differing gains and errors of the satellites' sensors. After transformation from DN to radiance, the multi-spectral image's energy is used to simulate the panchromatic band. A linear regression analysis is carried out through the simulation process to find a new synthetic panchromatic image, which is highly linearly correlated with the original panchromatic image. In order to evaluate, test and compare the algorithm results, this paper used ISVR and two other fusion methods to give a comparative study of the spatial and spectral information, taking the average gradient and the correlation coefficient as indicators. Experiments showed that this method could significantly improve the quality of the fused image, especially in preserving spectral information, maximizing the spectral information of the original multispectral images while maintaining abundant spatial information.
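The pan-simulation step, regressing the multispectral radiances against the panchromatic band to obtain a synthetic, highly correlated pan image, can be sketched as an ordinary least-squares fit. All radiances and weights below are synthetic placeholders, not QuickBird values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-pixel radiances: 4 multispectral bands and a "true" pan band
# that is approximately a weighted sum of them. All numbers are invented.
ms = rng.uniform(50.0, 200.0, size=(1000, 4))
pan = ms @ np.array([0.25, 0.35, 0.30, 0.10]) + rng.normal(0.0, 1.0, 1000)

# Least-squares weights that synthesize a panchromatic image from the bands
w, *_ = np.linalg.lstsq(ms, pan, rcond=None)
pan_sim = ms @ w

corr = float(np.corrcoef(pan, pan_sim)[0, 1])
```

The fitted weights then define the synthetic pan image that the fusion step substitutes for, or compares against, the original panchromatic band.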

  20. Photographic films as remote sensors for measuring albedos of terrestrial surfaces

    NASA Technical Reports Server (NTRS)

    Pease, S. R.; Pease, R. W.

    1972-01-01

To test the feasibility of remotely measuring the albedos of terrestrial surfaces from photographic images, an inquiry was carried out at ground level using several representative common surface targets. Problems of making such measurements with a spectrally selective sensor, such as photographic film, have been compared to previous work utilizing silicon cells. Two photographic approaches have been developed: a multispectral method which utilizes two or three photographic images made through conventional multispectral filters, and a single-shot method which utilizes the broad spectral sensitivity of black-and-white infrared film. Sensitometry related to the methods substitutes a Log Albedo scale for the conventional Log Exposure for creating characteristic curves. Certain constraints caused by illumination geometry are discussed.

  1. Comparison of fractal dimensions based on segmented NDVI fields obtained from different remote sensors.

    NASA Astrophysics Data System (ADS)

    Alonso, C.; Benito, R. M.; Tarquis, A. M.

    2012-04-01

    Satellite image data have become an important source of information for monitoring vegetation and mapping land cover at several scales. Besides this, the distribution and phenology of vegetation are largely associated with climate, terrain characteristics and human activity. Various vegetation indices have been developed for qualitative and quantitative assessment of vegetation using remote spectral measurements. In particular, sensors with spectral bands in the red (RED) and near-infrared (NIR) lend themselves well to vegetation monitoring, and the Normalized Difference Vegetation Index (NDVI), [(NIR - RED) / (NIR + RED)], based on these bands, has been widely used. Given that the characteristics of the RED and NIR spectral bands vary distinctly from sensor to sensor, NDVI values based on data from different instruments will not be directly comparable. The spatial resolution also varies significantly between sensors, as well as within a given scene in the case of wide-angle and oblique sensors. As a result, NDVI values will vary according to combinations of the heterogeneity and scale of terrestrial surfaces and pixel footprint sizes. Therefore, the question arises as to the impact of differences in spectral and spatial resolution on vegetation indices like the NDVI. The aim of this study is to compare the NDVI values of two different sensors at different spatial resolutions. Scaling behaviors are increasingly understood to be the result of nonlinear dynamic mechanisms repeating scale after scale from large to small scales, leading to non-classical resolution dependencies. In the remote sensing framework, the main characteristic of sensor images is the high local variability in their values. This variability is a consequence of the increase in spatial and radiometric resolution, which implies an increase in complexity that it is necessary to characterize. Fractal and multifractal techniques have proven useful for extracting such complexity from remote sensing images and are applied in this study to examine the scaling behavior of each sensor in terms of generalized fractal dimensions. The studied area is located in the provinces of Caceres and Salamanca (west of the Iberian Peninsula) and covers an extension of 32 x 32 km2. The altitude in the area varies from 1,560 to 320 m, comprising natural vegetation in the mountain area (forest and bushes) and agricultural crops in the valleys. Scaling analyses were applied to the NDVI fields derived from Landsat-5 and MODIS TERRA over the same region with one day of difference, 13 and 12 July 2003, respectively. From these images the area of interest was selected, obtaining 1024 x 1024 pixels for the Landsat image and 128 x 128 pixels for the MODIS image. This implies a resolution of 250 x 250 m for MODIS and 30 x 30 m for Landsat. From the reflectance data obtained from the NIR and RED bands, NDVI was calculated for each image, focusing this study on the 0.2 to 0.5 range of values. Once both NDVI fields were obtained, several fractal dimensions were estimated in each one, segmenting the values into 0.20-0.25, 0.25-0.30 and so on, up to 0.45-0.50. In all the scaling analyses the scale length was expressed in meters, not in pixels, to make the comparison between the two sensors possible. Results are discussed. Acknowledgements: This work has been supported by the Spanish MEC under Projects No. AGL2010-21501/AGR, MTM2009-14621 and i-MATH No. CSD2006-00032.
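
    The two core computations described above can be sketched with synthetic data. The NDVI field, the segmentation thresholds and the box-counting estimator below are illustrative stand-ins, not the Landsat/MODIS processing used in the study:

```python
import numpy as np

# Synthetic sketch: compute NDVI from RED/NIR reflectance, segment one value
# range, then estimate a box-counting fractal dimension of the binary set.
rng = np.random.default_rng(1)
size = 128  # e.g. the 128 x 128 MODIS window
red = rng.uniform(0.05, 0.3, (size, size))
nir = rng.uniform(0.2, 0.6, (size, size))
ndvi = (nir - red) / (nir + red)

# Segment one NDVI class, e.g. 0.20-0.25
mask = (ndvi >= 0.20) & (ndvi < 0.25)

def box_count_dimension(mask):
    """Slope of log N(s) vs log(1/s) over dyadic box sizes."""
    n = mask.shape[0]
    sizes, counts = [], []
    s = n
    while s >= 2:
        # count boxes of side s containing at least one occupied pixel
        view = mask[:n - n % s, :n - n % s].reshape(n // s, s, n // s, s)
        occupied = view.any(axis=(1, 3)).sum()
        sizes.append(s)
        counts.append(max(occupied, 1))
        s //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)),
                          np.log(np.array(counts)), 1)
    return slope

d = box_count_dimension(mask)
print(round(d, 2))
```

    For a real NDVI field the estimate would be repeated per segment (0.20-0.25, 0.25-0.30, ...) and per sensor, with box sizes expressed in meters rather than pixels.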

  2. Concept and integration of an on-line quasi-operational airborne hyperspectral remote sensing system

    NASA Astrophysics Data System (ADS)

    Schilling, Hendrik; Lenz, Andreas; Gross, Wolfgang; Perpeet, Dominik; Wuttke, Sebastian; Middelmann, Wolfgang

    2013-10-01

    Modern mission characteristics require the use of advanced imaging sensors in reconnaissance. In particular, high spatial and high spectral resolution imaging provides promising data for many tasks such as classification and detecting objects of military relevance, such as camouflaged units or improvised explosive devices (IEDs). Especially in asymmetric warfare with highly mobile forces, intelligence, surveillance and reconnaissance (ISR) needs to be available close to real-time. This demands the use of unmanned aerial vehicles (UAVs) in combination with downlink capability. The system described in this contribution is integrated in a wing pod for ease of installation and calibration. It is designed for the real-time acquisition and analysis of hyperspectral data. The main component is a Specim AISA Eagle II hyperspectral sensor, covering the visible and near-infrared (VNIR) spectral range with a spectral resolution up to 1.2 nm and 1024 pixel across track, leading to a ground sampling distance below 1 m at typical altitudes. The push broom characteristic of the hyperspectral sensor demands an inertial navigation system (INS) for rectification and georeferencing of the image data. Additional sensors are a high resolution RGB (HR-RGB) frame camera and a thermal imaging camera. For on-line application, the data is preselected, compressed and transmitted to the ground control station (GCS) by an existing system in a second wing pod. The final result after data processing in the GCS is a hyperspectral orthorectified GeoTIFF, which is filed in the ERDAS APOLLO geographical information system. APOLLO allows remote access to the data and offers web-based analysis tools. The system is quasi-operational and was successfully tested in May 2013 in Bremerhaven, Germany.

  3. Absolute Calibration of Optical Satellite Sensors Using Libya 4 Pseudo Invariant Calibration Site

    NASA Technical Reports Server (NTRS)

    Mishra, Nischal; Helder, Dennis; Angal, Amit; Choi, Jason; Xiong, Xiaoxiong

    2014-01-01

    The objective of this paper is to report improvements in an empirical absolute calibration model developed at South Dakota State University using the Libya 4 (+28.55 deg, +23.39 deg) pseudo invariant calibration site (PICS). The approach is based on using Terra MODIS as the radiometer to develop an absolute calibration model for the spectral channels covered by this instrument from the visible to the shortwave infrared. Earth Observing One (EO-1) Hyperion, with a spectral resolution of 10 nm, was used to extend the model to cover the visible and near-infrared regions. A simple Bidirectional Reflectance Distribution Function (BRDF) model was generated using Terra Moderate Resolution Imaging Spectroradiometer (MODIS) observations over Libya 4, and the resulting model was validated with nadir data acquired from satellite sensors such as Aqua MODIS and the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+). The improvements in the absolute calibration model to account for the BRDF due to off-nadir measurements and annual variations in the atmosphere are summarized. BRDF models for off-nadir viewing angles have been derived using the measurements from EO-1 Hyperion. In addition to L7 ETM+, measurements from other sensors such as Aqua MODIS, UK-2 Disaster Monitoring Constellation (DMC), ENVISAT Medium Resolution Imaging Spectrometer (MERIS) and the Operational Land Imager (OLI) onboard Landsat 8 (L8), launched in February 2013, were employed to validate the model. These satellite sensors differ in the width of their spectral bandpasses, overpass time, off-nadir viewing capabilities, spatial resolution, temporal revisit time, etc. The results demonstrate that the proposed empirical calibration model has an accuracy of the order of 3% with an uncertainty of about 2% for the sensors used in the study.
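
    As a rough illustration of fitting a simple BRDF model of the kind mentioned above, the sketch below fits a quadratic in solar zenith angle to synthetic TOA reflectance over a stable site. The coefficients, angles and noise level are invented, not the SDSU Libya 4 model:

```python
import numpy as np

# Synthetic BRDF-fitting sketch: TOA reflectance over a bright desert PICS
# modeled as a quadratic function of solar zenith angle.
rng = np.random.default_rng(2)
sza = np.deg2rad(rng.uniform(15.0, 45.0, 200))   # solar zenith angles (rad)
true_coeffs = np.array([0.004, -0.02, 0.55])      # quadratic, linear, offset
rho = np.polyval(true_coeffs, sza) + rng.normal(0, 0.002, 200)

fit = np.polyfit(sza, rho, 2)                     # recover the model
pred = np.polyval(fit, sza)

# Residual scatter relative to the mean reflectance (a model-accuracy analog)
rel_rms = np.sqrt(np.mean((pred - rho) ** 2)) / rho.mean()
print(f"{rel_rms:.4f}")
```

    In the actual work the model is built per spectral band and extended with view-angle and seasonal terms; this only shows the regression form.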

  4. [On-Orbit Multispectral Sensor Characterization Based on Spectral Tarps].

    PubMed

    Li, Xin; Zhang, Li-ming; Chen, Hong-yao; Xu, Wei-wei

    2016-03-01

    Multispectral remote sensing has become a primary means of research in biomass monitoring, climate change, disaster prediction and related fields. The spectral sensitivity of a sensor is essential to the quantitative analysis of remote sensing data. When a sensor operates in space, it is influenced by cosmic radiation, severe temperature changes, chemical molecular contamination, cosmic dust and other factors. As a result, its spectral sensitivity degrades over time, which has great implications for the accuracy and consistency of the physical measurements. This paper presents a method for characterizing this degradation based on man-made spectral targets. First, a degradation model is established. Then, combined with the equivalent reflectance of the spectral targets measured and inverted from the image, the degradation can be characterized. Simulation and on-orbit experiment results showed that, using the proposed method, changes in center wavelength and bandwidth can be monitored. The proposed method has great significance for improving the accuracy of long time series remote sensing data products and the comprehensive utilization of multi-sensor data products.
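
    A toy illustration of the observable behind this method: the band-equivalent reflectance of a sloped spectral target changes when the band's center wavelength drifts. All numbers below (Gaussian spectral response, linear target spectrum, 5 nm drift) are assumptions for illustration, not the paper's targets or bands:

```python
import numpy as np

# Band-equivalent reflectance = spectral-response-weighted average of the
# target reflectance; a center-wavelength drift shifts it on a sloped target.
wl = np.arange(400.0, 901.0, 1.0)                 # wavelength grid, nm

def gaussian_srf(center, fwhm):
    sigma = fwhm / 2.3548
    srf = np.exp(-0.5 * ((wl - center) / sigma) ** 2)
    return srf / srf.sum()                        # normalized response

# A man-made target whose reflectance ramps linearly across the band
target = 0.1 + 0.8 * (wl - 400.0) / 500.0

def equivalent_reflectance(center, fwhm=40.0):
    return float(np.sum(gaussian_srf(center, fwhm) * target))

nominal = equivalent_reflectance(650.0)
shifted = equivalent_reflectance(655.0)           # 5 nm center-wavelength drift
print(round(shifted - nominal, 4))                # → 0.008
```

    Because a flat target would show no change, targets with sloped spectra are the ones that reveal center-wavelength drift.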

  5. Apollo-NADP(+): a spectrally tunable family of genetically encoded sensors for NADP(+).

    PubMed

    Cameron, William D; Bui, Cindy V; Hutchinson, Ashley; Loppnau, Peter; Gräslund, Susanne; Rocheleau, Jonathan V

    2016-04-01

    NADPH-dependent antioxidant pathways have a critical role in scavenging hydrogen peroxide (H2O2) produced by oxidative phosphorylation. Inadequate scavenging results in H2O2 accumulation and can cause disease. To measure NADPH/NADP(+) redox states, we explored genetically encoded sensors based on steady-state fluorescence anisotropy due to FRET (fluorescence resonance energy transfer) between homologous fluorescent proteins (homoFRET); we refer to these sensors as Apollo sensors. We created an Apollo sensor for NADP(+) (Apollo-NADP(+)) that exploits NADP(+)-dependent homodimerization of enzymatically inactive glucose-6-phosphate dehydrogenase (G6PD). This sensor is reversible, responsive to glucose-stimulated metabolism and spectrally tunable for compatibility with many other sensors. We used Apollo-NADP(+) to study beta cells responding to oxidative stress and demonstrated that NADPH is significantly depleted before H2O2 accumulation by imaging a Cerulean-tagged version of Apollo-NADP(+) with the H2O2 sensor HyPer.

  6. Microsoft Kinect Visual and Depth Sensors for Breathing and Heart Rate Analysis

    PubMed Central

    Procházka, Aleš; Schätz, Martin; Vyšata, Oldřich; Vališ, Martin

    2016-01-01

    This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results that were obtained verify the correspondence between the evaluation of the breathing frequency that was obtained from the image and infrared data of the mouth area and from the thorax movement that was recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements by Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human–machine interaction. PMID:27367687
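
    The spectral-analysis step for breathing frequency can be sketched as an FFT peak search on a chest-distance time series. The signal below is synthetic, not actual Kinect depth data, and the frame rate and band limits are illustrative:

```python
import numpy as np

# Estimate breathing frequency as the dominant FFT peak of a (synthetic)
# chest-distance signal within a physiological frequency band.
fs = 30.0                                  # Kinect-like frame rate, Hz
t = np.arange(0, 60, 1 / fs)               # one minute of samples
f_breath = 0.25                            # 15 breaths per minute
depth = (800 + 5 * np.sin(2 * np.pi * f_breath * t)
         + np.random.default_rng(3).normal(0, 0.5, t.size))

sig = depth - depth.mean()                 # remove DC before the FFT
spec = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(sig.size, 1 / fs)

# Restrict to a physiological band (0.1-0.7 Hz, i.e. 6-42 breaths/min)
band = (freqs >= 0.1) & (freqs <= 0.7)
f_est = freqs[band][np.argmax(spec[band])]
print(round(f_est * 60))                   # → 15 (breaths per minute)
```

    Heart rate estimation follows the same pattern with a higher frequency band and the mouth-area intensity signal instead of depth.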

  7. Microsoft Kinect Visual and Depth Sensors for Breathing and Heart Rate Analysis.

    PubMed

    Procházka, Aleš; Schätz, Martin; Vyšata, Oldřich; Vališ, Martin

    2016-06-28

    This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results that were obtained verify the correspondence between the evaluation of the breathing frequency that was obtained from the image and infrared data of the mouth area and from the thorax movement that was recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements by Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human-machine interaction.

  8. Using hyperspectral remote sensing for land cover classification

    NASA Astrophysics Data System (ADS)

    Zhang, Wendy W.; Sriharan, Shobha

    2005-01-01

    This project used a hyperspectral data set to classify land cover using remote sensing techniques. Many different earth-sensing satellites, with diverse sensors mounted on sophisticated platforms, are currently in earth orbit. These sensors are designed to cover a wide range of the electromagnetic spectrum and are generating enormous amounts of data that must be processed, stored, and made available to the user community. The Airborne Visible-Infrared Imaging Spectrometer (AVIRIS) collects data in 224 contiguous bands, each approximately 9.6 nm wide, between 0.40 and 2.45 μm. Hyperspectral sensors acquire images in many very narrow, contiguous spectral bands throughout the visible, near-IR, and thermal IR portions of the spectrum. The unsupervised image classification procedure automatically categorizes the pixels in an image into land cover classes or themes. Experiments on using hyperspectral remote sensing for land cover classification were conducted during the 2003 and 2004 NASA Summer Faculty Fellowship Program at Stennis Space Center. Research Systems Inc.'s (RSI) ENVI software package was used in this application framework. In this application, emphasis was placed on: (1) spectrally oriented classification procedures for land cover mapping, particularly supervised surface classification using AVIRIS data; and (2) identifying data endmembers.
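
    Unsupervised classification of the kind described can be sketched as a small k-means clustering of pixel spectra. The two synthetic "signatures" below stand in for real AVIRIS classes; this is not ENVI's implementation:

```python
import numpy as np

# Cluster synthetic pixel spectra into land-cover classes with plain k-means.
rng = np.random.default_rng(4)

# Two synthetic "land cover" spectral signatures across 20 bands
veg = np.linspace(0.05, 0.5, 20)           # vegetation-like ramp
soil = np.full(20, 0.3)                    # flat soil-like spectrum
pixels = np.vstack([veg + rng.normal(0, 0.02, (100, 20)),
                    soil + rng.normal(0, 0.02, (100, 20))])

def kmeans(x, centers, iters=20):
    """Plain k-means with given initial centers."""
    centers = centers.copy()
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([x[labels == c].mean(0)
                            for c in range(len(centers))])
    return labels

# Initialize with one sample from each end of the pixel stack
labels = kmeans(pixels, pixels[[0, -1]])
# Each synthetic class should land entirely in its own cluster
print(len(set(labels[:100])), len(set(labels[100:])))   # → 1 1
```

    Supervised classification differs only in that the class signatures come from training regions rather than from clustering.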

  9. Comparative Analysis of EO-1 ALI and Hyperion, and Landsat ETM+ Data for Mapping Forest Crown Closure and Leaf Area Index

    PubMed Central

    Pu, Ruiliang; Gong, Peng; Yu, Qian

    2008-01-01

    In this study, a comparative analysis of the capabilities of three sensors for mapping forest crown closure (CC) and leaf area index (LAI) was conducted. The three sensors are the Hyperspectral Imager (Hyperion) and Advanced Land Imager (ALI) onboard the EO-1 satellite, and the Landsat-7 Enhanced Thematic Mapper Plus (ETM+). A total of 38 mixed coniferous forest CC and 38 LAI measurements were collected at Blodgett Forest Research Station, University of California at Berkeley, USA. The analysis method consists of (1) extracting spectral vegetation indices (VIs), spectral texture information and maximum noise fractions (MNFs), (2) establishing multivariate prediction models, (3) predicting and mapping pixel-based CC and LAI values, and (4) validating the mapped CC and LAI results with field-validated, photo-interpreted CC and LAI values. The experimental results indicate that the Hyperion data are the most effective for mapping forest CC and LAI (CC mapped accuracy (MA) = 76.0%, LAI MA = 74.7%), followed by ALI data (CC MA = 74.5%, LAI MA = 70.7%), with ETM+ data being the least effective (CC MA = 71.1%, LAI MA = 63.4%). This analysis demonstrates that the Hyperion sensor outperforms the other two sensors, ALI and ETM+, because of its high spectral resolution with rich, subtle spectral information; its short-wave infrared bands, only slightly affected by the atmosphere, for constructing optimal VIs; and the larger number of MNFs available for establishing prediction models. Compared to ETM+ data, ALI data are better for mapping forest CC and LAI because ALI has more bands and higher signal-to-noise ratios than ETM+. PMID:27879906

  10. Comparative Analysis of EO-1 ALI and Hyperion, and Landsat ETM+ Data for Mapping Forest Crown Closure and Leaf Area Index.

    PubMed

    Pu, Ruiliang; Gong, Peng; Yu, Qian

    2008-06-06

    In this study, a comparative analysis of the capabilities of three sensors for mapping forest crown closure (CC) and leaf area index (LAI) was conducted. The three sensors are the Hyperspectral Imager (Hyperion) and Advanced Land Imager (ALI) onboard the EO-1 satellite, and the Landsat-7 Enhanced Thematic Mapper Plus (ETM+). A total of 38 mixed coniferous forest CC and 38 LAI measurements were collected at Blodgett Forest Research Station, University of California at Berkeley, USA. The analysis method consists of (1) extracting spectral vegetation indices (VIs), spectral texture information and maximum noise fractions (MNFs), (2) establishing multivariate prediction models, (3) predicting and mapping pixel-based CC and LAI values, and (4) validating the mapped CC and LAI results with field-validated, photo-interpreted CC and LAI values. The experimental results indicate that the Hyperion data are the most effective for mapping forest CC and LAI (CC mapped accuracy (MA) = 76.0%, LAI MA = 74.7%), followed by ALI data (CC MA = 74.5%, LAI MA = 70.7%), with ETM+ data being the least effective (CC MA = 71.1%, LAI MA = 63.4%). This analysis demonstrates that the Hyperion sensor outperforms the other two sensors, ALI and ETM+, because of its high spectral resolution with rich, subtle spectral information; its short-wave infrared bands, only slightly affected by the atmosphere, for constructing optimal VIs; and the larger number of MNFs available for establishing prediction models. Compared to ETM+ data, ALI data are better for mapping forest CC and LAI because ALI has more bands and higher signal-to-noise ratios than ETM+.
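
    Step (2) of the analysis method, a multivariate prediction model, can be sketched as an ordinary least-squares fit from vegetation indices to LAI. The 38 synthetic plots below only mirror the quoted sample size; the indices, coefficients and noise are invented:

```python
import numpy as np

# Multivariate linear prediction of LAI from vegetation indices (synthetic).
rng = np.random.default_rng(5)
n = 38
ndvi = rng.uniform(0.3, 0.9, n)
sr = (1 + ndvi) / (1 - ndvi)              # simple ratio, derived from NDVI here
X = np.column_stack([np.ones(n), ndvi, np.log(sr)])

# Synthetic "field" LAI generated from the same indices plus noise
lai = 1.0 + 4.0 * ndvi + 0.5 * np.log(sr) + rng.normal(0, 0.2, n)

beta, *_ = np.linalg.lstsq(X, lai, rcond=None)
pred = X @ beta

# A mapped-accuracy analog: 1 minus the relative RMSE of the predictions
ma = 1.0 - np.sqrt(np.mean((pred - lai) ** 2)) / lai.mean()
print(f"{ma:.2f}")
```

    In the study the predictors also include texture measures and MNF components, which is where Hyperion's extra information pays off.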

  11. GIFTS SM EDU Radiometric and Spectral Calibrations

    NASA Technical Reports Server (NTRS)

    Tian, J.; Reisse, R. A.; Johnson, D. G.; Gazarik, J. J.

    2007-01-01

    The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiance using a Fourier transform spectrometer (FTS). The GIFTS instrument gathers measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the processing algorithms involved in the calibration. The calibration procedures can be subdivided into three categories: the pre-calibration stage, the calibration stage, and finally, the post-calibration stage. Detailed derivations for each stage are presented in this paper.
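
    A common building block of FTS radiometric calibration is the two-point (hot/cold blackbody) linear rescaling. The sketch below shows that form with synthetic per-channel gains and offsets; it is not the GIFTS processing chain itself, which must also handle complex spectra and phase:

```python
import numpy as np

# Two-point radiometric calibration: scene measurements are linearly rescaled
# using views of two blackbody references with known radiance, which removes
# the unknown per-channel instrument gain and offset exactly.
rng = np.random.default_rng(6)
nchan = 100
gain = rng.uniform(0.8, 1.2, nchan)       # unknown per-channel gain
offset = rng.uniform(-5.0, 5.0, nchan)    # unknown per-channel offset

L_hot, L_cold = 120.0, 40.0               # known blackbody radiances
L_scene = rng.uniform(50.0, 110.0, nchan) # true scene radiance

measure = lambda L: gain * L + offset     # idealized instrument response
C_hot, C_cold, C_scene = measure(L_hot), measure(L_cold), measure(L_scene)

# Calibration recovers radiance without knowing gain or offset
L_cal = (C_scene - C_cold) / (C_hot - C_cold) * (L_hot - L_cold) + L_cold
print(np.allclose(L_cal, L_scene))        # → True
```

    The spectral calibration stage then maps interferogram sample positions to accurate wavenumbers, which this sketch does not cover.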

  12. Demosaicking for full motion video 9-band SWIR sensor

    NASA Astrophysics Data System (ADS)

    Kanaev, Andrey V.; Rawhouser, Marjorie; Kutteruf, Mary R.; Yetzbacher, Michael K.; DePrenger, Michael J.; Novak, Kyle M.; Miller, Corey A.; Miller, Christopher W.

    2014-05-01

    Short wave infrared (SWIR) spectral imaging systems are vital for Intelligence, Surveillance, and Reconnaissance (ISR) applications because of their ability to autonomously detect targets and classify materials. Typically, spectral imagers are incapable of providing Full Motion Video (FMV) because of their reliance on line scanning. We enable FMV capability for a SWIR multi-spectral camera by creating a repeating pattern of 3x3 spectral filters on a staring focal plane array (FPA). In this paper we present imagery from an FMV SWIR camera with nine discrete bands and discuss the image processing algorithms necessary for its operation. The main task of image processing in this case is demosaicking of the spectral bands, i.e., reconstructing full spectral images at the original FPA resolution from the spatially subsampled and incomplete spectral data acquired with the chosen filter array pattern. To the best of the authors' knowledge, demosaicking algorithms for nine or more equally sampled bands have not been reported before. Moreover, all existing algorithms developed for demosaicking visible color filter arrays with fewer than nine colors assume either certain relationships between the visible colors, which do not hold for SWIR imaging, or the presence of one color band with a higher sampling rate than the rest of the bands, which does not conform to our spectral filter pattern. We discuss and present results for two novel approaches to demosaicking: interpolation using multi-band edge information, and application of multi-frame super-resolution to single-frame resolution enhancement of multi-spectral, spatially multiplexed images.
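
    The demosaicking problem for a repeating 3x3 filter array can be illustrated with a toy nearest-neighbor reconstruction; the paper's edge-aware and super-resolution methods are far more sophisticated, and the scene below is synthetic:

```python
import numpy as np

# Toy 3x3-mosaic demosaicking: each of 9 bands is sampled on a sparse grid
# and reconstructed by nearest-neighbor fill.
h = w = 9
bands = np.arange(9).reshape(3, 3)         # the repeating 3x3 filter pattern

# Build a smooth synthetic scene with 9 spectral bands
y, x = np.mgrid[0:h, 0:w]
scene = np.stack([(b + 1) * (x + y) / (2.0 * (h - 1)) for b in range(9)])

# Mosaic: at pixel (i, j) only band pattern[i%3, j%3] is recorded
pattern = bands[y % 3, x % 3]
mosaic = scene[pattern, y, x]

def demosaic_band(mosaic, pattern, b):
    """Fill every pixel with the nearest recorded sample of band b."""
    out = np.zeros_like(mosaic)
    ii, jj = np.where(pattern == b)
    for i in range(mosaic.shape[0]):
        for j in range(mosaic.shape[1]):
            k = np.argmin((ii - i) ** 2 + (jj - j) ** 2)
            out[i, j] = mosaic[ii[k], jj[k]]
    return out

band0 = demosaic_band(mosaic, pattern, 0)
err = np.abs(band0 - scene[0]).max()
print(err < 0.5)                           # small error on a smooth scene
```

    Nearest-neighbor fill illustrates why each band is reconstructable at all; edge-aware interpolation exists precisely because this naive fill blurs real scene structure.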

  13. Extracting and compensating dispersion mismatch in ultrahigh-resolution Fourier domain OCT imaging of the retina

    PubMed Central

    Choi, WooJhon; Baumann, Bernhard; Swanson, Eric A.; Fujimoto, James G.

    2012-01-01

    We present a numerical approach to extract the dispersion mismatch in ultrahigh-resolution Fourier domain optical coherence tomography (OCT) imaging of the retina. The method draws upon an analogy with a Shack-Hartmann wavefront sensor. By exploiting mathematical similarities between the expressions for aberration in optical imaging and dispersion mismatch in spectral / Fourier domain OCT, Shack-Hartmann principles can be extended from the two-dimensional paraxial wavevector space (or the x-y plane in the spatial domain) to the one-dimensional wavenumber space (or the z-axis in the spatial domain). For OCT imaging of the retina, different retinal layers, such as the retinal nerve fiber layer (RNFL), the photoreceptor inner and outer segment junction (IS/OS), or all the retinal layers near the retinal pigment epithelium (RPE), can be used as point source beacons in the axial direction, analogous to the point source beacons used in conventional two-dimensional Shack-Hartmann wavefront sensors for aberration characterization. Subtleties regarding speckle phenomena in optical imaging, which affect the Shack-Hartmann wavefront sensor used in adaptive optics, also occur analogously in this application. Using this approach and carefully suppressing speckle, the dispersion mismatch in spectral / Fourier domain OCT retinal imaging can be successfully extracted numerically and used for numerical dispersion compensation to generate sharper, ultrahigh-resolution OCT images. PMID:23187353
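
    The final compensation step can be sketched in a few lines: a dispersion-mismatch phase broadens the axial point-spread function, and multiplying the spectral data by the conjugate phase restores it. The coefficients below are illustrative, and extracting them from real data is the subject of the paper's Shack-Hartmann analogy:

```python
import numpy as np

# Numerical dispersion compensation in spectral-domain OCT (synthetic).
n = 1024
idx = np.arange(n)
k = np.linspace(-1.0, 1.0, n)              # normalized wavenumber axis

# Spectral fringe from a single reflector at depth bin 100, plus a
# 2nd/3rd-order dispersion-mismatch phase (coefficients in radians)
a2, a3 = 40.0, 15.0
phase_disp = a2 * k ** 2 + a3 * k ** 3
fringe = np.exp(1j * (2 * np.pi * 100 * idx / n + phase_disp))

psf_blurred = np.abs(np.fft.fft(fringe))
fringe_comp = fringe * np.exp(-1j * phase_disp)   # conjugate-phase correction
psf_sharp = np.abs(np.fft.fft(fringe_comp))

# After compensation the peak returns to a single sharp bin at depth 100
print(int(np.argmax(psf_sharp)), psf_sharp.max() > psf_blurred.max())  # → 100 True
```

    In practice the peak sharpness itself (e.g. maximizing the peak value) can serve as the merit function when searching for the compensation coefficients.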

  14. Solar Spectral Irradiance at 782 nm as Measured by the SES Sensor Onboard Picard

    NASA Astrophysics Data System (ADS)

    Meftah, M.; Hauchecorne, A.; Irbah, A.; Cessateur, G.; Bekki, S.; Damé, L.; Bolsée, D.; Pereira, N.

    2016-04-01

    Picard is a satellite dedicated to the simultaneous measurement of the total and solar spectral irradiance, the solar diameter, the solar shape, and to the Sun's interior through the methods of helioseismology. The satellite was launched on June 15, 2010, and pursued its data acquisitions until March 2014. A Sun Ecartometry Sensor (SES) was developed to provide the stringent pointing requirements of the satellite. The SES sensor produced an image of the Sun at 782 ± 2.5 nm. From the SES data, we obtained a new time series of the solar spectral irradiance at 782 nm from 2010 to 2014. During this period of Solar Cycle 24, the amplitude of the changes has been of the order of ± 0.08 %, corresponding to a range of about 2× 10^{-3} W m^{-2} nm^{-1}. SES observations provided a qualitatively consistent evolution of the solar spectral irradiance variability at 782 nm. SES data show similar amplitude variations with the semi-empirical model Spectral And Total Irradiance REconstruction for the Satellite era (SATIRE-S), whereas the Spectral Irradiance Monitor instrument (SIM) onboard the SOlar Radiation and Climate Experiment satellite (SORCE) highlights higher amplitudes.

  15. Generating Vegetation Leaf Area Index Earth System Data Record from Multiple Sensors. Part 1; Theory

    NASA Technical Reports Server (NTRS)

    Ganguly, Sangram; Schull, Mitchell A.; Samanta, Arindam; Shabanov, Nikolay V.; Milesi, Cristina; Nemani, Ramakrishna R.; Knyazikhin, Yuri; Myneni, Ranga B.

    2008-01-01

    The generation of multi-decade long Earth System Data Records (ESDRs) of Leaf Area Index (LAI) and Fraction of Photosynthetically Active Radiation absorbed by vegetation (FPAR) from remote sensing measurements of multiple sensors is key to monitoring long-term changes in vegetation due to natural and anthropogenic influences. Challenges in developing such ESDRs include problems in remote sensing science (modeling of variability in global vegetation, scaling, atmospheric correction) and sensor hardware (differences in spatial resolution, spectral bands, calibration, and information content). In this paper, we develop a physically based approach for deriving LAI and FPAR products from Advanced Very High Resolution Radiometer (AVHRR) data that are of comparable quality to the Moderate Resolution Imaging Spectroradiometer (MODIS) LAI and FPAR products, thus realizing the objective of producing a long (multi-decadal) time series of these products. The approach is based on the radiative transfer theory of canopy spectral invariants, which facilitates parameterization of the canopy spectral bidirectional reflectance factor (BRF). The methodology permits decoupling of the structural and radiometric components and obeys the energy conservation law. The approach is applicable to any optical sensor; however, it requires selection of sensor-specific values of configurable parameters, namely, the single scattering albedo and data uncertainty. According to the theory of spectral invariants, the single scattering albedo is a function of the spatial scale, and thus accounts for the variation in BRF with sensor spatial resolution. Likewise, the single scattering albedo accounts for the variation in spectral BRF with sensor bandwidths. The second adjustable parameter is data uncertainty, which accounts for the varying information content of the remote sensing measurements, i.e., Normalized Difference Vegetation Index (NDVI, low information content) vs. spectral BRF (higher information content). Implementation of this approach indicates good consistency in LAI values retrieved from NDVI (AVHRR-mode) and spectral BRF (MODIS-mode). Specific details of the implementation and evaluation of the derived products are presented in the second part of this two-paper series.

  16. Advanced x-ray imaging spectrometer

    NASA Technical Reports Server (NTRS)

    Callas, John L. (Inventor); Soli, George A. (Inventor)

    1998-01-01

    An x-ray spectrometer that also provides images of an x-ray source. Coded aperture imaging techniques are used to provide high resolution images. Imaging position-sensitive x-ray sensors with good energy resolution are utilized to provide excellent spectroscopic performance. The system produces high resolution spectral images of the x-ray source which can be viewed in any one of a number of specific energy bands.

  17. Wave analysis of a plenoptic system and its applications

    NASA Astrophysics Data System (ADS)

    Shroff, Sapna A.; Berkner, Kathrin

    2013-03-01

    Traditional imaging systems directly image a 2D object plane on to the sensor. Plenoptic imaging systems contain a lenslet array at the conventional image plane and a sensor at the back focal plane of the lenslet array. In this configuration the data captured at the sensor is not a direct image of the object. Each lenslet effectively images the aperture of the main imaging lens at the sensor. Therefore the sensor data retains angular light-field information which can be used for a posteriori digital computation of multi-angle images and axially refocused images. If a filter array, containing spectral filters or neutral density or polarization filters, is placed at the pupil aperture of the main imaging lens, then each lenslet images the filters on to the sensor. This enables the digital separation of multiple filter modalities giving single snapshot, multi-modal images. Due to the diversity of potential applications of plenoptic systems, their investigation is increasing. As the application space moves towards microscopes and other complex systems, and as pixel sizes become smaller, the consideration of diffraction effects in these systems becomes increasingly important. We discuss a plenoptic system and its wave propagation analysis for both coherent and incoherent imaging. We simulate a system response using our analysis and discuss various applications of the system response pertaining to plenoptic system design, implementation and calibration.
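
    The a posteriori refocusing mentioned above can be illustrated with a toy 1D shift-and-add model, in which each angular view is a shifted copy of the scene. The geometry is schematic and ignores diffraction, which is precisely what the paper's wave analysis addresses:

```python
import numpy as np

# Toy 1D plenoptic refocusing: each angular sample u sees the scene shifted
# by an amount proportional to u (defocus); re-shifting and summing the
# views with the right parameter restores a sharp image.
nx, nu = 200, 9
scene = np.zeros(nx)
scene[90:110] = 1.0                          # a bright object

defocus = 3                                  # pixels of shift per angular step
views = np.array([np.roll(scene, defocus * (u - nu // 2)) for u in range(nu)])

def refocus(views, alpha):
    """Shift each view back by alpha * (u - center) and average."""
    return np.mean([np.roll(v, -alpha * (u - nu // 2))
                    for u, v in enumerate(views)], axis=0)

blurred = refocus(views, 0)                  # naive sum: defocused image
sharp = refocus(views, defocus)              # correct refocus parameter

# The refocused image matches the scene; the naive sum is dimmer and blurred
print(np.allclose(sharp, scene), blurred.max() < scene.max())  # → True True
```

    Sweeping alpha reproduces the familiar plenoptic "focal stack"; the wave-optics treatment replaces these ray shifts with propagated fields.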

  18. Determination of Primary Spectral Bands for Remote Sensing of Aquatic Environments.

    PubMed

    Lee, ZhongPing; Carder, Kendall; Arnone, Robert; He, MingXia

    2007-12-20

    About 30 years ago, NASA launched the first ocean-color observing satellite: the Coastal Zone Color Scanner (CZCS). CZCS had 5 bands in the visible-infrared domain with an objective to detect changes of phytoplankton (measured by concentration of chlorophyll) in the oceans. Twenty years later, for the same objective but with advanced technology, the Sea-viewing Wide Field-of-view Sensor (SeaWiFS, 7 bands), the Moderate-Resolution Imaging Spectrometer (MODIS, 8 bands), and the Medium Resolution Imaging Spectrometer (MERIS, 12 bands) were launched. The selection of the number of bands and their positions was based on experimental and theoretical results achieved before the design of these satellite sensors. Recently, Lee and Carder (2002) demonstrated that for adequate derivation of major properties (phytoplankton biomass, colored dissolved organic matter, suspended sediments, and bottom properties) in both oceanic and coastal environments from observation of water color, it is better for a sensor to have ~15 bands in the 400 - 800 nm range. That study, however, did not provide detailed analyses regarding the spectral locations of the 15 bands. Here, from nearly 400 hyperspectral (~3-nm resolution) measurements of remote-sensing reflectance (a measure of water color) taken in both coastal and oceanic waters, covering both optically deep and optically shallow waters, first- and second-order derivatives were calculated after interpolating the measurements to 1-nm resolution. From these derivatives, the frequency of zero values at each wavelength was counted, and the distribution spectrum of such frequencies was obtained. Furthermore, the wavelengths with the highest frequency of zeros were identified. Because these spectral locations indicate extrema (a local maximum or minimum) of the reflectance spectrum or inflections of the spectral curvature, placing the bands of a sensor at these wavelengths maximizes the potential of capturing (and then restoring) the spectral curve, and thus maximizes the potential of accurately deriving properties of the water column and/or bottom of various aquatic environments with a multi-band sensor.
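    The core counting step described above (tallying, per wavelength, how often the first derivative of reflectance changes sign, i.e. where the spectrum has a local extremum) can be sketched as follows. The synthetic spectra and the choice of 15 bands are illustrative assumptions.

```python
import numpy as np

def zero_frequency_spectrum(spectra):
    """Count, per wavelength, how often the first derivative changes sign
    (i.e., the reflectance spectrum has a local extremum there).

    spectra: (n_samples, n_wavelengths) reflectance at 1-nm spacing
    """
    d1 = np.diff(spectra, axis=1)                  # first derivative
    sign_change = np.diff(np.sign(d1), axis=1) != 0
    return sign_change.sum(axis=0)                 # frequency per wavelength

# toy example: 200 noisy spectra with a reflectance peak near index 50
wl = np.arange(120)
spectra = np.exp(-0.5 * ((wl - 50) / 10.0) ** 2) + 0.01 * np.random.rand(200, 120)

freq = zero_frequency_spectrum(spectra)
best_bands = np.argsort(freq)[-15:]                # 15 most frequent extrema
```

    Peaks of `freq` mark the candidate band centers; the paper additionally uses second-order derivatives to capture inflections of the spectral curvature.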

  19. Evaluation on Radiometric Capability of Chinese Optical Satellite Sensors.

    PubMed

    Yang, Aixia; Zhong, Bo; Wu, Shanlong; Liu, Qinhuo

    2017-01-22

    The radiometric capability of on-orbit sensors should be updated in a timely manner because of changes induced by space environmental factors and instrument aging. Some sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS), have onboard calibrators, which enable real-time calibration. However, most Chinese remote sensing satellite sensors lack onboard calibrators. Their radiometric calibrations have been updated only once a year based on a vicarious calibration procedure, which has limited the applications of the data. Therefore, a full evaluation of the sensors' radiometric capabilities is essential before quantitative applications can be made. In this study, a comprehensive procedure for evaluating the radiometric capability of several Chinese optical satellite sensors is proposed. In this procedure, long-term radiometric stability and radiometric accuracy are the two major indicators for radiometric evaluation. The radiometric temporal stability is analyzed via the tendency of long-term top-of-atmosphere (TOA) reflectance variation; the radiometric accuracy is determined by comparison with the TOA reflectance from MODIS after spectral matching. Three Chinese sensors, including the Charge-Coupled Device (CCD) camera onboard the Huan Jing 1 satellite (HJ-1), as well as the Visible and Infrared Radiometer (VIRR) and Medium-Resolution Spectral Imager (MERSI) onboard the Feng Yun 3 satellite (FY-3), are evaluated in reflective bands based on this procedure. The results are reasonable and thus provide a reliable reference for applications of these sensors, promoting the use of Chinese satellite data.

  20. Classification of Hyperspectral Data Based on Guided Filtering and Random Forest

    NASA Astrophysics Data System (ADS)

    Ma, H.; Feng, W.; Cao, X.; Wang, L.

    2017-09-01

    Hyperspectral images usually consist of more than one hundred spectral bands, which have the potential to provide rich spatial and spectral information. However, the application of hyperspectral data is still challenging due to "the curse of dimensionality". In this context, many techniques that aim to make full use of both the spatial and spectral information are investigated. In order to preserve the geometrical information with fewer spectral bands, we propose a novel method that combines principal component analysis (PCA), guided image filtering and the random forest (RF) classifier. In detail, PCA is first employed to reduce the dimension of the spectral bands. Secondly, the guided image filtering technique is introduced to smooth land objects while preserving their edges. Finally, the features are fed into the RF classifier. To illustrate the effectiveness of the method, we carry out experiments on the popular Indian Pines data set, collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor. By comparing the proposed method with methods using only PCA or only the guided image filter, we find that the proposed method performs better.
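    The PCA and guided-filtering stages of such a pipeline can be sketched in plain NumPy; the cube dimensions, window radius and regularization below are illustrative assumptions, and the guided filter follows the standard local-linear-model formulation rather than the paper's exact configuration.

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window via 2D cumulative sums."""
    k = 2 * r + 1
    p = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / k**2

def guided_filter(I, p, r=2, eps=1e-4):
    """Edge-preserving smoothing of p using guidance image I."""
    mI, mp = box_mean(I, r), box_mean(p, r)
    a = (box_mean(I * p, r) - mI * mp) / (box_mean(I * I, r) - mI**2 + eps)
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r)

rng = np.random.default_rng(0)
cube = rng.random((40, 40, 100))               # toy 100-band hyperspectral cube
flat = cube.reshape(-1, 100)

# PCA via eigendecomposition of the band covariance matrix
Xc = flat - flat.mean(axis=0)
_, vecs = np.linalg.eigh(Xc.T @ Xc / len(Xc))
pcs = (Xc @ vecs[:, -5:]).reshape(40, 40, 5)   # top 5 principal components

# guided-filter each component, guided by the strongest one
guide = pcs[:, :, -1]
feats = np.stack([guided_filter(guide, pcs[:, :, k]) for k in range(5)], axis=-1)
```

    The resulting `feats` maps (one edge-preserved feature per component) would then be flattened per pixel and fed, with training labels, to a random forest classifier such as scikit-learn's `RandomForestClassifier`.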

  1. Color quality improvement of reconstructed images in color digital holography using speckle method and spectral estimation

    NASA Astrophysics Data System (ADS)

    Funamizu, Hideki; Onodera, Yusei; Aizu, Yoshihisa

    2018-05-01

    In this study, we report color quality improvement of reconstructed images in color digital holography using the speckle method and spectral estimation. In this technique, an object is illuminated by a speckle field to produce an object wave, while a plane wave is used as a reference wave. For three wavelengths, the interference patterns of the two coherent waves are recorded as digital holograms on an image sensor. The speckle fields are changed by moving a ground glass plate in an in-plane direction, and a number of holograms are acquired to average the reconstructed images. After the averaging process over images reconstructed from multiple holograms, we use the Wiener estimation method to obtain spectral transmittance curves in the reconstructed images. The color reproducibility of this method is demonstrated and evaluated using a Macbeth color chart film and stained onion cells.
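    The Wiener estimation step can be sketched as follows: given a set of training spectra and a (here assumed) matrix A of channel sensitivities, the Wiener matrix maps the few camera responses back to a full transmittance spectrum. All numerical values below are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_wl = 31                                   # e.g. 400-700 nm at 10 nm steps
train = rng.random((200, n_wl))             # training transmittance spectra
A = rng.random((3, n_wl))                   # assumed 3-channel sensitivities

C = train.T @ train / len(train)            # spectral autocorrelation matrix
noise = 1e-4 * np.eye(3)                    # assumed measurement-noise term

# Wiener estimation matrix: W = C A^T (A C A^T + N)^-1
W = C @ A.T @ np.linalg.inv(A @ C @ A.T + noise)

t_true = train[0]
c = A @ t_true                              # camera response for one sample
t_est = W @ c                               # estimated transmittance spectrum
```

    In the paper the three channels correspond to the three recording wavelengths; the autocorrelation matrix encodes prior knowledge of plausible transmittance spectra.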

  2. A device for multimodal imaging of skin

    NASA Astrophysics Data System (ADS)

    Spigulis, Janis; Garancis, Valerijs; Rubins, Uldis; Zaharans, Eriks; Zaharans, Janis; Elste, Liene

    2013-03-01

    A compact prototype device for diagnostic imaging of skin has been developed and tested. Polarized LED light in several spectral regions is used for illumination, and a round skin spot 30 mm in diameter is imaged by a CMOS sensor via a cross-oriented polarizing filter. Four consecutive imaging series are performed: (1) an RGB image under white LED illumination for revealing subcutaneous structures; (2) four spectral images under narrowband LED illumination (450 nm, 540 nm, 660 nm, 940 nm) for mapping of the main skin chromophores; (3) video imaging under green LED illumination for mapping of skin blood perfusion; (4) autofluorescence video imaging under UV (365 nm) LED irradiation for mapping of the skin fluorophores. Design details of the device as well as preliminary results of clinical tests are presented.

  3. Ionizing doses and displacement damage testing of COTS CMOS imagers

    NASA Astrophysics Data System (ADS)

    Bernard, Frédéric; Petit, Sophie; Courtade, Sophie

    2017-11-01

    CMOS sensors are becoming a credible alternative to CCD sensors in some space missions. However, the technology of CMOS sensors evolves much faster than that of CCDs, so continuous technology evaluation is needed for CMOS imagers. Many commercial COTS (Components Off The Shelf) CMOS sensors use organic filters, micro-lenses and non-rad-hard technologies. An evaluation of the possibilities offered by such technologies is worthwhile before any custom development, and can be obtained by testing commercial COTS imagers. This article presents the evolution of the electro-optical performance of off-the-shelf CMOS imagers after ionizing dose tests up to 50 kRad(Si) and displacement damage tests (up to 10^11 p/cm2 at 50 MeV). Dark current level and non-uniformity evolutions are compared and discussed. Relative spectral response measurements and their evolution with irradiation are also presented and discussed. Tests have been performed on CNES detection benches.

  4. Image-Based Airborne Sensors: A Combined Approach for Spectral Signatures Classification through Deterministic Simulated Annealing

    PubMed Central

    Guijarro, María; Pajares, Gonzalo; Herrera, P. Javier

    2009-01-01

    The increasing capability of high-resolution airborne image sensors, including those on board Unmanned Aerial Vehicles, demands automatic solutions for processing, either on-line or off-line, the huge amounts of image data sensed during flights. The classification of natural spectral signatures in images is one potential application. The current trend in classification is oriented towards the combination of simple classifiers. In this paper we propose a combined strategy based on the Deterministic Simulated Annealing (DSA) framework. The simple classifiers used are the well-tested supervised parametric Bayesian estimator and Fuzzy Clustering. DSA is an optimization approach which minimizes an energy function. The main contribution of DSA is its ability to avoid local minima during the optimization process thanks to the annealing scheme. It outperforms the simple classifiers used in the combination as well as some combined strategies, including a scheme based on fuzzy cognitive maps and an optimization approach based on the Hopfield neural network paradigm. PMID:22399989

  5. Multispectral and polarimetric photodetection using a plasmonic metasurface

    NASA Astrophysics Data System (ADS)

    Pelzman, Charles; Cho, Sang-Yeon

    2018-01-01

    We present a metasurface-integrated Si 2-D CMOS sensor array for multispectral and polarimetric photodetection applications. The demonstrated sensor is based on the polarization selective extraordinary optical transmission from periodic subwavelength nanostructures, acting as artificial atoms, known as meta-atoms. The meta-atoms were created by patterning periodic rectangular apertures that support optical resonance at the designed spectral bands. By spatially separating meta-atom clusters with different lattice constants and orientations, the demonstrated metasurface can convert the polarization and spectral information of an optical input into a 2-D intensity pattern. As a proof-of-concept experiment, we measured the linear components of the Stokes parameters directly from captured images using a CMOS camera at four spectral bands. Compared to existing multispectral polarimetric sensors, the demonstrated metasurface-integrated CMOS system is compact and does not require any moving components, offering great potential for advanced photodetection applications.
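    The linear Stokes components mentioned above follow from intensities measured behind four analyzer orientations; a minimal sketch (the exact read-out of the demonstrated sensor is not specified in this abstract):

```python
import numpy as np

def linear_stokes(I0, I45, I90, I135):
    """Linear Stokes components from four analyzer orientations (degrees)."""
    S0 = 0.5 * (I0 + I45 + I90 + I135)   # total intensity
    S1 = I0 - I90                        # horizontal vs vertical
    S2 = I45 - I135                      # +45 vs -45 degrees
    dolp = np.sqrt(S1**2 + S2**2) / np.maximum(S0, 1e-12)
    aop = 0.5 * np.arctan2(S2, S1)       # angle of polarization (radians)
    return S0, S1, S2, dolp, aop

# fully horizontally polarized toy input
S0, S1, S2, dolp, aop = linear_stokes(1.0, 0.5, 0.0, 0.5)
```

    In the metasurface sensor, the four intensities come from adjacent meta-atom clusters with different aperture orientations, so the same arithmetic applies per super-pixel and per spectral band.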

  6. pHlash: a new genetically encoded and ratiometric luminescence sensor of intracellular pH.

    PubMed

    Zhang, Yunfei; Xie, Qiguang; Robertson, J Brian; Johnson, Carl Hirschie

    2012-01-01

    We report the development of a genetically encodable and ratiometric pH probe named "pHlash" that utilizes Bioluminescence Resonance Energy Transfer (BRET) rather than fluorescence excitation. The pHlash sensor, composed of a donor luciferase that is genetically fused to a Venus fluorophore, exhibits pH dependence of its spectral emission in vitro. When expressed in either yeast or mammalian cells, pHlash reports basal pH and cytosolic acidification in vivo. Its spectral ratio response is H(+) specific; neither Ca(++), Mg(++), Na(+), nor K(+) changes the spectral form of its luminescence emission. Moreover, it can be used to image pH in single cells. This is the first BRET-based sensor of H(+) ions, and it should allow the approximation of pH in cytosolic and organellar compartments in applications where current pH probes are inadequate.

  7. STARR: shortwave-targeted agile Raman robot for the detection and identification of emplaced explosives

    NASA Astrophysics Data System (ADS)

    Gomer, Nathaniel R.; Gardner, Charles W.

    2014-05-01

    In order to combat the threat of emplaced explosives (land mines, etc.), ChemImage Sensor Systems (CISS) has developed a multi-sensor, robot-mounted system capable of identification and confirmation of potential threats. The system, known as STARR (Shortwave-infrared Targeted Agile Raman Robot), utilizes shortwave infrared spectroscopy for the identification of potential threats, combined with a visible short-range standoff Raman hyperspectral imaging (HSI) system for material confirmation. The entire system is mounted onto a Talon UGV (Unmanned Ground Vehicle), giving the sensor an increased area search rate and reducing the risk of injury to the operator. The Raman HSI system utilizes a fiber array spectral translator (FAST) for the acquisition of high-quality Raman chemical images, allowing for increased sensitivity and improved specificity. An overview of the design and operation of the system is presented, along with initial detection results of the fusion sensor.

  8. High performance multi-spectral interrogation for surface plasmon resonance imaging sensors.

    PubMed

    Sereda, A; Moreau, J; Canva, M; Maillart, E

    2014-04-15

    Surface plasmon resonance (SPR) sensing has proven to be a valuable tool in the field of surface interactions characterization, especially for biomedical applications where label-free techniques are of particular interest. In order to approach the theoretical resolution limit, most SPR-based systems have turned to either angular or spectral interrogation modes, which both offer very accurate real-time measurements, but at the expense of the 2-dimensional imaging capability, therefore decreasing the data throughput. In this article, we show numerically and experimentally how to combine the multi-spectral interrogation technique with 2D imaging, while finding an optimum in terms of resolution, accuracy, acquisition speed and reduction in data dispersion with respect to the classical reflectivity interrogation mode. This multi-spectral interrogation methodology is based on a robust five-parameter fitting of the spectral reflectivity curve, which enables monitoring of the reflectivity spectral shift with a resolution of the order of ten picometers, using only five wavelength measurements per point. Finally, such a multi-spectral plasmonic imaging system allows biomolecular interaction monitoring in a linear regime independently of variations of buffer optical index, which is illustrated on a DNA-DNA model case. © 2013 Elsevier B.V. All rights reserved.
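    The article's actual five-parameter reflectivity model is not given in this abstract. As a hedged stand-in, the sketch below fits a simple quadratic to five reflectivity samples around the plasmon dip and recovers the sub-sample minimum location, illustrating how a parametric fit over only five wavelengths can localize a spectral shift far below the sampling interval.

```python
import numpy as np

# five wavelength samples (nm) around the plasmon resonance dip (toy values)
wl = np.array([630.0, 640.0, 650.0, 660.0, 670.0])
refl = 0.2 + 0.001 * (wl - 652.0) ** 2      # synthetic dip at 652 nm

# quadratic stand-in for the article's five-parameter reflectivity model;
# centering the abscissa keeps the least-squares fit well conditioned
x = wl - wl.mean()
a, b, c = np.polyfit(x, refl, 2)
dip = wl.mean() - b / (2 * a)               # sub-sample dip location (nm)
```

    Tracking `dip` frame-to-frame per pixel is what turns the five per-point measurements into a 2D image of spectral shift.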

  9. An advanced wide area chemical sensor testbed

    NASA Astrophysics Data System (ADS)

    Seeley, Juliette A.; Kelly, Michael; Wack, Edward; Ryan-Howard, Danette; Weidler, Darryl; O'Brien, Peter; Colonero, Curtis; Lakness, John; Patel, Paras

    2005-11-01

    In order to meet current and emerging needs for remote passive standoff detection of chemical agent threats, MIT Lincoln Laboratory has developed a Wide Area Chemical Sensor (WACS) testbed. A design study helped define the initial concept, guided by current standoff sensor mission requirements. Several variants of this initial design have since been proposed to target other applications within the defense community. The design relies on several enabling technologies required for successful implementation. The primary spectral component is a Wedged Interferometric Spectrometer (WIS) capable of imaging in the LWIR with spectral resolutions as narrow as 4 cm-1. A novel scanning optic will enhance the ability of this sensor to scan over large areas of concern with a compact, rugged design. In this paper, we shall discuss our design, development, and calibration process for this system as well as recent testbed measurements that validate the sensor concept.

  10. LANDSAT-4 Scientific Characterization: Early Results Symposium

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Radiometric calibration, geometric accuracy, spatial and spectral resolution, and image quality are examined for the thematic mapper and the multispectral band scanner on LANDSAT 4. Sensor performance is evaluated.

  11. Dualband infrared imaging spectrometer: observations of the moon

    NASA Astrophysics Data System (ADS)

    LeVan, Paul D.; Beecken, Brian P.; Lindh, Cory

    2008-08-01

    We reported previously on full-disk observations of the sun through a layer of black polymer, used to protect the entrance aperture of a novel dualband spectrometer while transmitting discrete wavelength regions in the MWIR and LWIR [1]. More recently, the spectrometer was used to assess the accuracy of recovery of unknown blackbody temperatures [2]. Here, we briefly describe MWIR observations of the full Moon made in January 2008. As was the case for the solar observations, the Moon was allowed to drift across the spectrometer slit by Earth's rotation. A detailed sensor calibration performed prior to the observations accounts for sensor non-uniformities; the spectral images of the Moon therefore include atmospheric transmission features. Our plans are to repeat the observations at liquid helium temperatures, thereby allowing both MWIR and LWIR spectral coverage.

  12. Solfatara Crater Seen Through Hyperspectral Dais Sensor Data In The Tir Region: Temperature Map and Spectral Emissivity Image For Mineralogical Species Identification.

    NASA Astrophysics Data System (ADS)

    Merucci, L.; Buongiorno, M. F.; Teggi, S.; Bogliolo, M. P.

    Temperature map and spectral emissivity have been retrieved by means of the TIR region data collected by the DAIS airborne hyperspectral sensor over the Solfatara, Campi Flegrei, Italy, during the July 27, 1997 flight. During the 7915 DAIS flight a contemporaneous field campaign was carried out in order to measure the surface temperature in the Solfatara crater, and a radiosonde was launched to measure the local atmospheric profile. A normalized vegetation index filter has been used to select, in the Solfatara crater scene, the areas not covered by vegetation upon which the temperature and emissivity retrieval algorithms have been applied. The atmospheric contribution has been estimated by means of the MODTRAN radiative transfer code. The temperature map has been validated with the field measurements, and the spectral emissivity image has been compared with the spectra available for the mineralogical species that cover the Solfatara crater.

  13. Novel instrumentation of multispectral imaging technology for detecting tissue abnormity

    NASA Astrophysics Data System (ADS)

    Yi, Dingrong; Kong, Linghua

    2012-10-01

    Multispectral imaging is becoming a powerful tool in a wide range of biological and clinical studies by adding spectral, spatial and temporal dimensions to visualize tissue abnormalities and the underlying biological processes. A conventional spectral imaging system includes two physically separated major components, a band-pass selection device (such as a liquid crystal tunable filter or diffraction grating) and a scientific-grade monochromatic camera, and is expensive and bulky. Recently, a micro-arrayed narrow-band optical mosaic filter was invented and successfully fabricated to reduce the size and cost of multispectral imaging devices in order to meet the clinical requirements for medical diagnostic imaging applications. However, the challenging issue of how to integrate and place the micro filter mosaic chip on the targeting focal plane, i.e., the imaging sensor, of an off-the-shelf CMOS/CCD camera has not been reported. This paper presents the methods and results of integrating such a miniaturized filter with off-the-shelf CMOS imaging sensors to produce handheld real-time multispectral imaging devices for the application of early stage pressure ulcer (ESPU) detection. Unlike conventional multispectral imaging devices, which are bulky and expensive, the resulting handheld real-time multispectral ESPU detector can produce multiple images at different center wavelengths with a single shot, thereby eliminating the image registration procedure required by traditional multispectral imaging technologies.

  14. HPT: A High Spatial Resolution Multispectral Sensor for Microsatellite Remote Sensing

    PubMed Central

    Takahashi, Yukihiro; Sakamoto, Yuji; Kuwahara, Toshinori

    2018-01-01

    Although nano/microsatellites have great potential as remote sensing platforms, the spatial and spectral resolutions of an optical payload instrument are limited. In this study, a high spatial resolution multispectral sensor, the High-Precision Telescope (HPT), was developed for the RISING-2 microsatellite. The HPT has four image sensors: three in the visible region of the spectrum used for the composition of true color images, and a fourth in the near-infrared region, which employs liquid crystal tunable filter (LCTF) technology for wavelength scanning. Band-to-band image registration methods have also been developed for the HPT and implemented in the image processing procedure. The processed images were compared with other satellite images, and proven to be useful in various remote sensing applications. Thus, LCTF technology can be considered an innovative tool that is suitable for future multi/hyperspectral remote sensing by nano/microsatellites. PMID:29463022

  15. Determination of chromophore distribution in skin by spectral imaging

    NASA Astrophysics Data System (ADS)

    Saknite, Inga; Lange, Marta; Jakovels, Dainis; Spigulis, Janis

    2012-10-01

    Possibilities to determine chromophore distribution in skin by spectral imaging were explored. Simple RGB sensor devices were used for image acquisition. In total, 200 images of 40 different bruises on 20 people were obtained in order to map the chromophores bilirubin and haemoglobin. The possibility of detecting water in vitro and in vivo was assessed using silicon photodetectors and narrow-band LEDs. The results show that it is possible to obtain bilirubin and haemoglobin distribution maps and to observe changes of chromophore parameter values over time by using a simple RGB imaging device. Water in vitro was detected by using differences in absorption at 450 nm and 950 nm, and at 650 nm and 950 nm.

  16. Detecting Unknown Artificial Urban Surface Materials Based on Spectral Dissimilarity Analysis.

    PubMed

    Jilge, Marianne; Heiden, Uta; Habermeyer, Martin; Mende, André; Juergens, Carsten

    2017-08-08

    High resolution imaging spectroscopy data have been recognised as a valuable data resource for augmenting detailed material inventories that serve as input for various urban applications. Image-specific urban spectral libraries are successfully used in urban imaging spectroscopy studies. However, the regional- and sensor-specific transferability of such libraries is limited due to the wide range of different surface materials. With the developed methodology, incomplete urban spectral libraries can be utilised by assuming that unknown surface material spectra are dissimilar to the known spectra in a basic spectral library (BSL). The similarity measure SID-SCA (Spectral Information Divergence-Spectral Correlation Angle) is applied to detect image-specific unknown urban surfaces while avoiding spectral mixtures. These detected unknown materials are categorised into distinct and identifiable material classes based on their spectral and spatial metrics. Experimental results demonstrate a successful redetection of material classes that had been previously erased in order to simulate an incomplete BSL. Additionally, completely new materials, e.g., solar panels, were identified in the data. It is further shown that the level of incompleteness of the BSL and the defined dissimilarity threshold are decisive for the detection of unknown material classes and the degree of spectral intra-class variability. A detailed accuracy assessment of the pre-classification results, aiming to separate natural and artificial materials, demonstrates spectral confusions between spectrally similar materials when utilizing SID-SCA. However, most spectral confusions occur among natural materials or among artificial materials, which does not affect the overall aim. The dissimilarity analysis overcomes the limitations of working with incomplete urban spectral libraries and enables the generation of image-specific training databases.
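    One common formulation of the SID-SCA measure scales the spectral information divergence by the tangent of the spectral correlation angle; whether the paper uses exactly this variant is an assumption, so the sketch below is illustrative.

```python
import numpy as np

def sid_sca(x, y, eps=1e-12):
    """SID-SCA mixed dissimilarity measure (one common formulation).

    SID: symmetric KL divergence of the sum-normalized spectra.
    SCA: arccos((r + 1) / 2), r being the Pearson correlation.
    """
    p = x / (x.sum() + eps)
    q = y / (y.sum() + eps)
    sid = np.sum(p * np.log((p + eps) / (q + eps))) \
        + np.sum(q * np.log((q + eps) / (p + eps)))
    r = np.clip(np.corrcoef(x, y)[0, 1], -1.0, 1.0)
    sca = np.arccos((r + 1.0) / 2.0)        # spectral correlation angle
    return sid * np.tan(sca)

s1 = np.array([0.1, 0.3, 0.5, 0.4])        # toy reflectance spectra
s2 = np.array([0.1, 0.3, 0.5, 0.4])
s3 = np.array([0.5, 0.2, 0.1, 0.4])
```

    Identical spectra score zero; a candidate pixel whose score against every BSL entry exceeds the chosen dissimilarity threshold is flagged as an unknown material.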

  17. Detecting Unknown Artificial Urban Surface Materials Based on Spectral Dissimilarity Analysis

    PubMed Central

    Jilge, Marianne; Heiden, Uta; Habermeyer, Martin; Mende, André; Juergens, Carsten

    2017-01-01

    High resolution imaging spectroscopy data have been recognised as a valuable data resource for augmenting detailed material inventories that serve as input for various urban applications. Image-specific urban spectral libraries are successfully used in urban imaging spectroscopy studies. However, the regional- and sensor-specific transferability of such libraries is limited due to the wide range of different surface materials. With the developed methodology, incomplete urban spectral libraries can be utilised by assuming that unknown surface material spectra are dissimilar to the known spectra in a basic spectral library (BSL). The similarity measure SID-SCA (Spectral Information Divergence-Spectral Correlation Angle) is applied to detect image-specific unknown urban surfaces while avoiding spectral mixtures. These detected unknown materials are categorised into distinct and identifiable material classes based on their spectral and spatial metrics. Experimental results demonstrate a successful redetection of material classes that had been previously erased in order to simulate an incomplete BSL. Additionally, completely new materials, e.g., solar panels, were identified in the data. It is further shown that the level of incompleteness of the BSL and the defined dissimilarity threshold are decisive for the detection of unknown material classes and the degree of spectral intra-class variability. A detailed accuracy assessment of the pre-classification results, aiming to separate natural and artificial materials, demonstrates spectral confusions between spectrally similar materials when utilizing SID-SCA. However, most spectral confusions occur among natural materials or among artificial materials, which does not affect the overall aim. The dissimilarity analysis overcomes the limitations of working with incomplete urban spectral libraries and enables the generation of image-specific training databases. PMID:28786947

  18. Development of Ultra-Low Noise, High Performance III-V Quantum Well Infrared Photodetectors (QWIPs) for Focal Plane Array Staring Image Sensor Systems

    DTIC Science & Technology

    1993-11-01

    Development of Ultra-Low Noise, High Performance III-V Quantum Well Infrared Photodetectors (QWIPs) for Focal Plane Array Staring Image Sensor Systems...experimental studies of dark current, photocurrent, noise figures, optical absorption, spectral responsivity and detectivity for different types of QWIPs...the Boltzmann constant, and T is the temperature. The noise in the QWIPs is mainly due to the random fluctuations of thermally excited carriers.

  19. Image processing techniques and applications to the Earth Resources Technology Satellite program

    NASA Technical Reports Server (NTRS)

    Polge, R. J.; Bhagavan, B. K.; Callas, L.

    1973-01-01

    The Earth Resources Technology Satellite system is studied, with emphasis on sensors, data processing requirements, and image data compression using the Fast Fourier and Hadamard transforms. The ERTS-A system and the fundamentals of remote sensing are discussed. Three user applications (forestry, crops, and rangelands) are selected and their spectral signatures are described. It is shown that additional sensors are needed for rangeland management. An on-board information processing system is recommended to reduce the amount of data transmitted.

  20. Data Processing for the Space-Based Desis Hyperspectral Sensor

    NASA Astrophysics Data System (ADS)

    Carmona, E.; Avbelj, J.; Alonso, K.; Bachmann, M.; Cerra, D.; Eckardt, A.; Gerasch, B.; Graham, L.; Günther, B.; Heiden, U.; Kerr, G.; Knodt, U.; Krutz, D.; Krawcyk, H.; Makarau, A.; Miller, R.; Müller, R.; Perkins, R.; Walter, I.

    2017-05-01

    The German Aerospace Center (DLR) and Teledyne Brown Engineering (TBE) have established a collaboration to develop and operate a new space-based hyperspectral sensor, the DLR Earth Sensing Imaging Spectrometer (DESIS). DESIS will provide space-based hyperspectral data in the VNIR with high spectral resolution and near-global coverage. While TBE provides the platform and infrastructure for operation of the DESIS instrument on the International Space Station, DLR is responsible for providing the instrument and the processing software. The DESIS instrument is equipped with novel characteristics for an imaging spectrometer, such as high spectral resolution (2.55 nm), a mirror pointing unit, and a CMOS sensor operated in rolling shutter mode. We present here an overview of the DESIS instrument and its processing chain, emphasizing the effect of the novel characteristics of DESIS on the data processing and final data products. Furthermore, we analyse in more detail the effect of the rolling shutter on the DESIS data and possible mitigation/correction strategies.

  1. Portable Imagery Quality Assessment Test Field for Uav Sensors

    NASA Astrophysics Data System (ADS)

    Dąbrowski, R.; Jenerowicz, A.

    2015-08-01

    Nowadays, imagery data acquired from UAV sensors are a main source of data used in various remote sensing applications, photogrammetry projects and imagery intelligence (IMINT), as well as in other tasks such as decision support. Therefore, quality assessment of such imagery is an important task. The research team from the Military University of Technology, Faculty of Civil Engineering and Geodesy, Geodesy Institute, Department of Remote Sensing and Photogrammetry has designed and prepared a special test field, the Portable Imagery Quality Assessment Test Field (PIQuAT), that provides quality assessment in field conditions of images obtained with sensors mounted on UAVs. The PIQuAT consists of 6 individual segments that, when combined, allow determination of the radiometric, spectral and spatial resolution of images acquired from UAVs. All segments of the PIQuAT can be used together in various configurations or independently. All elements of the Portable Imagery Quality Assessment Test Field were tested in laboratory conditions in terms of their radiometry and spectral reflectance characteristics.

  2. Hyperspectral Imager for the Coastal Ocean: instrument description and first images.

    PubMed

    Lucke, Robert L; Corson, Michael; McGlothlin, Norman R; Butcher, Steve D; Wood, Daniel L; Korwan, Daniel R; Li, Rong R; Snyder, William A; Davis, Curt O; Chen, Davidson T

    2011-04-10

    The Hyperspectral Imager for the Coastal Ocean (HICO) is the first spaceborne hyperspectral sensor designed specifically for the coastal ocean and estuarial, riverine, or other shallow-water areas. The HICO generates hyperspectral images, primarily over the 400-900 nm spectral range, with a ground sample distance of ≈90 m (at nadir) and a high signal-to-noise ratio. The HICO is now operating on the International Space Station (ISS). Its cross-track and along-track fields of view are 42 km (at nadir) and 192 km, respectively, for a total scene area of 8000 km(2). The HICO is an innovative prototype sensor that builds on extensive experience with airborne sensors and makes extensive use of commercial off-the-shelf components to build a space sensor at a small fraction of the usual cost and time. Here we describe the instrument's design and characterization and present early images from the ISS. © 2011 Optical Society of America

  3. Information-Efficient Spectral Imaging Sensor With Tdi

    DOEpatents

    Rienstra, Jeffrey L.; Gentry, Stephen M.; Sweatt, William C.

    2004-01-13

    A programmable optical filter for use in multispectral and hyperspectral imaging employing variable-gain time delay and integrate (TDI) arrays. A telescope focuses an image of a scene onto at least one TDI array that is covered by a multispectral filter that passes separate bandwidths of light onto the rows of the TDI array. The variable-gain feature of the TDI array allows the gain of each row of pixels to be set individually. The attenuations are functions of the magnitudes of the positive and negative components of a spectral basis vector. The spectral basis vector is constructed so that its positive elements emphasize the presence of a target and its negative elements emphasize the presence of the constituents of the background of the imaged scene. This system provides a very efficient determination of the presence of the target, as opposed to the data-intensive manipulations required by conventional hyperspectral imaging systems.
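    The projection that the patented optics computes can be sketched numerically: the basis vector is split into non-negative positive and negative parts used as per-row gains, and the two attenuated accumulations are differenced. The toy spectra and the particular basis-vector construction below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_bands = 16
target = rng.random(n_bands)                # assumed target spectrum
background = rng.random(n_bands)            # assumed background spectrum

# spectral basis vector: emphasize target, de-emphasize background
b = target / np.linalg.norm(target) - background / np.linalg.norm(background)

g_pos = np.clip(b, 0, None)     # row gains for the additive accumulation
g_neg = np.clip(-b, 0, None)    # row gains for the subtracted accumulation

def score(spectrum):
    """Projection onto the basis vector, as computed optically on-chip."""
    return g_pos @ spectrum - g_neg @ spectrum
```

    Because `g_pos - g_neg` equals `b`, a single differenced read-out yields the detection statistic per pixel, replacing a full hyperspectral data cube with one number.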

  4. Prediction of soil properties using imaging spectroscopy: Considering fractional vegetation cover to improve accuracy

    NASA Astrophysics Data System (ADS)

    Franceschini, M. H. D.; Demattê, J. A. M.; da Silva Terra, F.; Vicente, L. E.; Bartholomeus, H.; de Souza Filho, C. R.

    2015-06-01

    Spectroscopic techniques have become attractive for assessing soil properties because they are fast, require little labor, and may reduce the amount of laboratory waste produced compared to conventional methods. Imaging spectroscopy (IS) can have further advantages over laboratory or field proximal spectroscopic approaches, such as providing spatially continuous information with a high density. However, the accuracy of IS-derived predictions decreases when spectral mixture of soil with other targets occurs. This paper evaluates the use of spectral data obtained by an airborne hyperspectral sensor (ProSpecTIR-VS - Aisa dual sensor) for prediction of physical and chemical properties of highly weathered Brazilian soils (i.e., Oxisols). A methodology to assess the soil spectral mixture is adapted, and a progressive spectral dataset selection procedure, based on bare-soil fractional cover, is proposed and tested. Satisfactory performances are obtained especially for the quantification of clay, sand and CEC using airborne sensor data (R2 of 0.77, 0.79 and 0.54; RPD of 2.14, 2.22 and 1.50, respectively) after spectral data selection is performed, although results obtained for laboratory data are more accurate (R2 of 0.92, 0.85 and 0.75; RPD of 3.52, 2.62 and 2.04, for clay, sand and CEC, respectively). Most importantly, predictions based on airborne-derived spectra for which the bare-soil fractional cover is not taken into account show considerably lower accuracy, for example for clay, sand and CEC (RPD of 1.52, 1.64 and 1.16, respectively). Therefore, hyperspectral remotely sensed data can be used to predict topsoil properties of highly weathered soils, although spectral mixture of bare soil with vegetation must be considered to achieve improved prediction accuracy.
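    The two accuracy metrics quoted above can be sketched as follows (the data here are synthetic; RPD is taken as the ratio of the standard deviation of the observed values to the RMSE of the predictions):

```python
import numpy as np

# Sketch of the R^2 and RPD metrics used to report prediction accuracy.
def r2_rpd(observed, predicted):
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    residuals = observed - predicted
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                      # coefficient of determination
    rmse = np.sqrt(np.mean(residuals ** 2))
    rpd = observed.std(ddof=1) / rmse               # ratio of performance to deviation
    return r2, rpd

obs = np.array([10.0, 22.0, 31.0, 44.0, 52.0])      # made-up clay contents
pred = np.array([12.0, 20.0, 33.0, 42.0, 53.0])
r2, rpd = r2_rpd(obs, pred)
```

As rough rules of thumb, RPD values near 1 indicate predictions no better than the mean, while values above 2 (as reported for clay and sand above) indicate useful quantitative prediction.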

  5. Monitoring on-orbit calibration stability of the Terra MODIS and Landsat 7 ETM+ sensors using pseudo-invariant test sites

    USGS Publications Warehouse

    Chander, G.; Xiong, X.(J.); Choi, T.(J.); Angal, A.

    2010-01-01

    The ability to detect and quantify changes in the Earth's environment depends on sensors that can provide calibrated, consistent measurements of the Earth's surface features through time. A critical step in this process is to put image data from different sensors onto a common radiometric scale. This work focuses on monitoring the long-term on-orbit calibration stability of the Terra Moderate Resolution Imaging Spectroradiometer (MODIS) and the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) sensors using the Committee on Earth Observation Satellites (CEOS) reference standard pseudo-invariant test sites (Libya 4, Mauritania 1/2, Algeria 3, Libya 1, and Algeria 5). These sites have been frequently used as radiometric targets because their surface conditions are relatively stable over time. This study was performed using all cloud-free calibrated images from the Terra MODIS and the L7 ETM+ sensors acquired from launch to December 2008. Homogeneous regions of interest (ROI) were selected in the calibrated images, and the mean target statistics were derived from sensor measurements in terms of top-of-atmosphere (TOA) reflectance. For each band pair, a set of fitted coefficients (slope and offset) is provided to monitor the long-term stability over these very stable pseudo-invariant test sites. The average percent differences in intercept from the long-term trends obtained from the ETM+ TOA reflectance estimates relative to MODIS for all the CEOS reference standard test sites range from 2.5% to 15%. This gives an estimate of the collective differences due to the Relative Spectral Response (RSR) characteristics of each sensor, the bi-directional reflectance distribution function (BRDF), the spectral signature of the ground target, and atmospheric composition. The lifetime TOA reflectance trends from both sensors over 10 years are extremely stable, changing by no more than 0.4% per year over the CEOS reference standard test sites.
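    The trending step described above reduces to fitting a slope and offset to a TOA-reflectance time series over a pseudo-invariant site and expressing the drift in percent per year; a minimal sketch with synthetic numbers:

```python
import numpy as np

# Synthetic TOA-reflectance time series over a stable desert site:
# fit a linear trend (slope, offset) and express drift in %/year.
years = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])     # decimal years since launch
toa_refl = np.array([0.560, 0.561, 0.559, 0.562, 0.560, 0.561])

slope, offset = np.polyfit(years, toa_refl, 1)       # highest degree first
drift_pct_per_year = 100.0 * slope / offset
```

A sensor whose fitted drift stays within a few tenths of a percent per year, as reported above, is considered radiometrically stable over its lifetime.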

  6. Comparison of the Spectral Properties of Pansharpened Images Generated from AVNIR-2 and PRISM Onboard ALOS

    NASA Astrophysics Data System (ADS)

    Matsuoka, M.

    2012-07-01

    A considerable number of methods for pansharpening remote-sensing images have been developed to generate higher spatial resolution multispectral images by the fusion of lower resolution multispectral images and higher resolution panchromatic images. Because pansharpening alters the spectral properties of multispectral images, method selection is one of the key factors influencing the accuracy of subsequent analyses such as land-cover classification or change detection. In this study, seven pixel-based pansharpening methods (additive wavelet intensity, additive wavelet principal component, generalized Laplacian pyramid with spectral distortion minimization, generalized intensity-hue-saturation (GIHS) transform, GIHS adaptive, Gram-Schmidt spectral sharpening, and block-based synthetic variable ratio) were compared using AVNIR-2 and PRISM onboard ALOS from the viewpoint of the preservation of spectral properties of AVNIR-2. A visual comparison was made between pansharpened images generated from spatially degraded AVNIR-2 and original images over urban, agricultural, and forest areas. The similarity of the images was evaluated in terms of the image contrast, the color distinction, and the brightness of the ground objects. In the quantitative assessment, three kinds of statistical indices, correlation coefficient, ERGAS, and Q index, were calculated by band and land-cover type. These scores were relatively superior in bands 2 and 3 compared with the other two bands, especially over urban and agricultural areas. Band 4 showed a strong dependency on the land-cover type. This was attributable to the differences in the observing spectral wavelengths of the sensors and local scene variances.
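    One of the statistical indices used above, ERGAS (relative dimensionless global error in synthesis), can be sketched as follows; the image arrays are synthetic, and `h_over_l` is the ratio of panchromatic to multispectral ground sample distance (for PRISM/AVNIR-2, 2.5 m / 10 m):

```python
import numpy as np

# ERGAS: band-wise RMSE normalized by the band mean, aggregated and
# scaled by the resolution ratio. Lower is better; 0 means a perfect match.
def ergas(reference, fused, h_over_l):
    ref = np.asarray(reference, dtype=float)   # shape: (bands, rows, cols)
    fus = np.asarray(fused, dtype=float)
    terms = []
    for k in range(ref.shape[0]):
        rmse = np.sqrt(np.mean((ref[k] - fus[k]) ** 2))
        terms.append((rmse / ref[k].mean()) ** 2)
    return 100.0 * h_over_l * np.sqrt(np.mean(terms))

rng = np.random.default_rng(0)
ref = rng.uniform(50.0, 200.0, size=(4, 32, 32))     # pretend 4-band reference
fus = ref + rng.normal(0.0, 2.0, size=ref.shape)     # pretend pansharpened result
score = ergas(ref, fus, h_over_l=2.5 / 10.0)
```

Comparing ERGAS per band and per land-cover type, as done above, reveals which bands the pansharpening method distorts most.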

  7. Feature Transformation Detection Method with Best Spectral Band Selection Process for Hyper-spectral Imaging

    NASA Astrophysics Data System (ADS)

    Chen, Hai-Wen; McGurr, Mike; Brickhouse, Mark

    2015-11-01

    We present a newly developed feature transformation (FT) detection method for hyper-spectral imagery (HSI) sensors. In essence, the FT method, by transforming the original features (spectral bands) to a different feature domain, may considerably increase the statistical separation between the target and background probability density functions, and thus may significantly improve the target detection and identification performance, as evidenced by the test results in this paper. We show that by differentiating the original spectra, one can completely separate targets from the background using a single spectral band, leading to perfect detection results. In addition, we have proposed an automated best spectral band selection process with a double-threshold scheme that can rank the available spectral bands from best to worst for target detection. Finally, we have also proposed an automated cross-spectrum fusion process to further improve the detection performance in the lower spectral range (<1000 nm) by selecting the best spectral band pair with multivariate analysis. Promising detection performance has been achieved using a small background material signature library as a proof of concept, and has then been further evaluated and verified using a real background HSI scene collected by a HYDICE sensor.
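    The core transformation idea can be illustrated with a toy example: a first-order spectral derivative (differencing adjacent bands) turns a sharp spectral edge in the target into a large feature in a single derivative band, while a smooth background stays near zero. The spectra below are made up for illustration.

```python
import numpy as np

# Toy sketch of a derivative-based feature transformation for detection.
bands = np.linspace(400.0, 1000.0, 61)                # band centers, nm
background = 0.3 + 0.0002 * (bands - 400.0)           # smooth, slowly varying
target = background.copy()
target[bands >= 700.0] += 0.2                         # sharp reflectance step

d_background = np.diff(background)                    # derivative features
d_target = np.diff(target)

# The derivative band with the largest target/background separation
best_band = int(np.argmax(np.abs(d_target - d_background)))
separation = abs(d_target[best_band] - d_background[best_band])
```

In the transformed domain, the single derivative band at the step location already separates the two classes, whereas in the original domain target and background overlap in most bands.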

  8. Validation of Spectral Unmixing Results from Informed Non-Negative Matrix Factorization (INMF) of Hyperspectral Imagery

    NASA Astrophysics Data System (ADS)

    Wright, L.; Coddington, O.; Pilewskie, P.

    2017-12-01

    Hyperspectral instruments are a growing class of Earth observing sensors designed to improve remote sensing capabilities beyond discrete multi-band sensors by providing tens to hundreds of continuous spectral channels. Improved spectral resolution, range and radiometric accuracy allow the collection of large amounts of spectral data, facilitating thorough characterization of both atmospheric and surface properties. We describe the development of an Informed Non-Negative Matrix Factorization (INMF) spectral unmixing method to exploit this spectral information and separate atmospheric and surface signals based on their physical sources. INMF offers marked benefits over other commonly employed techniques including non-negativity, which avoids physically impossible results; and adaptability, which tailors the method to hyperspectral source separation. The INMF algorithm is adapted to separate contributions from physically distinct sources using constraints on spectral and spatial variability, and library spectra to improve the initial guess. Using this INMF algorithm we decompose hyperspectral imagery from the NASA Hyperspectral Imager for the Coastal Ocean (HICO), with a focus on separating surface and atmospheric signal contributions. HICO's coastal ocean focus provides a dataset with a wide range of atmospheric and surface conditions. These include atmospheres with varying aerosol optical thicknesses and cloud cover. HICO images also provide a range of surface conditions including deep ocean regions, with only minor contributions from the ocean surfaces; and more complex shallow coastal regions with contributions from the seafloor or suspended sediments. We provide extensive comparison of INMF decomposition results against independent measurements of physical properties. 
These include comparison against traditional model-based retrievals of water-leaving, aerosol, and molecular scattering radiances and other satellite products, such as aerosol optical thickness from the Moderate Resolution Imaging Spectroradiometer (MODIS).
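    The core of NMF (without the "informed" spectral/spatial constraints and library initialization described above) is a non-negative low-rank factorization; a minimal sketch using classic multiplicative updates on synthetic data:

```python
import numpy as np

# Minimal NMF via multiplicative updates (Lee & Seung style): factor a
# non-negative matrix X (pixels x bands) into abundances W and source
# spectra H, with both factors kept non-negative throughout.
def nmf(X, n_sources, n_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.uniform(0.1, 1.0, (n, n_sources))
    H = rng.uniform(0.1, 1.0, (n_sources, m))
    eps = 1e-12                                  # avoid division by zero
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(1)
true_W = rng.uniform(0.0, 1.0, (50, 2))          # synthetic abundances
true_H = rng.uniform(0.0, 1.0, (2, 20))          # synthetic source spectra
X = true_W @ true_H                              # exactly rank-2, non-negative
W, H = nmf(X, n_sources=2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

The non-negativity of W and H is what makes the factors physically interpretable as abundances and radiance spectra, which is the property the INMF work exploits.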

  9. EO-1 analysis applicable to coastal characterization

    NASA Astrophysics Data System (ADS)

    Burke, Hsiao-hua K.; Misra, Bijoy; Hsu, Su May; Griffin, Michael K.; Upham, Carolyn; Farrar, Kris

    2003-09-01

    The EO-1 satellite is part of NASA's New Millennium Program (NMP). It consists of three imaging sensors: the multi-spectral Advanced Land Imager (ALI), Hyperion and Atmospheric Corrector. Hyperion provides a high-resolution hyperspectral imager capable of resolving 220 spectral bands (from 0.4 to 2.5 micron) with a 30 m resolution. The instrument images a 7.5 km by 100 km land area per image. Hyperion is currently the only space-borne HSI data source since the launch of EO-1 in late 2000. The discussion begins with the unique capability of hyperspectral sensing to coastal characterization: (1) most ocean feature algorithms are semi-empirical retrievals and HSI has all spectral bands to provide legacy with previous sensors and to explore new information, (2) coastal features are more complex than those of deep ocean that coupled effects are best resolved with HSI, and (3) with contiguous spectral coverage, atmospheric compensation can be done with more accuracy and confidence, especially since atmospheric aerosol effects are the most pronounced in the visible region where coastal feature lie. EO-1 data from Chesapeake Bay from 19 February 2002 are analyzed. In this presentation, it is first illustrated that hyperspectral data inherently provide more information for feature extraction than multispectral data despite Hyperion has lower SNR than ALI. Chlorophyll retrievals are also shown. The results compare favorably with data from other sources. The analysis illustrates the potential value of Hyperion (and HSI in general) data to coastal characterization. Future measurement requirements (air borne and space borne) are also discussed.

  10. Joint demosaicking and zooming using moderate spectral correlation and consistent edge map

    NASA Astrophysics Data System (ADS)

    Zhou, Dengwen; Dong, Weiming; Chen, Wengang

    2014-07-01

    The recently published joint demosaicking and zooming algorithms for single-sensor digital cameras all overfit the popular Kodak test images, which have been found to have higher spectral correlation than typical color images. Their performance can therefore degrade significantly on other datasets, such as the McMaster test images, which have weak spectral correlation. A new joint demosaicking and zooming algorithm is proposed for the Bayer color filter array (CFA) pattern, in which the edge direction information (edge map) extracted from the raw CFA data is used consistently in both demosaicking and zooming. It also makes moderate use of the spectral correlation between color planes. The experimental results confirm that the proposed algorithm performs excellently on both the Kodak and McMaster datasets in terms of both subjective and objective measures. The algorithm also has high computational efficiency, providing a better tradeoff among adaptability, performance, and computational cost than existing algorithms.
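    As a baseline for comparison, plain bilinear demosaicking of an RGGB Bayer mosaic (the non-edge-aware method such algorithms improve on) can be sketched as follows; this is a generic textbook scheme, not the paper's algorithm:

```python
import numpy as np

# Bilinear demosaicking of an RGGB Bayer mosaic: each missing color
# sample is the kernel-weighted average of its nearest samples of that
# color. Normalizing by the mask keeps borders correct.
def bilinear_demosaic(mosaic):
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True                  # R at even rows, even cols
    masks[0::2, 1::2, 1] = True                  # G
    masks[1::2, 0::2, 1] = True                  # G
    masks[1::2, 1::2, 2] = True                  # B
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5],
                       [0.25, 0.5, 0.25]])
    for c in range(3):
        chan = np.where(masks[:, :, c], mosaic, 0.0)
        padded = np.pad(chan, 1, mode="reflect")
        pmask = np.pad(masks[:, :, c].astype(float), 1, mode="reflect")
        num = np.zeros((h, w))
        den = np.zeros((h, w))
        for dy in range(3):
            for dx in range(3):
                num += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
                den += kernel[dy, dx] * pmask[dy:dy + h, dx:dx + w]
        rgb[:, :, c] = num / den                 # normalized average
    return rgb

flat = np.full((8, 8), 0.5)                      # flat gray scene
out = bilinear_demosaic(flat)                    # should stay flat gray
```

Edge-aware methods replace the fixed kernel with direction-dependent averaging, which is exactly where the consistent edge map above enters.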

  11. Rayleigh radiance computations for satellite remote sensing: accounting for the effect of sensor spectral response function.

    PubMed

    Wang, Menghua

    2016-05-30

    To understand and assess the effect of the sensor spectral response function (SRF) on the accuracy of the top of the atmosphere (TOA) Rayleigh-scattering radiance computation, new TOA Rayleigh radiance lookup tables (LUTs) over global oceans and inland waters have been generated. The new Rayleigh LUTs include spectral coverage of 335-2555 nm, all possible solar-sensor geometries, and surface wind speeds of 0-30 m/s. Using the new Rayleigh LUTs, the sensor SRF effect on the accuracy of the TOA Rayleigh radiance computation has been evaluated for spectral bands of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (SNPP) satellite and the Joint Polar Satellite System (JPSS)-1, showing some important uncertainties for VIIRS-SNPP particularly for large solar- and/or sensor-zenith angles as well as for large Rayleigh optical thicknesses (i.e., short wavelengths) and bands with broad spectral bandwidths. To accurately account for the sensor SRF effect, a new correction algorithm has been developed for VIIRS spectral bands, which improves the TOA Rayleigh radiance accuracy to ~0.01% even for the large solar-zenith angles of 70°-80°, compared with the error of ~0.7% without applying the correction for the VIIRS-SNPP 410 nm band. The same methodology that accounts for the sensor SRF effect on the Rayleigh radiance computation can be used for other satellite sensors. In addition, with the new Rayleigh LUTs, the effect of surface atmospheric pressure variation on the TOA Rayleigh radiance computation can be calculated precisely, and no specific atmospheric pressure correction algorithm is needed. 
There are several other important applications and advantages of the new Rayleigh LUTs for satellite remote sensing, including efficient and accurate TOA Rayleigh radiance computation for hyperspectral satellite remote sensing, detector-based TOA Rayleigh radiance computation, Rayleigh radiance calculations for high-altitude lakes, and applicability of the same Rayleigh LUTs to all satellite sensors over global oceans and inland waters. The new Rayleigh LUTs have been implemented in the VIIRS-SNPP ocean color data processing for routine production of global ocean color and inland water products.
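    A back-of-the-envelope sketch of why the SRF matters here: the band-effective radiance is the SRF-weighted average of a steeply varying (roughly λ⁻⁴) Rayleigh spectrum, not the value at the band center, so broad bands at short wavelengths pick up a bias. The triangular SRF and 410 nm center below are toy stand-ins, not the actual VIIRS SRF.

```python
import numpy as np

# SRF-weighted band average of a Rayleigh-like (lambda^-4) spectrum
# versus the band-center value.
wl = np.linspace(380.0, 440.0, 601)                       # wavelength grid, nm
radiance = (wl / 410.0) ** -4                              # Rayleigh-like shape
srf = np.clip(1.0 - np.abs(wl - 410.0) / 20.0, 0.0, None)  # toy triangular SRF

band_effective = np.sum(srf * radiance) / np.sum(srf)      # SRF-weighted average
center_value = 1.0                                         # value exactly at 410 nm
bias_percent = 100.0 * (band_effective - center_value) / center_value
```

Because λ⁻⁴ is convex, the weighted average exceeds the center value, giving a positive bias of a few tenths of a percent for this toy band, the same order as the ~0.7% error quoted above for the VIIRS-SNPP 410 nm band.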

  12. Infrared hyperspectral imaging sensor for gas detection

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele

    2000-11-01

    A small, lightweight, man-portable imaging spectrometer has many applications: gas leak detection, flare analysis, threat warning, and chemical agent detection, to name a few. With support from the US Air Force and Navy, Pacific Advanced Technology has developed a small man-portable hyperspectral imaging sensor with an embedded DSP processor for real-time processing that is capable of remotely imaging various targets such as gas plumes, flames, and camouflaged targets. Based upon their spectral signatures, the species and concentrations of gases can be determined. This system has been field tested at numerous places including White Mountain, CA, Edwards AFB, and Vandenberg AFB. The system has recently been evaluated for gas detection, and this paper presents those results. The system uses a conventional infrared camera fitted with a diffractive optic that both images and disperses the incident radiation to form spectral images that are collected in band-sequential mode. Because the diffractive optic performs both imaging and spectral filtering, the lens system consists of only a single element that is small, lightweight, and robust, thus allowing man portability. The number of spectral bands is programmable such that only the bands of interest need to be collected. The system is entirely passive and therefore easily used in covert operations. Currently Pacific Advanced Technology is working on the next generation of this camera system, which will have both an embedded processor and an embedded digital signal processor in a small hand-held camera configuration. This will allow the implementation of signal and image processing algorithms for gas detection and identification in real time. This paper presents field test data on gas detection and identification and discusses the signal and image processing used to enhance the gas visibility. Flow rates as low as 0.01 cubic feet per minute have been imaged with this system.

  13. Surface emissivity and temperature retrieval for a hyperspectral sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borel, C.C.

    1998-12-01

    With the growing use of hyper-spectral imagers, e.g., AVIRIS in the visible and short-wave infrared, there is hope of using such instruments in the mid-wave and thermal IR (TIR) some day. The author believes that this will enable him to get around the present temperature-emissivity separation algorithms by using methods that take advantage of the many channels available in hyper-spectral imagers. A simple fact used in devising a novel algorithm is that a typical surface emissivity spectrum is rather smooth compared to the spectral features introduced by the atmosphere. Thus, an iterative solution technique can be devised that retrieves emissivity spectra based on spectral smoothness. To make the emissivities realistic, atmospheric parameters are varied using approximations, look-up tables derived from a radiative transfer code, and spectral libraries. One such iterative algorithm solves the radiative transfer equation for the radiance at the sensor for the unknown emissivity and uses the blackbody temperature computed in an atmospheric window as a guess for the unknown surface temperature. By varying the surface temperature over a small range, a series of emissivity spectra is calculated, and the one with the smoothest characteristic is chosen. The algorithm was tested on synthetic data using MODTRAN and the Salisbury emissivity database.
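    A toy sketch of the smoothness criterion (not the full MODTRAN-based algorithm): for each candidate surface temperature, solve for the implied emissivity spectrum and keep the temperature whose spectrum has the smallest summed squared second difference. Transmission and upwelling radiance are omitted, and the sharp downwelling feature at 9.6 µm is a made-up stand-in for real atmospheric structure; with the wrong temperature, that sharp structure leaks into the retrieved emissivity and raises its roughness.

```python
import numpy as np

H_PL, C_L, K_B = 6.626e-34, 2.998e8, 1.381e-23        # Planck, light, Boltzmann

def planck(wl_um, temp_k):
    """Spectral radiance of a blackbody (SI units, wavelength in um)."""
    wl = wl_um * 1e-6
    return (2 * H_PL * C_L**2 / wl**5) / np.expm1(H_PL * C_L / (wl * K_B * temp_k))

wl = np.linspace(8.0, 12.0, 81)                        # TIR window, um
true_T = 300.0
true_eps = 0.95 + 0.01 * (wl - 10.0)                   # smooth (linear) emissivity
l_down = 0.2 * planck(wl, 260.0) * (1 + 0.5 * np.exp(-((wl - 9.6) / 0.05) ** 2))
radiance = true_eps * planck(wl, true_T) + (1 - true_eps) * l_down

candidates = np.arange(295.0, 305.5, 0.5)              # candidate temperatures, K
roughness = [np.sum(np.diff((radiance - l_down) / (planck(wl, T) - l_down), 2) ** 2)
             for T in candidates]
best_T = candidates[int(np.argmin(roughness))]
```

Only at the true temperature does the retrieved emissivity reduce to the smooth surface spectrum; every other candidate imprints the sharp downwelling feature.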

  14. CMOS Imaging Sensor Technology for Aerial Mapping Cameras

    NASA Astrophysics Data System (ADS)

    Neumann, Klaus; Welzenbach, Martin; Timm, Martin

    2016-06-01

    In June 2015 Leica Geosystems launched the first large-format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation to change from CCD sensor technology to CMOS for the development of this new aerial mapping camera. In 2002 the first-generation DMC was developed by Z/I Imaging. It was the first large-format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II, which was the first digital aerial mapping camera using a single ultra-large CCD sensor to avoid stitching of smaller CCDs. The DMC III is now the third generation of large-format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II, using the same system design with one large monolithic PAN sensor and four multispectral camera heads for R, G, B, and NIR. For the first time, a 391-megapixel CMOS sensor has been used as the panchromatic sensor, an industry record. CMOS technology brings a range of technical benefits: the dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal-to-noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights are presented and compared with other CCD-based aerial sensors.

  15. BCB Bonding Technology of Back-Side Illuminated CMOS Device

    NASA Astrophysics Data System (ADS)

    Wu, Y.; Jiang, G. Q.; Jia, S. X.; Shi, Y. M.

    2018-03-01

    The back-side illuminated CMOS (BSI) sensor is a key device in spaceborne hyperspectral imaging technology. Compared with traditional devices, the path of incident light is simplified and the spectral response is flattened by BSI sensors, which meets the requirements of quantitative hyperspectral imaging applications. Wafer bonding is the basic technology and key process in the fabrication of BSI sensors. A 6-inch bond between a CMOS wafer and a glass wafer was fabricated, exploiting the low bonding temperature and high stability of BCB. The influence of different BCB thicknesses on bonding strength was studied. Wafer bonds with high strength, high stability, and no bubbles were achieved by adjusting the bonding conditions.

  16. Super-resolution reconstruction of hyperspectral images.

    PubMed

    Akgun, Toygar; Altunbasak, Yucel; Mersereau, Russell M

    2005-11-01

    Hyperspectral images are used for aerial and space imagery applications, including target detection, tracking, agriculture, and natural resource exploration. Unfortunately, atmospheric scattering, secondary illumination, changing viewing angles, and sensor noise degrade the quality of these images. Improving their resolution has a high payoff, but applying super-resolution techniques separately to every spectral band is problematic for two main reasons. First, the number of spectral bands can be in the hundreds, which increases the computational load excessively. Second, considering the bands separately does not make use of the information that is present across them. Furthermore, separate band super-resolution does not make use of the inherent low dimensionality of the spectral data, which can effectively be used to improve the robustness against noise. In this paper, we introduce a novel super-resolution method for hyperspectral images. An integral part of our work is to model the hyperspectral image acquisition process. We propose a model that enables us to represent the hyperspectral observations from different wavelengths as weighted linear combinations of a small number of basis image planes. Then, a method for applying super-resolution to hyperspectral images using this model is presented. The method fuses information from multiple observations and spectral bands to improve spatial resolution and reconstruct the spectrum of the observed scene as a combination of a small number of spectral basis functions.
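    The low-dimensionality assumption above can be sketched with a truncated SVD on synthetic data: a cube whose bands are mixtures of a few basis image planes is reconstructed almost exactly from only those few components.

```python
import numpy as np

# Synthetic hyperspectral cube built from 3 basis image planes: each
# band is a weighted linear combination of the planes, so the
# band-by-pixel matrix has rank 3 and a rank-3 SVD recovers it.
rng = np.random.default_rng(0)
n_bands, n_pix = 100, 64 * 64
basis = rng.normal(size=(3, n_pix))               # 3 underlying image planes
weights = rng.uniform(0.0, 1.0, size=(n_bands, 3))  # per-band mixing weights
cube = weights @ basis                            # (bands, pixels), rank 3

U, s, Vt = np.linalg.svd(cube, full_matrices=False)
k = 3
cube_lowrank = (U[:, :k] * s[:k]) @ Vt[:k]        # rank-k reconstruction
err = np.linalg.norm(cube - cube_lowrank) / np.linalg.norm(cube)
```

Working in this small basis is what lets the super-resolution step treat hundreds of bands jointly instead of one at a time.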

  17. Design and laboratory calibration of the compact pushbroom hyperspectral imaging system

    NASA Astrophysics Data System (ADS)

    Zhou, Jiankang; Ji, Yiqun; Chen, Yuheng; Chen, Xinhua; Shen, Weimin

    2009-11-01

    The designed hyperspectral imaging system is composed of three main parts: an optical subsystem, an electronic subsystem, and a capture subsystem. A three-dimensional "image cube" is obtained through push-broom scanning. The fore-optics is commercial-off-the-shelf with high speed and three continuous zoom ratios. Since the dispersive imaging part is based on an Offner relay configuration with an aberration-corrected convex grating, high light-collecting power and a variable field of view are obtained. The holographic recording parameters of the convex grating are optimized, and the aberrations of the Offner configuration dispersive system are balanced. The electronic system adopts a modular design, which minimizes size, mass, and power consumption. A frame-transfer area-array CCD is chosen as the image sensor, and the spectral lines can be binned to achieve better SNR and sensitivity without any deterioration in spatial resolution. The computer-based capture system can set the capture parameters, calibrate the spectrometer, and process and display spectral imaging data. Laboratory calibrations are a prerequisite for using precise spectral data. Spatial and spectral calibration minimizes smile and keystone distortion caused by the optical system and assembly and fixes the positions of the spatial and spectral lines on the frame area-array CCD. A gas excitation lamp is used for smile calibration, and the keystone calibration is carried out with point sources at different field positions created by a series of narrow slits. The laboratory and field imaging results show that this push-broom hyperspectral imaging system can acquire high-quality spectral images.

  18. A forestry GIS-based study on evaluating the potential of imaging spectroscopy in mapping forest land fertility

    NASA Astrophysics Data System (ADS)

    Mõttus, Matti; Takala, Tuure

    2014-12-01

    Fertility, or the availability of nutrients and water, controls forest productivity. It affects the forest's carbon sequestration, and thus its effect on climate, as well as its commercial value. Although the availability of nutrients cannot be measured directly using remote sensing methods, fertility alters several vegetation traits detectable from the reflectance spectra of the forest stand, including its pigment content and water stress. However, forest reflectance is also influenced by other factors, such as species composition and stand age. Here, we present a case study demonstrating how data obtained using imaging spectroscopy are correlated with site fertility. The study was carried out in Hyytiälä, Finland, in the southern boreal forest zone. We used a database of state-owned forest stands including basic forestry variables and a site fertility index. To test the suitability of imaging spectroscopy with different spatial and spectral resolutions for site fertility mapping, we performed two airborne acquisitions using different sensor configurations. First, the sensor was flown at a high altitude with high spectral resolution, resulting in a pixel size on the order of a tree crown. Next, the same area was flown to provide reflectance data with sub-meter spatial resolution. However, to maintain usable signal-to-noise ratios, several spectral channels inside the sensor were combined, thus reducing spectral resolution. We correlated a number of narrowband vegetation indices (describing canopy biochemical composition, structure, and photosynthetic activity) with site fertility. Overall, site fertility had a significant influence on the vegetation indices, but the strength of the correlation depended on the dominant species. We found that high spatial resolution data calculated from the spectra of sunlit parts of tree crowns had the strongest correlation with site fertility.
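    The final analysis step above, correlating a narrowband vegetation index with the fertility index, can be sketched as follows; all numbers are synthetic stand-ins, and NDVI is used as a generic example of such an index:

```python
import numpy as np

# Synthetic stands: fertility drives band reflectances (plus noise),
# a narrowband index is computed per stand, then correlated with the
# site fertility index.
rng = np.random.default_rng(42)
fertility = rng.uniform(1.0, 5.0, size=40)                 # fertility index per stand

# Hypothetical red/NIR reflectances loosely driven by fertility
red = 0.08 - 0.005 * fertility + rng.normal(0.0, 0.004, 40)
nir = 0.35 + 0.030 * fertility + rng.normal(0.0, 0.020, 40)
ndvi = (nir - red) / (nir + red)                           # narrowband NDVI

r = np.corrcoef(ndvi, fertility)[0, 1]                     # Pearson correlation
```

In the study above, the same correlation is computed per dominant species, since the index-fertility relationship differs between, e.g., pine- and spruce-dominated stands.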

  19. A novel optical gating method for laser gated imaging

    NASA Astrophysics Data System (ADS)

    Ginat, Ran; Schneider, Ron; Zohar, Eyal; Nesher, Ofer

    2013-06-01

    For the past 15 years, Elbit Systems has been developing time-resolved active laser-gated imaging (LGI) systems for various applications. Traditional LGI systems are based on highly sensitive gated sensors synchronized to pulsed laser sources. Elbit's proprietary multi-pulse-per-frame method, implemented in these LGI systems, significantly improves image quality. A significant characteristic of LGI is its ability to penetrate disturbing media such as rain, haze, and some fog types. Current LGI systems are based on image intensifier (II) sensors, limiting the systems in spectral response, image quality, reliability, and cost. A novel proprietary optical gating module was developed at Elbit, removing the LGI system's dependency on the II. The optical gating module is not bound to a particular radiance wavelength and is positioned between the system optics and the sensor. This optical gating method supports the use of conventional solid-state sensors. By selecting the appropriate solid-state sensor, the new LGI systems can operate at any desired wavelength. In this paper we present the new gating method's characteristics and performance and its advantages over the II gating method. The use of gated imaging systems is described in a variety of applications, including results from the latest field experiments.

  20. GOES-R Advanced Baseline Imager: spectral response functions and radiometric biases with the NPP Visible Infrared Imaging Radiometer Suite evaluated for desert calibration sites.

    PubMed

    Pearlman, Aaron; Pogorzala, David; Cao, Changyong

    2013-11-01

    The Advanced Baseline Imager (ABI), which will be launched in late 2015 on the National Oceanic and Atmospheric Administration's Geostationary Operational Environmental Satellite R-series satellite, will be evaluated in terms of its data quality postlaunch through comparisons with other satellite sensors such as the recently launched Visible Infrared Imaging Radiometer Suite (VIIRS) aboard the Suomi National Polar-orbiting Partnership satellite. The ABI has completed much of its prelaunch characterization and its developers have generated and released its channel spectral response functions (response versus wavelength). Using these responses and constraining a radiative transfer model with ground reflectance, aerosol, and water vapor measurements, we simulate observed top of atmosphere (TOA) reflectances for analogous visible and near infrared channels of the VIIRS and ABI sensors at the Sonoran Desert and White Sands National Monument sites and calculate the radiometric biases and their uncertainties. We also calculate sensor TOA reflectances using aircraft hyperspectral data from the Airborne Visible/Infrared Imaging Spectrometer to validate the uncertainties in several of the ABI and VIIRS channels and discuss the potential for validating the others. Once on-orbit, calibration scientists can use these biases to ensure ABI data quality and consistency to support the numerical weather prediction community and other data users. They can also use the results for ABI or VIIRS anomaly detection and resolution.

  1. A low cost thermal infrared hyperspectral imager for small satellites

    NASA Astrophysics Data System (ADS)

    Crites, S. T.; Lucey, P. G.; Wright, R.; Garbeil, H.; Horton, K. A.

    2011-06-01

    The traditional model for space-based earth observations involves long mission times, high cost, and long development time. Because of the significant time and monetary investment required, riskier instrument development missions or those with very specific scientific goals are unlikely to successfully obtain funding. However, a niche for earth observations exploiting new technologies in focused, short lifetime missions is opening with the growth of the small satellite market and launch opportunities for these satellites. These low-cost, short-lived missions provide an experimental platform for testing new sensor technologies that may transition to larger, more long-lived platforms. The low costs and short lifetimes also increase acceptable risk to sensors, enabling large decreases in cost using commercial off the shelf (COTS) parts and allowing early-career scientists and engineers to gain experience with these projects. We are building a low-cost long-wave infrared spectral sensor, funded by the NASA Experimental Project to Stimulate Competitive Research program (EPSCOR), to demonstrate the ways in which a university's scientific and instrument development programs can fit into this niche. The sensor is a low-mass, power efficient thermal hyperspectral imager with electronics contained in a pressure vessel to enable the use of COTS electronics, and will be compatible with small satellite platforms. The sensor, called Thermal Hyperspectral Imager (THI), is based on a Sagnac interferometer and uses an uncooled 320x256 microbolometer array. The sensor will collect calibrated radiance data at long-wave infrared (LWIR, 8-14 microns) wavelengths in 230-meter pixels with 20 wavenumber spectral resolution from a 400-km orbit.

  2. Imaging system design and image interpolation based on CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Li, Yu-feng; Liang, Fei; Guo, Rui

    2009-11-01

An image acquisition system is introduced, which consists of a color CMOS image sensor (OV9620), SRAM (CY62148), CPLD (EPM7128AE) and DSP (TMS320VC5509A). The CPLD implements the logic and timing control for the system. SRAM stores the image data, and the DSP controls the image acquisition system through the SCCB (OmniVision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed. The imaging part and the high-speed image data memory unit are designed. The hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At the location of each pixel only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use an edge-oriented adaptive interpolation algorithm for edge pixels and a bilinear interpolation algorithm for non-edge pixels to improve the visual quality of the interpolated images. This method achieves high processing speed, reduces computational complexity, and effectively preserves image edges.
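The bilinear half of this interpolation scheme is easy to sketch. The NumPy snippet below (an illustration, not the paper's DSP implementation) fills missing green samples on a Bayer-style checkerboard with the mean of their four green neighbors; this is exactly the step where an edge-oriented variant would instead pick the interpolation direction from local gradients:

```python
import numpy as np

def bilinear_green(raw, green_mask):
    """Bilinear interpolation of the green channel from a Bayer mosaic.
    `raw` is the single-channel sensor image; `green_mask` is True where a
    green sample was taken. Missing green values are the mean of the four
    neighbors. Note: np.roll wraps at the image borders; real code would pad."""
    g = np.where(green_mask, raw, 0.0).astype(float)
    w = green_mask.astype(float)
    neighbor_sum = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
                    np.roll(g, 1, 1) + np.roll(g, -1, 1))
    weight_sum = (np.roll(w, 1, 0) + np.roll(w, -1, 0) +
                  np.roll(w, 1, 1) + np.roll(w, -1, 1))
    est = np.divide(neighbor_sum, weight_sum,
                    out=np.zeros_like(g), where=weight_sum > 0)
    return np.where(green_mask, raw, est)

# Toy 4x4 mosaic: green samples sit on the (i + j even) checkerboard.
raw = np.arange(16, dtype=float).reshape(4, 4)
mask = (np.indices((4, 4)).sum(axis=0) % 2) == 0
full_green = bilinear_green(raw, mask)
```

At a non-green pixel such as (1, 2), the result is simply the average of the four surrounding green samples.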

  3. Advanced Image Processing for NASA Applications

    NASA Technical Reports Server (NTRS)

    LeMoign, Jacqueline

    2007-01-01

    The future of space exploration will involve cooperating fleets of spacecraft or sensor webs geared towards coordinated and optimal observation of Earth Science phenomena. The main advantage of such systems is to utilize multiple viewing angles as well as multiple spatial and spectral resolutions of sensors carried on multiple spacecraft but acting collaboratively as a single system. Within this framework, our research focuses on all areas related to sensing in collaborative environments, which means systems utilizing intracommunicating spatially distributed sensor pods or crafts being deployed to monitor or explore different environments. This talk will describe the general concept of sensing in collaborative environments, will give a brief overview of several technologies developed at NASA Goddard Space Flight Center in this area, and then will concentrate on specific image processing research related to that domain, specifically image registration and image fusion.

  4. Measurements of scene spectral radiance variability

    NASA Astrophysics Data System (ADS)

    Seeley, Juliette A.; Wack, Edward C.; Mooney, Daniel L.; Muldoon, Michael; Shey, Shen; Upham, Carolyn A.; Harvey, John M.; Czerwinski, Richard N.; Jordan, Michael P.; Vallières, Alexandre; Chamberland, Martin

    2006-05-01

Detection performance of LWIR passive standoff chemical agent sensors is strongly influenced by various scene parameters, such as atmospheric conditions, temperature contrast, concentration-path length product (CL), agent absorption coefficient, and scene spectral variability. Although temperature contrast, CL, and agent absorption coefficient affect the detected signal in a predictable manner, fluctuations in background scene spectral radiance have less intuitive consequences. The spectral nature of the scene is not problematic in and of itself; instead it is spatial and temporal fluctuations in the scene spectral radiance that cannot be entirely corrected for with data processing. In addition, the consequence of such variability is a function of the spectral signature of the agent that is being detected and is thus different for each agent. To bracket the performance of background-limited (low sensor NEDN), passive standoff chemical sensors in the range of relevant conditions, assessment of real scene data is necessary [1]. Currently, such data is not widely available [2]. To begin to span the range of relevant scene conditions, we have acquired high fidelity scene spectral radiance measurements with a Telops FTIR imaging spectrometer [3]. We have acquired data in a variety of indoor and outdoor locations at different times of day and year. Some locations include indoor office environments, airports, urban and suburban scenes, waterways, and forest. We report agent-dependent clutter measurements for three of these backgrounds.

  5. Seasonal discrimination of C3 and C4 grasses functional types: An evaluation of the prospects of varying spectral configurations of new generation sensors

    NASA Astrophysics Data System (ADS)

    Shoko, Cletah; Mutanga, Onisimo

    2017-10-01

The present study assessed the potential of varying spectral configurations of the Landsat 8 Operational Land Imager (OLI), Sentinel 2 MultiSpectral Instrument (MSI) and Worldview 2 sensors in the seasonal discrimination of Festuca costata (C3) and Themeda triandra (C4) grass species in the Drakensberg, South Africa. This was achieved by resampling hyperspectral measurements to the spectral windows corresponding to the three sensors at two distinct seasonal periods (summer peak and end of winter), using the Discriminant Analysis (DA) classification ensemble. In summer, standard bands of Worldview 2 produced the highest overall classification accuracy (98.61%), followed by Sentinel 2 (97.52%), whereas the Landsat 8 spectral configuration performed worst (95.83%, using vegetation indices). In winter, Sentinel 2 spectral bands produced the highest accuracy (96.18%) for the two species, followed by Worldview 2 (94.44%), and Landsat 8 yielded the lowest accuracy (91.67%). Results also showed that maximum separability between C3 and C4 grasses occurred in summer, while at the end of winter considerable overlaps were noted, especially when using the spectral settings of the Landsat 8 OLI and Sentinel 2 shortwave infrared bands. Tests of significance in species reflectance further confirmed that in summer there were significant differences (P < 0.05), whereas in winter most of the spectral windows of all sensors yielded insignificant differences (P > 0.05) between the two species. In this regard, the peak summer period presents a more promising opportunity for the spectral discrimination of C3 and C4 grass species functional types than the end of winter, when using multispectral sensors. Results from this study highlight the influence of seasonality on discrimination and therefore provide the basis for the successful discrimination and mapping of C3 and C4 grass species.
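The core of the band-resampling-plus-DA workflow can be pictured with synthetic data. The sketch below uses scikit-learn's LinearDiscriminantAnalysis on made-up two-class "reflectance" vectors; the band count, class means, and resulting accuracy are illustrative assumptions, not the study's field spectra:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for band-resampled canopy reflectance: two grass
# classes whose mean spectra differ in the "NIR" bands (bands 4-7 here).
n, bands = 200, 8
c3 = rng.normal(0.30, 0.05, (n, bands)); c3[:, 4:] += 0.10   # "C3" class
c4 = rng.normal(0.30, 0.05, (n, bands)); c4[:, 4:] += 0.20   # "C4" class
X = np.vstack([c3, c4])
y = np.array([0] * n + [1] * n)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
acc = LinearDiscriminantAnalysis().fit(Xtr, ytr).score(Xte, yte)
```

With this much mean separation in the NIR bands, DA separates the two synthetic classes almost perfectly, mirroring the kind of summer-season separability the study reports.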

  6. Can we match ultraviolet face images against their visible counterparts?

    NASA Astrophysics Data System (ADS)

    Narang, Neeru; Bourlai, Thirimachos; Hornak, Lawrence A.

    2015-05-01

In law enforcement and security applications, the acquisition of face images is critical in producing key trace evidence for the successful identification of potential threats. However, face recognition (FR) for face images captured using different camera sensors, under variable illumination conditions, and with varying expressions is very challenging. In this paper, we investigate the advantages and limitations of the heterogeneous problem of matching ultraviolet (UV, 100 nm to 400 nm in wavelength) face images against their visible (VIS) counterparts, when all face images are captured under controlled conditions. The contributions of our work are three-fold: (i) we used a camera sensor designed with the capability to acquire UV images at short ranges, and generated a dual-band (VIS and UV) database that is composed of multiple, full-frontal face images of 50 subjects. Two sessions were collected, spanning a period of two months. (ii) For each dataset, we determined which set of face image pre-processing algorithms is more suitable for face matching, and, finally, (iii) we determined which FR algorithm better matches cross-band face images, resulting in high rank-1 identification rates. Experimental results show that our cross-spectral matching algorithms (the heterogeneous problem, where gallery and probe sets consist of face images acquired in different spectral bands) achieve sufficient identification performance. However, we also conclude that the problem under study is very challenging, and it requires further investigation to address real-world law enforcement or military applications. To the best of our knowledge, this is the first time in the open literature that the problem of cross-spectral matching of UV against VIS band face images is being investigated.

  7. ASPECT (Airborne Spectral Photometric Environmental Collection Technology) Fact Sheet

    EPA Pesticide Factsheets

This multi-sensor screening tool provides infrared and photographic images with geospatial, chemical, and radiological data within minutes to support emergency responses, homeland security missions, environmental surveys, and climate monitoring missions.

  8. Retinex Preprocessing for Improved Multi-Spectral Image Classification

    NASA Technical Reports Server (NTRS)

    Thompson, B.; Rahman, Z.; Park, S.

    2000-01-01

    The goal of multi-image classification is to identify and label "similar regions" within a scene. The ability to correctly classify a remotely sensed multi-image of a scene is affected by the ability of the classification process to adequately compensate for the effects of atmospheric variations and sensor anomalies. Better classification may be obtained if the multi-image is preprocessed before classification, so as to reduce the adverse effects of image formation. In this paper, we discuss the overall impact on multi-spectral image classification when the retinex image enhancement algorithm is used to preprocess multi-spectral images. The retinex is a multi-purpose image enhancement algorithm that performs dynamic range compression, reduces the dependence on lighting conditions, and generally enhances apparent spatial resolution. The retinex has been successfully applied to the enhancement of many different types of grayscale and color images. We show in this paper that retinex preprocessing improves the spatial structure of multi-spectral images and thus provides better within-class variations than would otherwise be obtained without the preprocessing. For a series of multi-spectral images obtained with diffuse and direct lighting, we show that without retinex preprocessing the class spectral signatures vary substantially with the lighting conditions. Whereas multi-dimensional clustering without preprocessing produced one-class homogeneous regions, the classification on the preprocessed images produced multi-class non-homogeneous regions. This lack of homogeneity is explained by the interaction between different agronomic treatments applied to the regions: the preprocessed images are closer to ground truth. 
The principal advantage that the retinex offers is that for different lighting conditions classifications derived from the retinex-preprocessed images look remarkably "similar", and thus more consistent, whereas classifications derived from the original images, without preprocessing, are much less similar.
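For reference, the single-scale form of the retinex is the log of the image minus the log of a Gaussian-blurred illumination estimate. The sketch below is a simplified single-band, single-scale version (the NASA multi-scale retinex combines several scales and adds color restoration), applied to a toy scene with a strong lighting gradient:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(band, sigma=30.0, eps=1e-6):
    """Single-scale retinex for one band: log(image) minus log of a
    Gaussian-smoothed illumination estimate. This compresses dynamic
    range and reduces the dependence on lighting, as described above."""
    band = band.astype(float)
    return np.log(band + eps) - np.log(gaussian_filter(band, sigma) + eps)

# Flat scene under a lighting gradient: brightness triples left to right.
x = np.linspace(1.0, 3.0, 64)
img = np.tile(x, (64, 1))
out = single_scale_retinex(img, sigma=16)
# The output's left-to-right contrast is well below the input's log-contrast
# of log(3), i.e. the illumination trend is largely removed.
```

This illumination invariance is why class spectral signatures derived after retinex preprocessing vary less with lighting conditions.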

  9. Backside illuminated CMOS-TDI line scanner for space applications

    NASA Astrophysics Data System (ADS)

    Cohen, O.; Ben-Ari, N.; Nevo, I.; Shiloah, N.; Zohar, G.; Kahanov, E.; Brumer, M.; Gershon, G.; Ofer, O.

    2017-09-01

A new multi-spectral line scanner CMOS image sensor is reported. The backside illuminated (BSI) image sensor was designed for continuous-scanning Low Earth Orbit (LEO) space applications and includes custom high-quality CMOS active pixels, a Time Delayed Integration (TDI) mechanism that increases the SNR, a 2-phase exposure mechanism that increases the dynamic Modulation Transfer Function (MTF), very low power internal Analog to Digital Converters (ADCs) with a resolution of 12 bits per pixel, and an on-chip controller. The sensor has 4 independent arrays of pixels, where each array is arranged in 2600 TDI columns with controllable TDI depth from 8 up to 64 TDI levels. A multispectral optical filter with a specific spectral response per array is assembled at the package level. In this paper we briefly describe the sensor design and present recent electrical and electro-optical measurements of the first prototypes, including high Quantum Efficiency (QE), high MTF, wide-range selectable Full Well Capacity (FWC), excellent linearity of approximately 1.3% in a signal range of 5-85% and approximately 1.75% in a signal range of 2-95% of the signal span, readout noise of approximately 95 electrons with 64 TDI levels, negligible dark current, and power consumption of less than 1.5 W total for the 4-band sensor under all operating conditions.

  10. Inter-Comparison of S-NPP VIIRS and Aqua MODIS Thermal Emissive Bands Using Hyperspectral Infrared Sounder Measurements as a Transfer Reference

    NASA Technical Reports Server (NTRS)

    Li, Yonghong; Wu, Aisheng; Xiong, Xiaoxiong

    2016-01-01

This paper compares the calibration consistency of the spectrally-matched thermal emissive bands (TEB) between the Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) and the Aqua Moderate Resolution Imaging Spectroradiometer (MODIS), using observations from their simultaneous nadir overpasses (SNO). Nearly-simultaneous hyperspectral measurements from the Aqua Atmospheric Infrared Sounder (AIRS) and the S-NPP Cross-track Infrared Sounder (CrIS) are used to account for existing spectral response differences between MODIS and VIIRS TEB. The comparison uses VIIRS Sensor Data Records (SDR) in MODIS five-minute granule format provided by the NASA Land Product Evaluation and Test Element (PEATE) and Aqua MODIS Collection 6 Level 1B (L1B) products. Each AIRS footprint of 13.5 km (or CrIS field of view of 14 km) is co-located with multiple MODIS (or VIIRS) pixels. The corresponding AIRS- and CrIS-simulated MODIS and VIIRS radiances are derived by convolutions based on sensor-dependent relative spectral response (RSR) functions. The VIIRS and MODIS TEB calibration consistency is evaluated, and the two sensors agree to within 0.2 K in brightness temperature. Additional factors affecting the comparison, such as geolocation and atmospheric water vapor content, are also discussed in this paper.
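On a discrete wavelength grid, the convolution step described here reduces to an RSR-weighted average of the hyperspectral radiance. A minimal sketch with a made-up boxcar RSR (real VIIRS/MODIS RSRs are tabulated curves, not boxcars):

```python
import numpy as np

def band_average(spectrum, rsr):
    """Simulate a broadband channel radiance from a hyperspectral curve:
    a relative-spectral-response-weighted average on a uniform grid."""
    spectrum = np.asarray(spectrum, float)
    rsr = np.asarray(rsr, float)
    return (rsr * spectrum).sum() / rsr.sum()

# Illustrative inputs: a radiance spectrum that slopes linearly across
# 10-12 um, and a boxcar RSR covering 10.5-11.5 um (thresholds nudged
# slightly so floating-point grid points at the edges are included).
wl = np.linspace(10.0, 12.0, 201)
spectrum = 2.0 + 0.5 * (wl - 10.0)
rsr = ((wl > 10.495) & (wl < 11.505)).astype(float)
sim = band_average(spectrum, rsr)   # symmetric boxcar -> spectrum value at 11.0 um
```

Because the boxcar is symmetric about 11.0 um and the spectrum is linear, the simulated band radiance equals the spectrum at the band center; with a real, asymmetric RSR the weighting matters, which is exactly why sensor-dependent RSRs must be used in the inter-comparison.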

  11. Single-exposure quantitative phase imaging in color-coded LED microscopy.

    PubMed

    Lee, Wonchan; Jung, Daeseong; Ryu, Suho; Joo, Chulmin

    2017-04-03

    We demonstrate single-shot quantitative phase imaging (QPI) in a platform of color-coded LED microscopy (cLEDscope). The light source in a conventional microscope is replaced by a circular LED pattern that is trisected into subregions with equal area, assigned to red, green, and blue colors. Image acquisition with a color image sensor and subsequent computation based on weak object transfer functions allow for the QPI of a transparent specimen. We also provide a correction method for color-leakage, which may be encountered in implementing our method with consumer-grade LEDs and image sensors. Most commercially available LEDs and image sensors do not provide spectrally isolated emissions and pixel responses, generating significant error in phase estimation in our method. We describe the correction scheme for this color-leakage issue, and demonstrate improved phase measurement accuracy. The computational model and single-exposure QPI capability of our method are presented by showing images of calibrated phase samples and cellular specimens.
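One simple way to picture a color-leakage correction (a deliberately simplified stand-in for the paper's transfer-function-based scheme) is as inversion of a linear crosstalk matrix calibrated from single-LED exposures; the matrix values below are invented for illustration:

```python
import numpy as np

# Hypothetical 3x3 crosstalk matrix M: each measured sensor channel (rows:
# sensor R/G/B) is a mix of the three LED channels (columns: LED R/G/B).
# A real matrix would be calibrated by illuminating with one LED at a time.
M = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.85, 0.05],
              [0.03, 0.07, 0.90]])

def correct_leakage(measured_rgb, M):
    """Recover per-LED intensities from leaked RGB measurements by
    inverting the crosstalk matrix (least-squares for robustness)."""
    return np.linalg.lstsq(M, measured_rgb, rcond=None)[0]

true_channels = np.array([1.0, 0.5, 0.2])
measured = M @ true_channels              # simulate leakage
recovered = correct_leakage(measured, M)  # ~true_channels
```

Without such a correction, the leaked components propagate directly into the transfer-function inversion and bias the recovered phase.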

  12. Evaluation on Radiometric Capability of Chinese Optical Satellite Sensors

    PubMed Central

    Yang, Aixia; Zhong, Bo; Wu, Shanlong; Liu, Qinhuo

    2017-01-01

The radiometric capability of on-orbit sensors should be updated on time due to changes induced by space environmental factors and instrument aging. Some sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS), have onboard calibrators, which enable real-time calibration. However, most Chinese remote sensing satellite sensors lack onboard calibrators. Their radiometric calibrations have been updated once a year based on a vicarious calibration procedure, which has affected the applications of the data. Therefore, a full evaluation of the sensors’ radiometric capabilities is essential before quantitative applications can be made. In this study, a comprehensive procedure for evaluating the radiometric capability of several Chinese optical satellite sensors is proposed. In this procedure, long-term radiometric stability and radiometric accuracy are the two major indicators for radiometric evaluation. The radiometric temporal stability is analyzed by the tendency of long-term top-of-atmosphere (TOA) reflectance variation; the radiometric accuracy is determined by comparison with the TOA reflectance from MODIS after spectral matching. Three Chinese sensors including the Charge-Coupled Device (CCD) camera onboard the Huan Jing 1 satellite (HJ-1), as well as the Visible and Infrared Radiometer (VIRR) and Medium-Resolution Spectral Imager (MERSI) onboard the Feng Yun 3 satellite (FY-3), are evaluated in reflective bands based on this procedure. The results are reasonable, and thus can provide a reliable reference for the sensors’ application, and as such will promote the development of Chinese satellite data. PMID:28117745

  13. Hyperspectral monitoring of chemically sensitive plant sentinels

    NASA Astrophysics Data System (ADS)

    Simmons, Danielle A.; Kerekes, John P.; Raqueno, Nina G.

    2009-08-01

Automated detection of chemical threats is essential for an early warning of a potential attack. Harnessing plants as bio-sensors allows for distributed sensing without a power supply. Monitoring the bio-sensors requires a specifically tailored hyperspectral system. Tobacco plants have been genetically engineered to de-green when a material of interest (e.g. zinc, TNT) is introduced to their immediate vicinity. The reflectance spectra of the bio-sensors must be accurately characterized during the de-greening process for them to play a role in an effective warning system. Hyperspectral data have been collected under laboratory conditions to determine the key regions in the reflectance spectra associated with the de-greening phenomenon. Bio-sensor plants and control (non-genetically engineered) plants were exposed to TNT over the course of two days and their spectra were measured every six hours. Rochester Institute of Technology's Digital Imaging and Remote Sensing Image Generation Model (DIRSIG) was used to simulate detection of de-greened plants in the field. The simulated scene contains a brick school building, sidewalks, trees and the bio-sensors placed at the entrances to the buildings. Trade studies of the bio-sensor monitoring system were also conducted using DIRSIG simulations. System performance was studied as a function of field of view, pixel size, illumination conditions, radiometric noise, spectral waveband dependence and spectral resolution. Preliminary results show that the most significant change in reflectance during the de-greening period occurs in the near infrared region.

  14. Gyrocopter-Based Remote Sensing Platform

    NASA Astrophysics Data System (ADS)

    Weber, I.; Jenal, A.; Kneer, C.; Bongartz, J.

    2015-04-01

In this paper the development of a lightweight and highly modularized airborne sensor platform for remote sensing applications utilizing a gyrocopter as a carrier platform is described. The current sensor configuration consists of a high resolution DSLR camera for VIS-RGB recordings. As a second sensor modality, a snapshot hyperspectral camera was integrated in the aircraft. Moreover, a custom-developed thermal imaging system composed of a VIS-PAN camera and a LWIR camera is used for aerial recordings in the thermal infrared range. Furthermore, another custom-developed, highly flexible imaging system for high resolution multispectral image acquisition with up to six spectral bands in the VIS-NIR range is presented. The performance of the overall system was tested during several flights with all sensor modalities, and the precalculated demands with respect to spatial resolution and reliability were validated. The collected data sets were georeferenced, georectified, orthorectified and then stitched into mosaics.

  15. [A Method to Reconstruct Surface Reflectance Spectrum from Multispectral Image Based on Canopy Radiation Transfer Model].

    PubMed

    Zhao, Yong-guang; Ma, Ling-ling; Li, Chuan-rong; Zhu, Xiao-hua; Tang, Ling-li

    2015-07-01

Due to the lack of enough spectral bands on a multi-spectral sensor, it is difficult to reconstruct a surface reflectance spectrum from the finite spectral information acquired by a multi-spectral instrument. Here, taking full account of the heterogeneity of pixels in remote sensing images, a method is proposed to simulate hyperspectral data from multispectral data based on a canopy radiation transfer model. This method first assumes that mixed pixels contain two types of land cover, i.e., vegetation and soil. The sensitive parameters of the Soil-Leaf-Canopy (SLC) model and a soil ratio factor were retrieved from multi-spectral data based on Look-Up Table (LUT) technology. Then, combined with the soil ratio factor, all the parameters were input into the SLC model to simulate the surface reflectance spectrum from 400 to 2400 nm. Taking a Landsat Enhanced Thematic Mapper Plus (ETM+) image as the reference image, the surface reflectance spectrum was simulated. The simulated reflectance spectrum revealed different feature information for different surface types. To test the performance of this method, the simulated reflectance spectrum was convolved with the Landsat ETM+ spectral response curves and Moderate Resolution Imaging Spectroradiometer (MODIS) spectral response curves to obtain simulated Landsat ETM+ and MODIS images. Finally, the simulated Landsat ETM+ and MODIS images were compared with the observed Landsat ETM+ and MODIS images. The results generally showed high correlation coefficients (Landsat: 0.90-0.99, MODIS: 0.74-0.85) between most simulated bands and observed bands, indicating that the simulated reflectance spectrum is reliable.

  16. Spectral Characterization of Suspected Acid Deposition Damage in Red Spruce (picea Rubens) Stands from Vermont

    NASA Technical Reports Server (NTRS)

    Vogelmann, J. E.; Rock, B. N.

    1985-01-01

In an attempt to demonstrate the utility of remote sensing systems to monitor sites of suspected acid rain deposition damage, intensive field activities, coupled with aircraft overflights, were centered on red spruce stands in Vermont during August and September of 1984. Remote sensing data were acquired using the Airborne Imaging Spectrometer, Thematic Mapper Simulator, Barnes Model 12-1000 Modular Multiband Radiometer and Spectron Engineering Spectrometer (the former two flown on the NASA C-130; the latter two on a Bell UH-1B Iroquois helicopter). Field spectral data were acquired during the week of the August overflights using a high spectral resolution spectrometer and two broad-band radiometers. Preliminary analyses of these data indicate a number of spectral differences in vegetation between high and low damage sites. Some of these differences are subtle, and are observable only with high spectral resolution sensors; others are less subtle and are observable using broad-band sensors.

  17. Dual-mode photosensitive arrays based on the integration of liquid crystal microlenses and CMOS sensors for obtaining the intensity images and wavefronts of objects.

    PubMed

    Tong, Qing; Lei, Yu; Xin, Zhaowei; Zhang, Xinyu; Sang, Hongshi; Xie, Changsheng

    2016-02-08

In this paper, we present a dual-mode photosensitive array (DMPA) constructed by hybrid integration of an electrically driven liquid crystal microlens array (LCMLA) and a CMOS sensor array, which can be used to measure both conventional intensity images and the corresponding wavefronts of objects. We utilize liquid crystal materials to shape the microlens array with an electrically tunable focal length. By switching the voltage signal on and off, the wavefronts and the intensity images can be acquired through the DMPA sequentially. We use white light to obtain the object's wavefronts to avoid losing important wavefront information. We separate the white-light wavefronts, which contain a large number of spectral components, and then experimentally compare them with the single-spectral wavefronts of typical red, green and blue lasers, respectively. Then we mix the red, green and blue wavefronts into a composite wavefront containing more optical information about the object.

  18. Active-passive data fusion algorithms for seafloor imaging and classification from CZMIL data

    NASA Astrophysics Data System (ADS)

    Park, Joong Yong; Ramnath, Vinod; Feygels, Viktor; Kim, Minsu; Mathur, Abhinav; Aitken, Jennifer; Tuell, Grady

    2010-04-01

    CZMIL will simultaneously acquire lidar and passive spectral data. These data will be fused to produce enhanced seafloor reflectance images from each sensor, and combined at a higher level to achieve seafloor classification. In the DPS software, the lidar data will first be processed to solve for depth, attenuation, and reflectance. The depth measurements will then be used to constrain the spectral optimization of the passive spectral data, and the resulting water column estimates will be used recursively to improve the estimates of seafloor reflectance from the lidar. Finally, the resulting seafloor reflectance cube will be combined with texture metrics estimated from the seafloor topography to produce classifications of the seafloor.

  19. Spectral X-Ray Diffraction using a 6 Megapixel Photon Counting Array Detector.

    PubMed

    Muir, Ryan D; Pogranichniy, Nicholas R; Muir, J Lewis; Sullivan, Shane Z; Battaile, Kevin P; Mulichak, Anne M; Toth, Scott J; Keefe, Lisa J; Simpson, Garth J

    2015-03-12

Pixel-array detectors allow single-photon counting to be performed on a massively parallel scale, with several million counting circuits and detectors in the array. Because the number of photoelectrons produced at the detector surface depends on the photon energy, these detectors offer the possibility of spectral imaging. In this work, a statistical model of the instrument response is used to calibrate the detector on a per-pixel basis. In turn, the calibrated sensor was used to perform separation of dual-energy diffraction measurements into two monochromatic images. Target applications include multi-wavelength diffraction to aid in protein structure determination and X-ray diffraction imaging.

  20. Spectral x-ray diffraction using a 6 megapixel photon counting array detector

    NASA Astrophysics Data System (ADS)

    Muir, Ryan D.; Pogranichniy, Nicholas R.; Muir, J. Lewis; Sullivan, Shane Z.; Battaile, Kevin P.; Mulichak, Anne M.; Toth, Scott J.; Keefe, Lisa J.; Simpson, Garth J.

    2015-03-01

Pixel-array detectors allow single-photon counting to be performed on a massively parallel scale, with several million counting circuits and detectors in the array. Because the number of photoelectrons produced at the detector surface depends on the photon energy, these detectors offer the possibility of spectral imaging. In this work, a statistical model of the instrument response is used to calibrate the detector on a per-pixel basis. In turn, the calibrated sensor was used to perform separation of dual-energy diffraction measurements into two monochromatic images. Target applications include multi-wavelength diffraction to aid in protein structure determination and X-ray diffraction imaging.

  1. High density Schottky barrier IRCCD sensors for SWIR applications at intermediate temperature

    NASA Technical Reports Server (NTRS)

    Elabd, H.; Villani, T. S.; Tower, J. R.

    1982-01-01

Monolithic 32 x 64 and 64 x 128 palladium silicide (Pd2Si) interline transfer infrared charge coupled devices (IRCCDs) sensitive in the 1 to 3.5 micron spectral band were developed. This silicon imager exhibits a low response nonuniformity of typically 0.2 to 1.6% rms, and was operated in the temperature range between 40 and 140 K. Spectral response measurements of test Pd2Si p-type Si devices yield quantum efficiencies of 7.9% at 1.25 microns, 5.6% at 1.65 microns, and 2.2% at 2.22 microns. Improvement in quantum efficiency is expected by optimizing the different structural parameters of the Pd2Si detectors. The spectral response of the Pd2Si detectors fits a modified Fowler emission model. The measured photoelectric barrier height for the Pd2Si detectors is 0.34 eV and the measured quantum efficiency coefficient, C1, is 19%/eV. The dark current level of Pd2Si Schottky barrier focal plane arrays (FPAs) is sufficiently low to enable operation at intermediate temperatures at TV frame rates. The typical dark current level measured at 120 K on the FPA is 2 nA/sq cm. The operating temperature of the Pd2Si FPA is compatible with passive cooler performance. In addition, high density Pd2Si Schottky barrier FPAs are manufactured with high yield and therefore represent an economical approach to short wavelength IR imaging. A Pd2Si Schottky barrier image sensor for push-broom multispectral imaging in the 1.25, 1.65, and 2.22 micron bands is being studied. The sensor will have two line arrays (dual band capability) of 512 detectors each, with 30 micron center-to-center detector spacing. The device will be suitable for chip-to-chip abutment, thus providing the capability to produce large, multiple-chip focal planes with contiguous, in-line sensors.
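The Fowler dependence mentioned here has the approximate form QE ≈ C1 (E − ψ)² / E for photon energies E above the barrier ψ. Plugging in the quoted values (ψ = 0.34 eV, C1 = 19%/eV) lands in the range of the measured quantum efficiencies; a minimal sketch (the paper's actual fit may differ in detail):

```python
def fowler_qe(wavelength_um, barrier_ev=0.34, c1_per_ev=0.19):
    """Modified Fowler photoemission model for a Schottky-barrier detector:
    QE ~ C1 * (E - psi)^2 / E for photon energies E above the barrier psi,
    zero below it. Parameter defaults are the values quoted in the abstract."""
    e_photon = 1.2398 / wavelength_um   # photon energy in eV (hc ~ 1.2398 eV*um)
    if e_photon <= barrier_ev:
        return 0.0
    return c1_per_ev * (e_photon - barrier_ev) ** 2 / e_photon

qe_short = fowler_qe(1.25)   # roughly 8%, close to the measured 7.9%
qe_long = fowler_qe(2.22)    # a few percent, near the measured 2.2%
```

The model also makes the long-wavelength cutoff explicit: photons below the 0.34 eV barrier (beyond about 3.6 microns) produce no photoemission, consistent with the stated 1 to 3.5 micron sensitivity band.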

  2. Evaluating an image-fusion algorithm with synthetic-image-generation tools

    NASA Astrophysics Data System (ADS)

    Gross, Harry N.; Schott, John R.

    1996-06-01

An algorithm that combines spectral mixing and nonlinear optimization is used to fuse multiresolution images. Image fusion merges images of different spatial and spectral resolutions to create a high spatial resolution multispectral combination. High spectral resolution allows identification of materials in the scene, while high spatial resolution locates those materials. In this algorithm, conventional spectral mixing estimates the percentage of each material (called endmembers) within each low resolution pixel. Three spectral mixing models are compared: unconstrained, partially constrained, and fully constrained. In the partially constrained application, the endmember fractions are required to sum to one. In the fully constrained application, all fractions are additionally required to lie between zero and one. While negative fractions seem inappropriate, they can arise from random spectral realizations of the materials. In the second part of the algorithm, the low resolution fractions are used as inputs to a constrained nonlinear optimization that calculates the endmember fractions for the high resolution pixels. The constraints mirror the low resolution constraints and maintain consistency with the low resolution fraction results. The algorithm can use one or more higher resolution sharpening images to locate the endmembers to high spatial accuracy. The algorithm was evaluated with synthetic image generation (SIG) tools. A SIG developed image can be used to control the various error sources that are likely to impair the algorithm performance. These error sources include atmospheric effects, mismodeled spectral endmembers, and variability in topography and illumination. By controlling the introduction of these errors, the robustness of the algorithm can be studied and improved upon. The motivation for this research is to take advantage of the next generation of multi/hyperspectral sensors. 
Although the hyperspectral images will be of modest to low resolution, fusing them with high resolution sharpening images will produce a higher spatial resolution land cover or material map.
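The fully constrained mixing model described above (fractions nonnegative and summing to one) can be sketched with SciPy's nonnegative least squares, imposing the sum-to-one condition as a heavily weighted extra equation; the endmember spectra and pixel below are invented for illustration:

```python
import numpy as np
from scipy.optimize import nnls

def fully_constrained_unmix(endmembers, pixel):
    """Fully constrained linear spectral unmixing: fractions must be
    nonnegative and sum to one. The sum-to-one constraint is imposed by
    appending a heavily weighted row of ones to the NNLS system."""
    bands, n_end = endmembers.shape
    delta = 1e3  # weight on the sum-to-one constraint
    A = np.vstack([endmembers, delta * np.ones(n_end)])
    b = np.concatenate([pixel, [delta]])
    fractions, _ = nnls(A, b)
    return fractions

# Hypothetical 2-endmember example: a pixel that is 70% material A, 30% B.
E = np.array([[0.1, 0.6],
              [0.2, 0.5],
              [0.8, 0.3]])            # columns = endmember spectra (3 bands)
pix = 0.7 * E[:, 0] + 0.3 * E[:, 1]
f = fully_constrained_unmix(E, pix)   # ~[0.7, 0.3]
```

Dropping the nonnegativity (plain least squares with only the sum-to-one row) gives the partially constrained variant, where noisy pixels can yield negative fractions, exactly the behavior the abstract notes.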

  3. Empirical test of the spectral invariants theory using imaging spectroscopy data from a coniferous forest

    NASA Astrophysics Data System (ADS)

    Lukeš, Petr; Rautiainen, Miina; Stenberg, Pauline; Malenovský, Zbyněk

    2011-08-01

    The spectral invariants theory presents an alternative approach for modeling canopy scattering in remote sensing applications. The theory is particularly appealing in the case of coniferous forests, which typically display grouped structures and require computationally intensive calculations to account for the geometric arrangement of their canopies. However, the validity of the spectral invariants theory should be tested with empirical data sets from different vegetation types. In this paper, we evaluate a method to retrieve two canopy spectral invariants, the recollision probability and the escape factor, for a coniferous forest using imaging spectroscopy data from the multi-angular CHRIS/PROBA and nadir-view AISA Eagle sensors. Our results indicated that in coniferous canopies the spectral invariants theory performs well in the near-infrared spectral range; in the visible range, on the other hand, it may not be useful. In addition, our study suggested that retrieval of the escape factor could serve as a new method to describe the BRDF of a canopy.
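For context, the spectral invariants literature commonly relates canopy scattering to the leaf single-scattering albedo ω(λ) and a wavelength-independent photon recollision probability p via s(λ) = ω(λ)(1 − p)/(1 − p·ω(λ)). A small numerical sketch of that standard relation, with illustrative albedo values not taken from this study:

```python
import numpy as np

def canopy_scattering(leaf_albedo, p):
    """Canopy scattering from the spectral-invariant relation
    s(lambda) = omega(lambda) * (1 - p) / (1 - p * omega(lambda)),
    with p the wavelength-independent photon recollision probability."""
    omega = np.asarray(leaf_albedo)
    return omega * (1.0 - p) / (1.0 - p * omega)

# Illustrative leaf albedos: low in the visible, high in the NIR
omega = np.array([0.15, 0.45, 0.90])
s = canopy_scattering(omega, p=0.6)
```

The single parameter p compresses the canopy's structural complexity, which is why the approach is attractive for grouped coniferous canopies.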

  4. Physical Interpretation of the Correlation Between Multi-Angle Spectral Data and Canopy Height

    NASA Technical Reports Server (NTRS)

    Schull, M. A.; Ganguly, S.; Samanta, A.; Huang, D.; Shabanov, N. V.; Jenkins, J. P.; Chiu, J. C.; Marshak, A.; Blair, J. B.; Myneni, R. B.

    2007-01-01

    Recent empirical studies have shown that multi-angle spectral data can be useful for predicting canopy height, but the physical reason for this correlation was not understood. We follow the concept of canopy spectral invariants, specifically the escape probability, to gain insight into the observed correlation. Airborne Multi-angle Imaging SpectroRadiometer (AirMISR) and airborne Laser Vegetation Imaging Sensor (LVIS) data acquired during a NASA Terrestrial Ecology Program aircraft campaign underlie our analysis. Two multivariate linear regression models were developed to estimate LVIS height measures from 28 AirMISR multi-angle spectral reflectances and from the spectrally invariant escape probability at 7 AirMISR view angles. Both models achieved nearly the same accuracy, suggesting that canopy spectral invariant theory can explain the observed correlation. We hypothesize that the escape probability is sensitive to the aspect ratio (crown diameter to crown height). The multi-angle spectral data alone therefore may not provide enough information to retrieve canopy height globally.

  5. Design of the OMPS limb sensor correction algorithm

    NASA Astrophysics Data System (ADS)

    Jaross, Glen; McPeters, Richard; Seftor, Colin; Kowitt, Mark

    The Sensor Data Records (SDRs) for the Ozone Mapping and Profiler Suite (OMPS) on NPOESS (National Polar-orbiting Operational Environmental Satellite System) contain geolocated and calibrated radiances and are similar to the Level 1 data of the NASA Earth Observing System and other programs. The SDR algorithms (one for each of the 3 OMPS focal planes) are the processes by which the Raw Data Records (RDRs) from the OMPS sensors are converted into records that contain all data necessary for ozone retrievals. Consequently, the algorithms must correct and calibrate Earth signals, geolocate the data, and identify and ingest collocated ancillary data. As with other limb sensors, ozone profile retrievals are relatively insensitive to calibration errors due to the use of altitude normalization and wavelength pairing. But the profile retrievals as they pertain to OMPS are not immune to sensor changes. In particular, the OMPS Limb sensor images an altitude range of > 100 km and a spectral range of 290-1000 nm on its detector. Uncorrected sensor degradation and spectral registration drifts can lead to changes in the measured radiance profile, which in turn affect the ozone trend measurement. Since OMPS is intended for long-term monitoring, sensor calibration is a specific concern. The calibration is maintained via the ground data processing: all sensor calibration data, including direct solar measurements, are brought down in the raw data and processed separately by the SDR algorithms. One of the sensor corrections performed by the algorithm is the correction for stray light. The imaging spectrometer and the unique focal plane design of OMPS make these corrections particularly challenging and important. Following an overview of the algorithm flow, we briefly describe the sensor stray light characterization and the correction approach used in the code.

  6. Determination of Primary Spectral Bands for Remote Sensing of Aquatic Environments

    PubMed Central

    Lee, ZhongPing; Carder, Kendall; Arnone, Robert; He, MingXia

    2007-01-01

    About 30 years ago, NASA launched the first ocean-color observing satellite: the Coastal Zone Color Scanner (CZCS). CZCS had 5 bands in the visible-infrared domain with the objective of detecting changes of phytoplankton (measured by concentration of chlorophyll) in the oceans. Twenty years later, for the same objective but with advanced technology, the Sea-viewing Wide Field-of-view Sensor (SeaWiFS, 7 bands), the Moderate Resolution Imaging Spectroradiometer (MODIS, 8 bands), and the Medium Resolution Imaging Spectrometer (MERIS, 12 bands) were launched. The selection of the number of bands and their positions was based on experimental and theoretical results achieved before the design of these satellite sensors. Recently, Lee and Carder (2002) demonstrated that for adequate derivation of major properties (phytoplankton biomass, colored dissolved organic matter, suspended sediments, and bottom properties) in both oceanic and coastal environments from observation of water color, it is better for a sensor to have ∼15 bands in the 400-800 nm range. That study, however, did not provide detailed analyses regarding the spectral locations of the 15 bands. Here, from nearly 400 hyperspectral (∼3-nm resolution) measurements of remote-sensing reflectance (a measure of water color) taken in both coastal and oceanic waters, covering both optically deep and optically shallow waters, first- and second-order derivatives were calculated after interpolating the measurements to 1-nm resolution. From these derivatives, the frequency of zero values at each wavelength was counted, and the distribution spectrum of such frequencies was obtained. The wavelengths with the highest frequency of zeros were then identified. 
Because these spectral locations indicate extrema (a local maximum or minimum) of the reflectance spectrum or inflections of the spectral curvature, placing the bands of a sensor at these wavelengths maximizes the potential of capturing (and then restoring) the spectral curve, and thus maximizes the potential of accurately deriving properties of the water column and/or bottom of various aquatic environments with a multi-band sensor. PMID:28903303

  7. Dim target detection method based on salient graph fusion

    NASA Astrophysics Data System (ADS)

    Hu, Ruo-lan; Shen, Yi-yan; Jiang, Jun

    2018-02-01

    Dim target detection is a key problem in digital image processing. With the development of multi-spectral imaging sensors, fusing information from different spectral images has become a common way to improve dim target detection performance. In this paper, a dim target detection method based on salient graph fusion is proposed. In the method, Gabor filters with multiple directions and contrast filters with multiple scales are combined to construct a salient graph from a digital image. A maximum-salience fusion strategy is then designed to fuse the salient graphs from different spectral images, and a top-hat filter is used to detect dim targets in the fused salient graph. Experimental results show that the proposed method improves the probability of target detection and reduces the probability of false alarm on cluttered background images.
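A rough sketch of this pipeline: per-band saliency maps, pixelwise-maximum fusion across spectral bands, then top-hat filtering and a threshold. For simplicity this substitutes a multi-scale center-surround contrast filter for the paper's Gabor filter bank, and all sizes and thresholds are illustrative:

```python
import numpy as np
from scipy import ndimage

def saliency(img, scales=(1, 2, 4)):
    """Multi-scale center-surround contrast map (a simplified stand-in
    for the paper's Gabor + contrast filter bank)."""
    img = img.astype(float)
    maps = [np.abs(ndimage.gaussian_filter(img, s) -
                   ndimage.gaussian_filter(img, 3 * s)) for s in scales]
    return np.maximum.reduce(maps)

def detect_dim_target(bands, threshold=0.5):
    """Fuse per-band saliency maps by pixelwise maximum, suppress slowly
    varying background with a white top-hat, then threshold."""
    fused = np.maximum.reduce([saliency(b) for b in bands])
    residual = ndimage.white_tophat(fused, size=9)
    return residual > threshold * residual.max()

# Synthetic two-band scene: noise background plus a dim 2x2 target at (20, 30)
rng = np.random.default_rng(0)
band1 = rng.normal(0, 0.02, (64, 64)); band1[20:22, 30:32] += 0.3
band2 = rng.normal(0, 0.02, (64, 64)); band2[20:22, 30:32] += 0.2
mask = detect_dim_target([band1, band2])
```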

  8. EARTHS (Earth Albedo Radiometer for Temporal Hemispheric Sensing)

    NASA Astrophysics Data System (ADS)

    Ackleson, S. G.; Bowles, J. H.; Mouroulis, P.; Philpot, W. D.

    2018-02-01

    We propose a concept for measuring the hemispherical Earth albedo in high temporal and spectral resolution using a hyperspectral imaging sensor deployed on a lunar satellite, such as the proposed NASA Deep Space Gateway.

  9. Cross-calibration of MODIS with ETM+ and ALI sensors for long-term monitoring of land surface processes

    USGS Publications Warehouse

    Meyer, D.; Chander, G.

    2006-01-01

    Increasingly, data from multiple sensors are used to gain a more complete understanding of land surface processes at a variety of scales. Although higher-level products (e.g., vegetation cover, albedo, surface temperature) derived from different sensors can be validated independently, the degree to which these sensors and their products can be compared to one another is vastly improved if their relative spectroradiometric responses are known. Most often, sensors are directly calibrated to diffuse solar irradiation or vicariously to ground targets. However, space-based targets are not traceable to metrological standards, and vicarious calibrations are expensive and provide a poor sampling of a sensor's full dynamic range. Cross-calibration of two sensors can augment these methods if certain conditions can be met: (1) the spectral responses are similar; (2) the observations are reasonably concurrent (similar atmospheric and solar illumination conditions); (3) errors due to misregistration of inhomogeneous surfaces can be minimized (including scale differences); and (4) the viewing geometry is similar (or some reasonable knowledge of surface bi-directional reflectance distribution functions is available). This study explores the impacts of cross-calibrating sensors when such conditions are met to some degree but not perfectly. In order to constrain the range of conditions, the analysis is limited to sensors for which cross-calibration studies have been conducted (the Enhanced Thematic Mapper Plus (ETM+) on Landsat-7 (L7), and the Advanced Land Imager (ALI) and Hyperion on Earth Observing-1 (EO-1)), including systems having somewhat dissimilar geometry, spatial resolution and spectral response characteristics but that are still part of the so-called "A.M. constellation" (with the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Terra platform). Measures of spectral response differences and methods for cross-calibrating such sensors are provided in this study. 
These instruments are cross calibrated using the Railroad Valley playa in Nevada. Best fit linear coefficients (slope and offset) are provided for ALI-to-MODIS and ETM+-to-MODIS cross calibrations, and root-mean-squared errors (RMSEs) and correlation coefficients are provided to quantify the uncertainty in these relationships. In theory, the linear fits and uncertainties can be used to compare radiance and reflectance products derived from each instrument.
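A minimal sketch of such a cross-calibration fit (best-fit slope and offset, plus RMSE and correlation coefficient to quantify the uncertainty) on synthetic near-coincident radiances; the numbers are illustrative, not the published coefficients:

```python
import numpy as np

def cross_calibrate(radiance_a, radiance_b):
    """Linear fit mapping sensor A radiances onto sensor B, with RMSE
    and correlation coefficient as uncertainty measures."""
    slope, offset = np.polyfit(radiance_a, radiance_b, 1)
    predicted = slope * radiance_a + offset
    rmse = np.sqrt(np.mean((radiance_b - predicted) ** 2))
    r = np.corrcoef(radiance_a, radiance_b)[0, 1]
    return slope, offset, rmse, r

# Synthetic near-coincident observations over a bright, uniform target
rng = np.random.default_rng(1)
a = np.linspace(50, 300, 40)                     # sensor A radiances
b = 0.97 * a + 2.5 + rng.normal(0, 1.0, a.size)  # sensor B, small noise
slope, offset, rmse, r = cross_calibrate(a, b)
```

The fitted slope/offset pair can then translate radiance or reflectance products from one instrument into the other's scale, as the abstract describes.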

  10. An oil film information retrieval method overcoming the influence of sun glitter, based on AISA+ airborne hyper-spectral image

    NASA Astrophysics Data System (ADS)

    Zhan, Yuanzeng; Mao, Tianming; Gong, Fang; Wang, Difeng; Chen, Jianyu

    2010-10-01

    As an effective survey tool for oil spill detection, the airborne hyper-spectral sensor offers the potential to retrieve quantitative information about oil slicks, which is useful for the cleanup of spilled oil. However, many airborne hyper-spectral images are affected by sun glitter, which distorts the radiance values and spectral ratios used for oil slick detection. In 2005, an oil spill occurred at a drilling platform in the South China Sea; an AISA+ airborne hyper-spectral image of this event, severely affected by sun glitter, is studied in this paper. Through a spectrum analysis of the oil and water samples, two features -- "spectral rotation" and "a pair of fixed points" -- can be found in the spectral curves of crude oil film and water. Based on these features, an oil film information retrieval method that can overcome the influence of sun glitter is presented. Firstly, the radiance of the image is converted to normal apparent reflectance (NormAR). Then, based on the features of "spectral rotation" (used for distinguishing oil film and water) and "a pair of fixed points" (used for overcoming the effect of sun glitter), NormAR894/NormAR516 is selected as an indicator of oil film. Finally, by using a threshold combined with image filtering and mathematical morphology, the distribution and relative thickness of the oil film are retrieved.
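The final retrieval step (band ratio, threshold, filtering, morphology) might look roughly like the following sketch. The band pair follows the NormAR894/NormAR516 indicator named above, but the reflectance values and the threshold are invented for illustration:

```python
import numpy as np
from scipy import ndimage

def oil_film_mask(norm_ar_894, norm_ar_516, ratio_threshold=1.0):
    """Flag oil-film pixels where the NormAR(894 nm)/NormAR(516 nm) ratio
    exceeds a threshold, then clean the mask with a median filter and a
    morphological opening (stand-ins for the paper's filtering and
    mathematical-morphology steps)."""
    ratio = norm_ar_894 / np.clip(norm_ar_516, 1e-6, None)
    mask = ratio > ratio_threshold
    mask = ndimage.median_filter(mask.astype(np.uint8), size=3) > 0
    return ndimage.binary_opening(mask, structure=np.ones((3, 3)))

# Synthetic scene: water (ratio < 1) with an oil patch (ratio > 1)
nir = np.full((40, 40), 0.02); nir[10:25, 10:25] = 0.08
grn = np.full((40, 40), 0.05); grn[10:25, 10:25] = 0.04
mask = oil_film_mask(nir, grn)
```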

  11. Assessing the capabilities of hyperspectral remote sensing to map oil films on waters

    NASA Astrophysics Data System (ADS)

    Liu, Bingxin; Li, Ying; Zhu, Xueyuan

    2014-11-01

    The harm of oil spills has caused extensive public concern. Remote sensing has become one of the most effective means of monitoring oil spills. However, how to evaluate the information extraction capabilities of various sensors and choose the most effective one has become an important issue. Current evaluations of sensors for oil film detection mainly use in-situ measured spectra as a reference to determine the favorable bands, ignoring the effects of environmental noise and the spectral response function. To understand the precision and accuracy of environment variables acquired from remote sensing, it is important to evaluate the target detection sensitivity of the entire sensor-air-target system with respect to changes in reflectivity. The associated measurement quantity is the environmental noise-equivalent reflectance difference (NEΔRE), which depends on the instrument signal-to-noise ratio (SNR) and other image data noise (such as atmospheric variables, scattered sky light, direct sunlight, etc.). Hyperion remote sensing data is taken as an example for evaluating oil spill detection capability, with the prerequisite that the impact of spatial resolution is ignored. To evaluate the sensor's sensitivity to oil film on water, reflectance spectra of light diesel and crude oil films were used. To obtain Hyperion reflectance data, FLAASH was used for atmospheric correction. The spectral response functions of the Hyperion sensor were used to filter the measured reflectance of the oil films to the theoretical spectral response. These spectral response spectra were then normalized to NEΔRE, according to which the sensitivity of the sensor in oil film detection could be evaluated. 
For crude oil, the range over which the Hyperion sensor can identify the film lies within wavelengths from 518 nm to 610 nm (Band 17 to Band 26 of the Hyperion sensor), within which thin and thick films can also be distinguished. For light diesel oil film, the range lies within wavelengths from 468 nm to 752 nm (Band 12 to Band 40 of the Hyperion sensor).
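Simulating a sensor band by filtering a measured reflectance spectrum through its spectral response function, as done here for Hyperion, reduces to an SRF-weighted average. A minimal sketch assuming a Gaussian SRF (the real Hyperion SRFs are tabulated, not Gaussian):

```python
import numpy as np

def band_reflectance(wl, reflectance, srf_center, srf_fwhm):
    """Simulate one sensor band by weighting a reflectance spectrum with
    a Gaussian spectral response function of given center and FWHM."""
    sigma = srf_fwhm / 2.3548                 # FWHM -> Gaussian sigma
    srf = np.exp(-0.5 * ((wl - srf_center) / sigma) ** 2)
    return np.sum(srf * reflectance) / np.sum(srf)

# A flat 5% reflectance spectrum seen through a 10-nm band at 550 nm
wl = np.arange(400, 701, 1, dtype=float)
refl = np.full_like(wl, 0.05)
r550 = band_reflectance(wl, refl, srf_center=550.0, srf_fwhm=10.0)
```

Applying this per band to the measured oil-film spectra, then comparing band-to-band reflectance differences against NEΔRE, gives the detectability assessment the abstract describes.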

  12. Commercial CMOS image sensors as X-ray imagers and particle beam monitors

    NASA Astrophysics Data System (ADS)

    Castoldi, A.; Guazzoni, C.; Maffessanti, S.; Montemurro, G. V.; Carraresi, L.

    2015-01-01

    CMOS image sensors are widely used in several applications such as mobile handsets, webcams and digital cameras, among others. Furthermore, they are available across a wide range of resolutions with excellent spectral and chromatic responses. In order to fulfill the need for cheap beam monitors and high-resolution image sensors for scientific applications, we exploited the possibility of using commercial CMOS image sensors as X-ray and proton detectors. Two different sensors have been mounted and tested. An Aptina MT9V034, featuring 752 × 480 pixels of 6 μm × 6 μm pixel size, has been mounted and successfully tested as a bi-dimensional beam profile monitor, able to take pictures of the incoming proton bunches at the DeFEL beamline (1-6 MeV pulsed proton beam) of the LaBeC of INFN in Florence. The naked sensor is able to successfully detect the interactions of single protons. The sensor point-spread function (PSF) has been qualified with 1 MeV protons and is equal to one pixel (6 μm) r.m.s. in both directions. A second sensor, an MT9M032, featuring 1472 × 1096 pixels of 2.2 μm × 2.2 μm pixel size, has been mounted on a dedicated board as a high-resolution imager to be used in X-ray imaging experiments with table-top generators. In order to ease and simplify the data transfer and the image acquisition, the system is controlled by a dedicated micro-processor board (DM3730 1 GHz SoC ARM Cortex-A8) on which a modified Linux kernel has been implemented. The paper presents the architecture of the sensor systems and the results of the experimental measurements.

  13. Design framework for a spectral mask for a plenoptic camera

    NASA Astrophysics Data System (ADS)

    Berkner, Kathrin; Shroff, Sapna A.

    2012-01-01

    Plenoptic cameras are designed to capture different combinations of light rays from a scene, sampling its lightfield. Such camera designs, capturing directional ray information, enable applications such as digital refocusing, rotation, or depth estimation. Only a few address capturing spectral information of the scene. It has been demonstrated that by modifying a plenoptic camera with a filter array containing different spectral filters inserted in the pupil plane of the main lens, sampling of the spectral dimension of the plenoptic function is performed. As a result, the plenoptic camera is turned into a single-snapshot multispectral imaging system that trades off spatial against spectral information captured with a single sensor. Little work has been performed so far on analyzing the effects of diffraction and aberrations of the optical system on the performance of the spectral imager. In this paper we demonstrate simulation of a spectrally-coded plenoptic camera optical system via wave propagation analysis, evaluate the quality of the spectral measurements captured at the detector plane, and demonstrate opportunities for optimization of the spectral mask for a few sample applications.

  14. Report on International Spaceborne Imaging Spectroscopy Technical Committee Calibration and Validation Workshop, National Environment Research Council Field Spectroscopy Facility, University of Edinburgh

    NASA Technical Reports Server (NTRS)

    Ong, C.; Mueller, A.; Thome, K.; Bachmann, M.; Czapla-Myers, J.; Holzwarth, S.; Khalsa, S. J.; Maclellan, C.; Malthus, T.; Nightingale, J.

    2016-01-01

    Calibration and validation are fundamental for obtaining quantitative information from Earth Observation (EO) sensor data. Recognising this and the impending launch of at least five sensors in the next five years, the International Spaceborne Imaging Spectroscopy Technical Committee instigated a calibration and validation initiative. A workshop was conducted recently as part of this initiative with the objective of establishing a good practice framework for radiometric and spectral calibration and validation in support of spaceborne imaging spectroscopy missions. This paper presents the outcomes and recommendations for future work arising from the workshop.

  15. Snapshot hyperspectral fovea vision system (HyperVideo)

    NASA Astrophysics Data System (ADS)

    Kriesel, Jason; Scriven, Gordon; Gat, Nahum; Nagaraj, Sheela; Willson, Paul; Swaminathan, V.

    2012-06-01

    The development and demonstration of a new snapshot hyperspectral sensor is described. The system is a significant extension of the four dimensional imaging spectrometer (4DIS) concept, which resolves all four dimensions of hyperspectral imaging data (2D spatial, spectral, and temporal) in real-time. The new sensor, dubbed "4×4DIS" uses a single fiber optic reformatter that feeds into four separate, miniature visible to near-infrared (VNIR) imaging spectrometers, providing significantly better spatial resolution than previous systems. Full data cubes are captured in each frame period without scanning, i.e., "HyperVideo". The current system operates up to 30 Hz (i.e., 30 cubes/s), has 300 spectral bands from 400 to 1100 nm (~2.4 nm resolution), and a spatial resolution of 44×40 pixels. An additional 1.4 Megapixel video camera provides scene context and effectively sharpens the spatial resolution of the hyperspectral data. Essentially, the 4×4DIS provides a 2D spatially resolved grid of 44×40 = 1760 separate spectral measurements every 33 ms, which is overlaid on the detailed spatial information provided by the context camera. The system can use a wide range of off-the-shelf lenses and can either be operated so that the fields of view match, or in a "spectral fovea" mode, in which the 4×4DIS system uses narrow field of view optics, and is cued by a wider field of view context camera. Unlike other hyperspectral snapshot schemes, which require intensive computations to deconvolve the data (e.g., Computed Tomographic Imaging Spectrometer), the 4×4DIS requires only a linear remapping, enabling real-time display and analysis. The system concept has a range of applications including biomedical imaging, missile defense, infrared counter measure (IRCM) threat characterization, and ground based remote sensing.

  16. Implications of sensor design for coral reef detection: Upscaling ground hyperspectral imagery in spatial and spectral scales

    NASA Astrophysics Data System (ADS)

    Caras, Tamir; Hedley, John; Karnieli, Arnon

    2017-12-01

    Remote sensing offers a potential tool for large scale environmental surveying and monitoring. However, remote observations of coral reefs are difficult especially due to the spatial and spectral complexity of the target compared to sensor specifications as well as the environmental implications of the water medium above. The development of sensors is driven by technological advances and the desired products. Currently, spaceborne systems are technologically limited to a choice between high spectral resolution and high spatial resolution, but not both. The current study explores the dilemma of whether future sensor design for marine monitoring should prioritise on improving their spatial or spectral resolution. To address this question, a spatially and spectrally resampled ground-level hyperspectral image was used to test two classification elements: (1) how the tradeoff between spatial and spectral resolutions affects classification; and (2) how a noise reduction by majority filter might improve classification accuracy. The studied reef, in the Gulf of Aqaba (Eilat), Israel, is heterogeneous and complex so the local substrate patches are generally finer than currently available imagery. Therefore, the tested spatial resolution was broadly divided into four scale categories from five millimeters to one meter. Spectral resolution resampling aimed to mimic currently available and forthcoming spaceborne sensors such as (1) Environmental Mapping and Analysis Program (EnMAP) that is characterized by 25 bands of 6.5 nm width; (2) VENμS with 12 narrow bands; and (3) the WorldView series with broadband multispectral resolution. Results suggest that spatial resolution should generally be prioritized for coral reef classification because the finer spatial scale tested (pixel size < 0.1 m) may compensate for some low spectral resolution drawbacks. 
In this regard, it is shown that post-classification majority filtering substantially improves the accuracy at all pixel sizes, up to the point where the kernel size reaches the average unit size (pixel < 0.25 m). However, careful investigation of the effect of band distribution and choice could improve a sensor's suitability for marine environment tasks. With this in mind, while the focus of this study was on the technologically limited spaceborne design, aerial sensors may presently provide an opportunity to implement the suggested setup.
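Post-classification majority filtering, as evaluated above, replaces each pixel with the most frequent class label in its neighborhood. A minimal sketch, not tied to the study's data:

```python
import numpy as np
from scipy import ndimage

def majority_filter(class_map, size=3):
    """Post-classification majority (mode) filter: each pixel becomes the
    most frequent class label inside a size x size window."""
    def mode(window):
        values, counts = np.unique(window, return_counts=True)
        return values[np.argmax(counts)]
    return ndimage.generic_filter(class_map, mode, size=size)

# A toy two-class substrate map with isolated single-pixel "noise" labels
cmap = np.zeros((9, 9), dtype=int)
cmap[:, 4:] = 1          # right half: class 1
cmap[2, 1] = 1           # speckle inside the class-0 region
cmap[6, 7] = 0           # speckle inside the class-1 region
smoothed = majority_filter(cmap)
```

As the study notes, the benefit holds only while the window stays smaller than the typical substrate patch; a larger kernel starts erasing real class boundaries.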

  17. Differentiating Biological Colours with Few and Many Sensors: Spectral Reconstruction with RGB and Hyperspectral Cameras

    PubMed Central

    Garcia, Jair E.; Girard, Madeline B.; Kasumovic, Michael; Petersen, Phred; Wilksch, Philip A.; Dyer, Adrian G.

    2015-01-01

    Background The ability to discriminate between two similar or progressively dissimilar colours is important for many animals, as it allows for accurately interpreting visual signals produced by key target stimuli or distractor information. Spectrophotometry objectively measures the spectral characteristics of these signals, but is often limited to point samples that could underestimate spectral variability within a single sample. Algorithms for RGB images, and digital imaging devices with many more than three channels (hyperspectral cameras), have recently been developed to produce image spectrophotometers able to recover reflectance spectra at individual pixel locations. We compare a linearised RGB camera and a hyperspectral camera in terms of their individual capacities to discriminate between colour targets of varying perceptual similarity for a human observer. Main Findings (1) The colour discrimination power of the RGB device depends on the colour similarity between the samples, whilst the hyperspectral device enables the reconstruction of a unique spectrum for each sampled pixel location independently of chromatic appearance. (2) Uncertainty associated with spectral reconstruction from RGB responses results from the joint effect of metamerism and spectral variability within a single sample. Conclusion (1) RGB devices give a valuable insight into the limitations of colour discrimination with a low number of photoreceptors, as the principles involved in the interpretation of photoreceptor signals in trichromatic animals also apply to RGB camera responses. (2) The hyperspectral camera architecture provides a means to explore other important aspects of colour vision, such as the perception of certain types of camouflage, and colour constancy, where multiple narrow-band sensors increase resolution. PMID:25965264
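The linear spectral-reconstruction idea behind such RGB image spectrophotometers can be sketched as learning a least-squares mapping from camera responses back to spectra; with only three channels the inverse is heavily underdetermined, which is where the metamerism limitation enters. Everything below (channel sensitivities, training spectra) is synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
wl_bins = 31                                   # 400-700 nm at 10-nm steps

# Hypothetical linearised camera: three broad Gaussian channel sensitivities
wl = np.linspace(400, 700, wl_bins)
centers, widths = (450, 550, 600), (40, 50, 60)
S = np.stack([np.exp(-0.5 * ((wl - c) / w) ** 2) for c, w in zip(centers, widths)])

# Training set of random reflectances and their RGB responses
train = np.clip(rng.normal(0.5, 0.15, (200, wl_bins)), 0, 1)
rgb_train = train @ S.T                        # (200, 3) camera responses

# Linear reconstruction operator: least-squares fit from RGB to spectra
W, *_ = np.linalg.lstsq(rgb_train, train, rcond=None)

# Reconstruct a held-out reflectance from its three camera responses
test_spectrum = np.clip(rng.normal(0.5, 0.15, wl_bins), 0, 1)
recovered = (test_spectrum @ S.T) @ W
```

Spectra that differ but produce identical RGB triples (metamers) necessarily map to the same `recovered` curve, illustrating the paper's point about RGB reconstruction uncertainty; a hyperspectral camera with many narrow channels makes the same mapping far better conditioned.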

  18. Estimating Achievable Accuracy for Global Imaging Spectroscopy Measurement of Non-Photosynthetic Vegetation Cover

    NASA Astrophysics Data System (ADS)

    Dennison, P. E.; Kokaly, R. F.; Daughtry, C. S. T.; Roberts, D. A.; Thompson, D. R.; Chambers, J. Q.; Nagler, P. L.; Okin, G. S.; Scarth, P.

    2016-12-01

    Terrestrial vegetation is dynamic, expressing seasonal, annual, and long-term changes in response to climate and disturbance. Phenology and disturbance (e.g. drought, insect attack, and wildfire) can result in a transition from photosynthesizing "green" vegetation to non-photosynthetic vegetation (NPV). NPV cover can include dead and senescent vegetation, plant litter, agricultural residues, and non-photosynthesizing stem tissue. NPV cover is poorly captured by conventional remote sensing vegetation indices, but it is readily separable from substrate cover based on spectral absorption features in the shortwave infrared. We will present past research motivating the need for global NPV measurements, establishing that mapping seasonal NPV cover is critical for improving our understanding of ecosystem function and carbon dynamics. We will also present new research that helps determine a best achievable accuracy for NPV cover estimation. To test the sensitivity of different NPV cover estimation methods, we simulated satellite imaging spectrometer data using field spectra collected over mixtures of NPV, green vegetation, and soil substrate. We incorporated atmospheric transmittance and modeled sensor noise to create simulated spectra with spectral resolutions ranging from 10 to 30 nm. We applied multiple methods of NPV estimation to the simulated spectra, including spectral indices, spectral feature analysis, multiple endmember spectral mixture analysis, and partial least squares regression, and compared the accuracy and bias of each method. These results prescribe sensor characteristics for an imaging spectrometer mission with NPV measurement capabilities, as well as a "Quantified Earth Science Objective" for global measurement of NPV cover. Copyright 2016, all rights reserved.

  19. Method of radiometric quality assessment of NIR images acquired with a custom sensor mounted on an unmanned aerial vehicle

    NASA Astrophysics Data System (ADS)

    Wierzbicki, Damian; Fryskowska, Anna; Kedzierski, Michal; Wojtkowska, Michalina; Delis, Paulina

    2018-01-01

    Unmanned aerial vehicles are suited to various photogrammetry and remote sensing missions. Such platforms are equipped with various optoelectronic sensors imaging in the visible and infrared spectral ranges and also thermal sensors. Nowadays, near-infrared (NIR) images acquired from low altitudes are often used for producing orthophoto maps for precision agriculture among other things. One major problem results from the application of low-cost custom and compact NIR cameras with wide-angle lenses introducing vignetting. In numerous cases, such cameras acquire low radiometric quality images depending on the lighting conditions. The paper presents a method of radiometric quality assessment of low-altitude NIR imagery data from a custom sensor. The method utilizes statistical analysis of NIR images. The data used for the analyses were acquired from various altitudes in various weather and lighting conditions. An objective NIR imagery quality index was determined as a result of the research. The results obtained using this index enabled the classification of images into three categories: good, medium, and low radiometric quality. The classification makes it possible to determine the a priori error of the acquired images and assess whether a rerun of the photogrammetric flight is necessary.

  20. Novel snapshot hyperspectral imager for fluorescence imaging

    NASA Astrophysics Data System (ADS)

    Chandler, Lynn; Chandler, Andrea; Periasamy, Ammasi

    2018-02-01

    Hyperspectral imaging has emerged as a new technique for the identification and classification of biological tissue. Benefitting from recent developments in sensor technology, the new class of hyperspectral imagers can capture entire hypercubes in single-shot operation, showing great potential for real-time imaging in the biomedical sciences. This paper explores the use of a snapshot imager in fluorescence imaging via microscope for the first time. Utilizing the latest imaging sensor, the snapshot imager is both compact and attachable via C-mount to any commercially available light microscope. Using this setup, fluorescence hypercubes of several cells were generated, containing both spatial and spectral information. The fluorescence images were acquired in one shot over the entire emission range from visible to near infrared (VIS-IR). The paper presents hypercube images obtained from example tissues (475-630 nm). This study demonstrates the potential for real-time monitoring applications in cell biology and biomedicine.

  1. Landsat 7 thermal-IR image sharpening using an artificial neural network and sensor model

    USGS Publications Warehouse

    Lemeshewsky, G.P.; Schowengerdt, R.A.

    2001-01-01

    The Enhanced Thematic Mapper Plus (ETM+) instrument on Landsat 7 shares the same basic design as the TM sensors on Landsats 4 and 5, with some significant improvements. In common are six multispectral bands with a 30-m ground-projected instantaneous field of view (GIFOV). However, the thermal-IR (TIR) band now has a 60-m GIFOV instead of 120-m, and a 15-m panchromatic band has been added. The artificial neural network (NN) image sharpening method described here uses data from the higher spatial resolution ETM+ bands to enhance (sharpen) the spatial resolution of the TIR imagery. It is based on an assumed correlation, over multiple scales of resolution, between image edge contrast patterns in the TIR band and several other spectral bands. A multilayer, feedforward NN is trained to approximate TIR data at 60-m resolution, given spatially degraded (from 30-m to 60-m) input from spectral bands 7, 5, and 2. After training, the NN output for full-resolution input generates an approximation of a TIR image at 30-m resolution. Two methods are used to degrade the spatial resolution of the imagery used for NN training, and the corresponding sharpening results are compared. One degradation method uses a published sensor transfer function (TF) for Landsat 5 to simulate coarser resolution sensor imagery from higher resolution imagery. The second method, used for comparison, is simple Gaussian low-pass filtering and subsampling, where the Gaussian filter approximates the full width at half maximum amplitude characteristics of the TF-based spatial filter. Two fixed-size NNs (that is, with the same number of weights and processing elements) were trained separately with the degraded resolution data, and the sharpening results were compared. The comparison evaluates the relative influence of the degradation technique employed and whether it is desirable to incorporate a sensor TF model. Preliminary results indicate some improvement for the sensor model-based technique. Further evaluation using a higher resolution reference image and stricter application of the sensor model to the data is recommended.
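The training step described above can be sketched with a small feedforward network. This is only an illustration: the synthetic pixel data, the linear band mixture, and the network size below are invented stand-ins, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: rows are pixels, inputs are degraded-resolution
# values of bands 7, 5 and 2, and the target is the co-registered 60-m TIR
# value (a hypothetical linear mixture here; the real relation is learned
# from imagery).
n_pix = 2000
X = rng.random((n_pix, 3))
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]).reshape(-1, 1)

# One-hidden-layer feedforward network trained by full-batch gradient descent.
n_hidden, lr = 8, 0.2
W1 = rng.normal(0.0, 0.3, (3, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.3, (n_hidden, 1)); b2 = np.zeros(1)

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # approximated TIR output
    err = pred - y
    # backpropagate the mean-squared error
    gW2 = h.T @ err / n_pix; gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h**2)
    gW1 = X.T @ dh / n_pix; gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# In the real method, feeding full-resolution band 7/5/2 pixels through the
# trained network would yield the sharpened 30-m TIR estimate.
rmse = float(np.sqrt(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)))
print(f"training RMSE: {rmse:.4f}")
```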

  2. Hyperspectral Imaging of Forest Resources: The Malaysian Experience

    NASA Astrophysics Data System (ADS)

    Mohd Hasmadi, I.; Kamaruzaman, J.

    2008-08-01

    Remote sensing using satellite and aircraft images is a well-established technology. The application of hyperspectral imaging, however, is relatively new to Malaysian forestry. Hyperspectral data capture narrow spectral bands precisely across a wide range of wavelengths. Airborne sensors typically offer greatly enhanced spatial and spectral resolution over their satellite counterparts, and they allow the experimental design to be controlled closely during image acquisition. The first study using hyperspectral imaging for forest inventory in Malaysia was conducted by Professor Hj. Kamaruzaman of the Faculty of Forestry, Universiti Putra Malaysia, in 2002, using the AISA sensor manufactured by Specim Ltd, Finland. The main objective has been to develop methods directly suited to practical tropical forestry applications at a high level of accuracy. Forest inventory and tree classification, including the development of single spectral signatures, have been the most important interests in current practice. Experience from these studies shows that retrieval of timber volume and tree discrimination using this system performs well, and in some respects better than other remote sensing methods. This article reviews the research and application of airborne hyperspectral remote sensing for forest survey and assessment in Malaysia.

  3. Examining the strength of the newly-launched Sentinel 2 MSI sensor in detecting and discriminating subtle differences between C3 and C4 grass species

    NASA Astrophysics Data System (ADS)

    Shoko, C.; Mutanga, O.

    2017-07-01

    C3 and C4 grass species discrimination has increasingly become relevant in understanding their response to environmental changes and to monitor their integrity in providing goods and services. While remotely-sensed data provide robust, cost-effective and repeatable monitoring tools for C3 and C4 grasses, this has been largely limited by the scarcity of sensors with better earth imaging characteristics. The recent launch of the advanced Sentinel 2 MultiSpectral Instrument (MSI) presents a new prospect for discriminating C3 and C4 grasses. The present study tested the potential of Sentinel 2, characterized by refined spatial resolution and more unique spectral bands, in discriminating between Festuca (C3) and Themeda (C4) grasses. To evaluate the performance of Sentinel 2 MSI, spectral bands, vegetation indices and spectral bands plus indices were used. Findings from Sentinel 2 were compared with those derived from the widely-used Worldview 2 commercial sensor and the Landsat 8 Operational Land Imager (OLI). Overall classification accuracies showed that Sentinel 2 spectral bands performed better (90.36%) than indices (85.54%) and combined variables (88.61%). The results were comparable to the Worldview 2 sensor, which produced slightly higher accuracies using spectral bands (95.69%), indices (86.02%) and combined variables (87.09%), and better than Landsat 8 OLI spectral bands (75.26%), indices (82.79%) and combined variables (86.02%). When classifying the two species, Sentinel 2 bands produced lower errors of commission and omission (between 4.76 and 14.63%) than Landsat 8 (between 18.18 and 30.61%), and comparable to Worldview 2 (between 1.96 and 7.14%). 
The classification accuracy from Sentinel 2 also did not differ significantly (z = 1.34) from Worldview 2 using standard bands; it was significantly different (z > 1.96) using indices and combined variables, whereas compared to Landsat 8, Sentinel 2 accuracies were significantly different (z > 1.96) using all variables. These results demonstrate that discrimination of key vegetation species could be improved by the use of the freely available and improved Sentinel 2 MSI data.

  4. Vicarious Calibration of EO-1 Hyperion

    NASA Technical Reports Server (NTRS)

    McCorkel, Joel; Thome, Kurt; Ong, Lawrence

    2012-01-01

    The Hyperion imaging spectrometer on the Earth Observing-1 satellite is the first high-spatial-resolution imaging spectrometer to routinely acquire science-grade data from orbit. Data gathered with this instrument need to be quantitative and accurate in order to derive meaningful information about ecosystem properties and processes. Also, comprehensive and long-term ecological studies require these data to be comparable over time, between coexisting sensors and between generations of follow-on sensors. One method to assess the radiometric calibration is the reflectance-based approach, a common technique used for several other earth science sensors covering similar spectral regions. This work presents results of radiometric calibration of Hyperion based on the reflectance-based approach of vicarious calibration implemented by the University of Arizona during 2001-2005. These results show repeatability at the 2% level and accuracy at the 3-5% level for spectral regions not affected by strong atmospheric absorption. Knowledge of the stability of the Hyperion calibration from moon observations allows an average absolute calibration based on the reflectance-based results to be determined and applied for the lifetime of Hyperion.

  5. Pushbroom Hyperspectral Imaging from an Unmanned Aircraft System (UAS) - Geometric Processing Workflow and Accuracy Assessment

    NASA Astrophysics Data System (ADS)

    Turner, D.; Lucieer, A.; McCabe, M.; Parkes, S.; Clarke, I.

    2017-08-01

    In this study, we assess two pushbroom hyperspectral sensors as carried by small (10-15 kg) multi-rotor Unmanned Aircraft Systems (UAS). We used a Headwall Photonics micro-Hyperspec pushbroom sensor with 324 spectral bands (4-5 nm FWHM) and a Headwall Photonics nano-Hyperspec sensor with 270 spectral bands (6 nm FWHM), both in the VNIR spectral range (400-1000 nm). A gimbal was used to stabilise the sensors against the aircraft flight dynamics, and for the micro-Hyperspec a tightly coupled dual-frequency Global Navigation Satellite System (GNSS) receiver, an Inertial Measurement Unit (IMU), and a Machine Vision Camera (MVC) were used for attitude and position determination. For the nano-Hyperspec, a navigation-grade GNSS system and IMU provided position and attitude data. This study presents the geometric results of one flight over a grass oval on which a dense Ground Control Point (GCP) network was deployed, with the aim of ascertaining the geometric accuracy achievable with the system. Using the PARGE software package (ReSe - Remote Sensing Applications), we ortho-rectify the pushbroom hyperspectral image strips and then quantify the accuracy of the ortho-rectification by using the GCPs as check points. The orientation (roll, pitch, and yaw) of the sensor is measured by the IMU. Alternatively, imagery from an MVC running at 15 Hz, together with accurate camera position data, can be processed with Structure from Motion (SfM) software to obtain an estimated camera orientation. In this study, we examine which of these data sources yields a flight strip with the highest geometric accuracy.

  6. High Density Schottky Barrier Infrared Charge-Coupled Device (IRCCD) Sensors For Short Wavelength Infrared (SWIR) Applications At Intermediate Temperature

    NASA Astrophysics Data System (ADS)

    Elabd, H.; Villani, T. S.; Tower, J. R.

    1982-11-01

    Monolithic 32 x 64 and 64 x 128 palladium silicide (Pd2Si) interline transfer IRCCDs sensitive in the 1-3.5 μm spectral band have been developed. This silicon imager exhibits a low response nonuniformity of typically 0.2-1.6% rms and has been operated in the temperature range between 40-140 K. Spectral response measurements of test Pd2Si p-type Si devices yield quantum efficiencies of 7.9% at 1.25 μm, 5.6% at 1.65 μm and 2.2% at 2.22 μm. Improvement in quantum efficiency is expected from optimizing the structural parameters of the Pd2Si detectors. The spectral response of the Pd2Si detectors fits a modified Fowler emission model. The measured photoelectric barrier height for the Pd2Si detector is ≈0.34 eV and the measured quantum efficiency coefficient, C1, is 19%/eV. The dark current level of Pd2Si Schottky barrier focal plane arrays (FPAs) is sufficiently low to enable operation at intermediate temperatures at TV frame rates. The typical dark current level measured at 120 K on the FPA is 2 nA/cm2. The Pd2Si Schottky barrier imaging technology has been developed for satellite sensing of earth resources. The operating temperature of the Pd2Si FPA is compatible with passive cooler performance. In addition, high density Pd2Si Schottky barrier FPAs are manufactured with high yield and therefore represent an economical approach to short wavelength IR imaging. A Pd2Si Schottky barrier image sensor for push-broom multispectral imaging in the 1.25, 1.65, and 2.22 μm bands is being studied. The sensor will have two line arrays (dual-band capability) of 512 detectors each, with 30 μm center-to-center detector spacing. The device will be suitable for chip-to-chip abutment, thus providing the capability to produce large, multiple-chip focal planes with contiguous, in-line sensors.
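The quoted barrier height (0.34 eV) and emission coefficient (19%/eV) can be plugged into a common form of the modified Fowler model to check them against the reported quantum efficiencies. The exact functional form used in the paper may differ; this is only a sketch:

```python
# A common form of the modified Fowler emission model for Schottky-barrier
# photoyield: QE(hv) ~= C1 * (hv - psi)^2 / hv for photon energy hv above the
# barrier psi; C1 = 19 %/eV and psi = 0.34 eV are the measured values quoted.
def fowler_qe(wavelength_um, c1_per_ev=0.19, psi_ev=0.34):
    hv = 1.2398 / wavelength_um       # photon energy in eV
    if hv <= psi_ev:
        return 0.0                    # below the barrier: no emission
    return c1_per_ev * (hv - psi_ev) ** 2 / hv

for wl, measured in [(1.25, 0.079), (1.65, 0.056), (2.22, 0.022)]:
    print(f"{wl:.2f} um: model {fowler_qe(wl):.3f}, measured {measured:.3f}")
```

The model lands close to the measured 7.9% at 1.25 μm and somewhat below the quoted values at the longer wavelengths, consistent with it being a first-order fit.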

  7. Mineral mapping and applications of imaging spectroscopy

    USGS Publications Warehouse

    Clark, R.N.; Boardman, J.; Mustard, J.; Kruse, F.; Ong, C.; Pieters, C.; Swayze, G.A.

    2006-01-01

    Spectroscopy is a tool that has been used for decades to identify, understand, and quantify solid, liquid, or gaseous materials, especially in the laboratory. In disciplines ranging from astronomy to chemistry, spectroscopic measurements are used to detect absorption and emission features due to specific chemical bonds, and detailed analyses are used to determine the abundance and physical state of the detected absorbing/emitting species. Spectroscopic measurements have a long history in the study of the Earth and planets. Up to the 1990s remote spectroscopic measurements of Earth and planets were dominated by multispectral imaging experiments that collect high-quality images in a few, usually broad, spectral bands or with point spectrometers that obtained good spectral resolution but at only a few spatial positions. However, a new generation of sensors is now available that combines imaging with spectroscopy to create the new discipline of imaging spectroscopy. Imaging spectrometers acquire data with enough spectral range, resolution, and sampling at every pixel in a raster image so that individual absorption features can be identified and spatially mapped (Goetz et al., 1985).

  8. Hyperspectral image compression using a wavelet-based method

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands, so each object present in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which requires transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years. Compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation of object identification performance. Moreover, most hyperspectral data compression techniques exploit similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explore the spectral cross-correlation between different bands and propose an adaptive band selection method to obtain the spectral bands that contain most of the information in the acquired hyperspectral data cube. The proposed method consists of three main steps. First, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the correlation matrix between different bands. A wavelet-based algorithm is then applied to each subspace. Finally, PCA is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.

  9. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

    This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager; increasing sensitivity in any one dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity beyond what conventional sensors provide. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level, while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by a factor of hundreds. 
The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information from a noisy environment. Accomplishing the same task through engineering usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate sound localization and selective attention. This research investigates and optimizes the sensing capacity and spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows localization of multiple speakers in both stationary and dynamic auditory scenes, and distinguishes mixed conversations from independent sources with a high audio recognition rate.

  10. Modelling of celestial backgrounds

    NASA Astrophysics Data System (ADS)

    Hickman, Duncan L.; Smith, Moira I.; Lim, Jae-Wan; Jeon, Yun-Ho

    2018-05-01

    For applications where a sensor's image includes the celestial background, stars and Solar System bodies compromise the ability of the sensor system to correctly classify a target. Such false targets are particularly significant for the detection of weak target signatures that have only a small relative angular motion. The detection of celestial features is well established in the visible spectral band. However, given the increasing sensitivity and low noise afforded by emergent infrared focal plane array technology, together with larger and more efficient optics, the signatures of celestial features can also impact performance at infrared wavelengths. A methodology has been developed which allows the rapid generation of celestial signatures in any required spectral band using star data from star catalogues and other open-source information. Within this paper, the radiometric calculations are presented to determine the irradiance values of stars and planets in any spectral band.
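The in-band irradiance of a star can be sketched by treating the star as a blackbody disc and integrating the Planck function over the band. Real star catalogues work from measured magnitudes and spectral types, so this is only an illustrative approximation:

```python
import math

def planck_radiance(wl_m, T):
    """Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return (2 * h * c**2 / wl_m**5) / (math.exp(h * c / (wl_m * k * T)) - 1)

def band_irradiance(T, R_star, distance, wl_lo, wl_hi, n=2000):
    """In-band irradiance (W m^-2) of a star modelled as a blackbody disc."""
    solid_angle = math.pi * (R_star / distance) ** 2   # small-angle approx.
    dl = (wl_hi - wl_lo) / n
    # midpoint-rule integration of the Planck function over the band
    total = sum(planck_radiance(wl_lo + (i + 0.5) * dl, T) for i in range(n)) * dl
    return solid_angle * total

# Sanity check with Sun-like values over 0.3-3.0 um: this band captures most
# of the solar constant (~1361 W m^-2; a 5772 K blackbody is approximate).
E = band_irradiance(5772, 6.957e8, 1.496e11, 0.3e-6, 3.0e-6)
print(f"{E:.0f} W/m^2")
```

Swapping in another star's effective temperature, radius and distance, and another band's limits, gives the corresponding in-band irradiance.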

  11. Compact characterization of liquid absorption and emission spectra using linear variable filters integrated with a CMOS imaging camera.

    PubMed

    Wan, Yuhang; Carlson, John A; Kesler, Benjamin A; Peng, Wang; Su, Patrick; Al-Mulla, Saoud A; Lim, Sung Jun; Smith, Andrew M; Dallesasse, John M; Cunningham, Brian T

    2016-07-08

    A compact analysis platform for detecting liquid absorption and emission spectra using a set of optical linear variable filters atop a CMOS image sensor is presented. The working spectral range of the analysis platform can be extended without a reduction in spectral resolution by utilizing multiple linear variable filters with different wavelength ranges on the same CMOS sensor. With optical setup reconfiguration, its capability to measure both absorption and fluorescence emission is demonstrated. Quantitative detection of fluorescence emission down to 0.28 nM for quantum dot dispersions and 32 ng/mL for near-infrared dyes has been demonstrated on a single platform over a wide spectral range, as well as an absorption-based water quality test, showing the versatility of the system across liquid solutions for different emission and absorption bands. Comparison with a commercially available portable spectrometer and an optical spectrum analyzer shows that our system has an improved signal-to-noise ratio and acceptable spectral resolution for discriminating emission spectra and characterizing the absorption of colored liquids generated by common biomolecular assays. This simple, compact, and versatile analysis platform demonstrates a path towards an integrated optical device that can be utilized for a wide variety of applications in point-of-use testing and point-of-care diagnostics.

  12. Parallel evolution of image processing tools for multispectral imagery

    NASA Astrophysics Data System (ADS)

    Harvey, Neal R.; Brumby, Steven P.; Perkins, Simon J.; Porter, Reid B.; Theiler, James P.; Young, Aaron C.; Szymanski, John J.; Bloch, Jeffrey J.

    2000-11-01

    We describe the implementation and performance of a parallel, hybrid evolutionary-algorithm-based system, which optimizes image processing tools for feature-finding tasks in multi-spectral imagery (MSI) data sets. Our system uses an integrated spatio-spectral approach and is capable of combining suitably-registered data from different sensors. We investigate the speed-up obtained by parallelization of the evolutionary process via multiple processors (a workstation cluster) and develop a model for prediction of run-times for different numbers of processors. We demonstrate our system on Landsat Thematic Mapper MSI covering the recent Cerro Grande fire at Los Alamos, NM, USA.

  13. Imaging using a supercontinuum laser to assess tumors in patients with breast carcinoma

    NASA Astrophysics Data System (ADS)

    Sordillo, Laura A.; Sordillo, Peter P.; Alfano, R. R.

    2016-03-01

    The supercontinuum laser light source has many advantages over other light sources, including broad spectral range. Transmission images of paired normal and malignant breast tissue samples from two patients were obtained using a Leukos supercontinuum (SC) laser light source with wavelengths in the second and third NIR optical windows and an IR-CCD InGaAs camera detector (Goodrich Sensors Inc. high-response camera SU320KTSW-1.7RT with spectral response between 900 nm and 1,700 nm). Optical attenuation measurements at the four NIR optical windows were obtained from the samples.

  14. Improvements in Virtual Sensors: Using Spatial Information to Estimate Remote Sensing Spectra

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.; Srivastava, Ashok N.; Stroeve, Julienne

    2005-01-01

    Various instruments are used to create images of the Earth and other objects in the universe in a diverse set of wavelength bands with the aim of understanding natural phenomena. Sometimes these instruments are built in a phased approach, with additional measurement capabilities added in later phases. In other cases, technology may mature to the point that the instrument offers new measurement capabilities that were not planned in the original design of the instrument. In still other cases, high resolution spectral measurements may be too costly to perform on a large sample and therefore lower resolution spectral instruments are used to take the majority of measurements. Many applied science questions that are relevant to the earth science remote sensing community require analysis of enormous amounts of data that were generated by instruments with disparate measurement capabilities. In past work [1], we addressed this problem using Virtual Sensors: a method that uses models trained on spectrally rich (high spectral resolution) data to "fill in" unmeasured spectral channels in spectrally poor (low spectral resolution) data. We demonstrated this method by using models trained on the high spectral resolution Terra MODIS instrument to estimate what the equivalent of the MODIS 1.6 micron channel would be for the NOAA AVHRR2 instrument. The scientific motivation for the simulation of the 1.6 micron channel is to improve the ability of the AVHRR2 sensor to detect clouds over snow and ice. This work contains preliminary experiments demonstrating that the use of spatial information can improve our ability to estimate these spectra.
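The Virtual Sensors idea, augmented with spatial context, can be sketched with a toy regression. The synthetic scene, the linear least-squares model, and the 3x3 neighbourhood-mean features below stand in for the actual MODIS/AVHRR2 data and the richer models used in the work:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in scene: 64x64 pixels with 4 "measured" channels; the target channel
# (the 1.6-um analogue) is a hypothetical linear mixture plus noise.
scene = rng.random((64, 64, 4))
target = scene @ np.array([0.4, 0.3, 0.2, 0.1]) + 0.01 * rng.standard_normal((64, 64))

def features(img, r, c):
    """Spectral values at the pixel plus the 3x3 spatial mean per channel."""
    patch = img[r - 1:r + 2, c - 1:c + 2]
    return np.concatenate([img[r, c], patch.mean(axis=(0, 1))])

coords = [(r, c) for r in range(1, 63) for c in range(1, 63)]
X = np.array([features(scene, r, c) for r, c in coords])
y = np.array([target[r, c] for r, c in coords])

# Fit on a subset (the "spectrally rich" samples), then estimate the missing
# channel everywhere else.
n_train = 2000
A = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A[:n_train], y[:n_train], rcond=None)
rmse = float(np.sqrt(np.mean((A @ w - y) ** 2)))
print(f"estimated-channel RMSE: {rmse:.4f}")
```

The spatial-mean features are the simplest possible use of neighbourhood information; the paper's point is that such context can tighten the spectral estimate.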

  15. An Overview of the Landsat Data Continuity Mission

    NASA Technical Reports Server (NTRS)

    Irons, James R.; Dwyer, John L.

    2010-01-01

    The advent of the Landsat Data Continuity Mission (LDCM), currently with a launch readiness date of December 2012, will see evolutionary changes in the Landsat data products available from the U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center. The USGS initiated a revolution in 2009 when EROS began distributing Landsat data products at no cost to requestors, in contrast to the past practice of charging the cost of fulfilling a request, that is, $600 per Landsat scene. To implement this drastic change, EROS terminated data processing options for requestors and began to produce all data products using a consistent processing recipe. EROS plans to continue this practice for the LDCM and will require new algorithms to process data from the LDCM sensors. All previous Landsat satellites flew multispectral scanners to collect image data of the global land surface. Additionally, Landsats 4, 5, and 7 flew sensors that acquired imagery in both reflective spectral bands and a single thermal band. In contrast, the LDCM will carry two pushbroom sensors: the Operational Land Imager (OLI) for reflective spectral bands and the Thermal InfraRed Sensor (TIRS) for two thermal bands. EROS is developing the ground data processing system that will calibrate and correct the data from the thousands of detectors employed by the pushbroom sensors and will also combine the data from the two sensors to create a single data product with registered data for all of the OLI and TIRS bands.

  16. Joint spatial-spectral hyperspectral image clustering using block-diagonal amplified affinity matrix

    NASA Astrophysics Data System (ADS)

    Fan, Lei; Messinger, David W.

    2018-03-01

    The large number of spectral channels in a hyperspectral image (HSI) produces a fine spectral resolution to differentiate between materials in a scene. However, difficult classes that have similar spectral signatures are often confused while merely exploiting information in the spectral domain. Therefore, in addition to spectral characteristics, the spatial relationships inherent in HSIs should also be considered for incorporation into classifiers. The growing availability of high spectral and spatial resolution of remote sensors provides rich information for image clustering. Besides the discriminating power in the rich spectrum, contextual information can be extracted from the spatial domain, such as the size and the shape of the structure to which one pixel belongs. In recent years, spectral clustering has gained popularity compared to other clustering methods due to the difficulty of accurate statistical modeling of data in high dimensional space. The joint spatial-spectral information could be effectively incorporated into the proximity graph for spectral clustering approach, which provides a better data representation by discovering the inherent lower dimensionality from the input space. We embedded both spectral and spatial information into our proposed local density adaptive affinity matrix, which is able to handle multiscale data by automatically selecting the scale of analysis for every pixel according to its neighborhood of the correlated pixels. Furthermore, we explored the "conductivity method," which aims at amplifying the block diagonal structure of the affinity matrix to further improve the performance of spectral clustering on HSI datasets.
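A minimal sketch of a locally scaled, joint spatial-spectral affinity matrix followed by a two-way spectral clustering step is given below. The synthetic pixels, the simple k-th-neighbour scale, and the sign split of the second eigenvector are stand-ins for the paper's density-adaptive scale selection and conductivity amplification:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in pixels: spatial coordinates and spectra for two small regions that
# are separated both spatially and spectrally (hypothetical data).
n = 40
spatial = np.vstack([rng.random((n, 2)), rng.random((n, 2)) + 2.0])
spectra = np.vstack([rng.normal(0.2, 0.02, (n, 10)),
                     rng.normal(0.8, 0.02, (n, 10))])

# Joint spatial-spectral distance; beta weights the spatial term.
beta = 0.5
d = (np.linalg.norm(spectra[:, None] - spectra[None], axis=-1)
     + beta * np.linalg.norm(spatial[:, None] - spatial[None], axis=-1))

# Local density-adaptive scale: sigma_i is the distance to the k-th
# neighbour, so each pixel's affinity adapts to its own neighbourhood.
k = 7
sigma = np.sort(d, axis=1)[:, k]
A = np.exp(-d**2 / (sigma[:, None] * sigma[None]))
np.fill_diagonal(A, 0.0)

# Two-way spectral clustering via the normalized Laplacian's second
# eigenvector, thresholded at its mean.
deg = A.sum(axis=1)
L = np.eye(2 * n) - A / np.sqrt(deg)[:, None] / np.sqrt(deg)[None]
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]
labels = (fiedler > fiedler.mean()).astype(int)
print("cluster sizes:", np.bincount(labels))
```

On this well-separated example the split recovers the two regions exactly; the value of the joint distance is that spectrally similar but spatially distant pixels are kept apart.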

  17. Hyperspectral fundus imager

    NASA Astrophysics Data System (ADS)

    Truitt, Paul W.; Soliz, Peter; Meigs, Andrew D.; Otten, Leonard John, III

    2000-11-01

    A Fourier transform hyperspectral imager was integrated onto a standard clinical fundus camera, a Zeiss FF3, for the purpose of spectrally characterizing normal anatomical and pathological features in the human ocular fundus. To develop this instrument, an existing FDA-approved retinal camera was selected to avoid the difficulties of obtaining new FDA approval. Because of this, several unusual design constraints were imposed on the optical configuration. Techniques were developed to calibrate the sensor and to define where the hyperspectral pushbroom stripe was located on the retina, including the manufacture of an artificial eye with calibration features suitable for a spectral imager. In this implementation, the Fourier transform hyperspectral imager can collect over one hundred 86 cm-1 spectrally resolved bands with 12 μm/pixel spatial resolution within the 450 nm to 1050 nm band. This equates to 2 nm to 8 nm spectral resolution, depending on the wavelength. For retinal observations the band of interest tends to lie between 475 nm and 790 nm. The instrument has been in use over the last year, successfully collecting hyperspectral images of the optic disc, retinal vessels, choroidal vessels, retinal background, and macula, as well as of diabetic macular edema and lesions of age-related macular degeneration.

  18. Spectral signature variations, atmospheric scintillations and sensor parameters

    NASA Astrophysics Data System (ADS)

    Berger, Henry; Neander, John

    2002-11-01

    The spectral signature of a material is the curve of power density vs. wavelength (λ) obtained from measurements of reflected light. It is used, among other things, for the identification of targets in remotely acquired images. Sometimes, however, unpredictable distortions may prevent this. In only a few cases have such distortions been explained. We propose some reasonable arguments that in a significant number of circumstances, atmospheric turbulence may contribute to such spectral signature distortion. We propose, based on this model, what appears to be one method that could combat such distortion.

  19. Color sensitivity of the multi-exposure HDR imaging process

    NASA Astrophysics Data System (ADS)

    Lenseigne, Boris; Jacobs, Valéry Ann; Withouck, Martijn; Hanselaer, Peter; Jonker, Pieter P.

    2013-04-01

    Multi-exposure high dynamic range (HDR) imaging builds HDR radiance maps by stitching together different views of the same scene taken with varying exposures. Practically, this process involves converting raw sensor data into low dynamic range (LDR) images, estimating the camera response curves, and using them to recover the irradiance for every pixel. During this export, white balance settings are applied and the images are stitched, both of which influence the color balance in the final image. In this paper, we use a calibrated quasi-monochromatic light source, an integrating sphere, and a spectrograph to evaluate and compare the average spectral response of the image sensor. We finally draw some conclusions about the color consistency of HDR imaging and the additional steps necessary to use multi-exposure HDR imaging as a tool to measure physical quantities such as radiance and luminance.
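The irradiance-recovery step mentioned above can be sketched with the standard weighted multi-exposure merge (Debevec and Malik style), here assuming a known linear camera response; in practice the response curves are estimated, as the paper does:

```python
import numpy as np

def merge_hdr(frames, times):
    """Merge LDR frames (8-bit values) with exposure times into a radiance map.

    Assumes a linear camera response; log-irradiance per pixel is the
    hat-weighted mean of g(Z_k) - ln(t_k) over exposures k.
    """
    frames = np.asarray(frames, dtype=float)
    w = 1.0 - np.abs(frames / 255.0 - 0.5) * 2      # hat weights favour mid-tones
    g = np.log(np.clip(frames, 1, 255) / 255.0)     # assumed linear response
    ln_t = np.log(times)[:, None, None]
    ln_E = (w * (g - ln_t)).sum(0) / np.clip(w.sum(0), 1e-6, None)
    return np.exp(ln_E)

# Synthetic scene: true irradiance E imaged at three exposures with clipping
# and quantization, then recovered by the merge.
E_true = np.linspace(0.02, 0.9, 16).reshape(4, 4)
times = np.array([0.25, 1.0, 4.0])
frames = np.clip((E_true[None] * times[:, None, None]) * 255, 0, 255).round()
E_rec = merge_hdr(frames, times)
max_rel_err = float(np.max(np.abs(E_rec - E_true) / E_true))
print(f"max relative error: {max_rel_err:.3f}")
```

The hat weighting drives clipped and near-clipped pixels to zero weight, which is why the merge recovers values that no single exposure captures cleanly.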

  20. JPSS-1 VIIRS Pre-Launch Radiometric Performance

    NASA Technical Reports Server (NTRS)

    Oudrari, Hassan; Mcintire, Jeffrey; Xiong, Xiaoxiong; Butler, James; Ji, Qiang; Schwarting, Tom; Zeng, Jinan

    2015-01-01

    The first Joint Polar Satellite System (JPSS-1 or J1) mission is scheduled to launch in January 2017, and will be very similar to the Suomi-National Polar-orbiting Partnership (SNPP) mission. The Visible Infrared Imaging Radiometer Suite (VIIRS) on board the J1 spacecraft completed its sensor-level performance testing in December 2014. The VIIRS instrument is expected to provide valuable information about the Earth environment and properties on a daily basis, using a wide-swath (3,040 km) cross-track scanning radiometer. The design covers the wavelength spectrum from reflective to long-wave infrared through 22 spectral bands, from 0.412 μm to 12.01 μm, and has spatial resolutions of 370 m and 740 m at nadir for imaging and moderate bands, respectively. This paper will provide an overview of pre-launch J1 VIIRS performance testing and methodologies, describing the at-launch baseline radiometric performance as well as the metrics needed to calibrate the instrument once on orbit. Key sensor performance metrics include the sensor signal-to-noise ratios (SNRs), dynamic range, reflective and emissive band calibration performance, polarization sensitivity, band spectral performance, response-versus-scan (RVS), near-field response, and stray light rejection. A set of performance metrics generated during the pre-launch testing program will be compared to the sensor requirements and to SNPP VIIRS pre-launch performance.

  1. Fast and compact internal scanning CMOS-based hyperspectral camera: the Snapscan

    NASA Astrophysics Data System (ADS)

    Pichette, Julien; Charle, Wouter; Lambrechts, Andy

    2017-02-01

    Imec has developed a process for the monolithic integration of optical filters on top of CMOS image sensors, leading to compact, cost-efficient and faster hyperspectral cameras. Linescan cameras are typically used in remote sensing or for conveyor belt applications, but translation of the target is not always possible for large objects or in many medical applications. Therefore, we introduce a novel camera, the Snapscan (patent pending), exploiting internal movement of the linescan sensor to enable fast and convenient acquisition of high-resolution hyperspectral cubes (up to 2048 x 3652 x 150 over the 475-925 nm spectral range). The Snapscan combines the spectral and spatial resolutions of a linescan system with the convenience of a snapshot camera.

  2. Optimum thermal infrared bands for mapping general rock type and temperature from space

    NASA Technical Reports Server (NTRS)

    Holmes, Q. A.; Nueesch, D. R.; Vincent, R. K.

    1980-01-01

    A study was carried out to determine quantitatively the number and location of spectral bands required to perform general rock type discrimination from spaceborne imaging sensors using only thermal infrared measurements. Beginning with laboratory spectra collected under idealized conditions from relatively well-characterized homogeneous samples, a radiative transfer model was used to transform ground exitance values into the corresponding spectral radiance at the top of the atmosphere. Taking sensor noise into account, analysis of these data revealed that three 1 micron wide spectral bands would permit independent estimations of rock type and sample temperature from a satellite infrared multispectral scanner. This study, which ignores the mixing of terrain elements within the instantaneous field of view of a satellite scanner, indicates that the location of three spectral bands at 8.1-9.1, 9.5-10.5, and 11.0-12.0 microns, and the employment of appropriate preprocessing to minimize atmospheric effects makes it possible to predict general rock type and temperature for a variety of atmospheric states and temperatures.

  3. Optimum thermal infrared bands for mapping general rock type and temperature from space

    NASA Technical Reports Server (NTRS)

    Holmes, Q. A.; Nuesch, D. R.

    1978-01-01

    A study was carried out to determine quantitatively the number and locations of spectral bands required to perform general rock-type discrimination from spaceborne imaging sensors using only thermal infrared measurements. Beginning with laboratory spectra collected under idealized conditions from relatively well characterized, homogeneous samples, a radiative transfer model was employed to transform ground exitance values into the corresponding spectral radiance at the top of the atmosphere. Taking sensor noise into account, analysis of these data revealed that three 1 micrometer wide spectral bands would permit independent estimates of rock type and sample temperature from a satellite infrared multispectral scanner. This study indicates that locating three spectral bands at 8.1-9.1 micrometers, 9.5-10.5 micrometers and 11.0-12.0 micrometers, and employing appropriate preprocessing to minimize atmospheric effects, makes it possible to predict general rock type and temperature for a variety of atmospheric states and temperatures.

  4. Three-Dimensional Reconstruction from Single Image Base on Combination of CNN and Multi-Spectral Photometric Stereo.

    PubMed

    Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu

    2018-03-02

    Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.
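    The per-pixel core of multi-spectral photometric stereo can be sketched as a small linear solve: with three spectral channels, each dominated by one known light direction, the RGB intensities satisfy i = L n, so the (unnormalized) normal is L⁻¹ i. This is a minimal sketch under that assumption; the light directions below are hypothetical and the CNN initialization and iterative refinement the paper describes are omitted.

```python
# Minimal sketch of the per-pixel core of multi-spectral photometric stereo:
# with three channels and known per-channel light directions L (rows), the
# intensities i satisfy i = L @ n, so the normal is recovered as n ~ L^-1 @ i.
# The light directions here are illustrative, not calibrated values.

def solve3(L, i):
    """Solve the 3x3 system L @ x = i by Cramer's rule."""
    (a, b, c), (d, e, f), (g, h, k) = L
    det = a*(e*k - f*h) - b*(d*k - f*g) + c*(d*h - e*g)
    x = (i[0]*(e*k - f*h) - b*(i[1]*k - f*i[2]) + c*(i[1]*h - e*i[2])) / det
    y = (a*(i[1]*k - f*i[2]) - i[0]*(d*k - f*g) + c*(d*i[2] - i[1]*g)) / det
    z = (a*(e*i[2] - i[1]*h) - b*(d*i[2] - i[1]*g) + i[0]*(d*h - e*g)) / det
    return (x, y, z)

def normal_from_rgb(L, rgb):
    """Recover a unit surface normal from one RGB pixel."""
    n = solve3(L, rgb)
    mag = (n[0]**2 + n[1]**2 + n[2]**2) ** 0.5
    return tuple(v / mag for v in n)

# Three non-coplanar (hypothetical) light directions, one per channel.
L = [(1.0, 0.0, 1.0), (0.0, 1.0, 1.0), (0.0, 0.0, 1.0)]
# Intensities rendered from a known normal, as a sanity check.
true_n = (0.0, 0.0, 1.0)
rgb = [sum(L[r][k] * true_n[k] for k in range(3)) for r in range(3)]
print(normal_from_rgb(L, rgb))  # recovers (0.0, 0.0, 1.0)
```

    In practice the per-channel intensities also tangle in albedo and camera response, which is exactly why the paper needs a CNN-predicted initial normal before optimization.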

  5. Three-Dimensional Reconstruction from Single Image Base on Combination of CNN and Multi-Spectral Photometric Stereo

    PubMed Central

    Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu

    2018-01-01

    Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods. PMID:29498703

  6. Optical flows method for lightweight agile remote sensor design and instrumentation

    NASA Astrophysics Data System (ADS)

    Wang, Chong; Xing, Fei; Wang, Hongjian; You, Zheng

    2013-08-01

    Lightweight agile remote sensors have become one of the most important payload types and are widely used in space reconnaissance and resource survey. These imaging sensors are designed to obtain imagery of high spatial, temporal and spectral resolution. Key techniques in their instrumentation include flexible maneuvering, advanced imaging control algorithms and integrated measurement techniques, which are closely interrelated and can even act as bottlenecks for one another; these mutually restrictive problems must therefore be solved and optimized together. Optical flow is the critical model through which information transfer, as well as radiant energy flow, is represented in dynamic imaging. For agile sensors, especially those with a wide field of view, imaging optical flows may distort and deviate severely during large-angle attitude-maneuver imaging. These phenomena are mainly attributed to the geometric characteristics of the three-dimensional Earth surface and to coupling effects arising from the complicated relative motion between sensor and scene. Under these circumstances the velocity field is distributed nonlinearly, and imagery may be badly smeared, or its geometric structure altered, if image-velocity matching errors are not eliminated. In this paper, a precise imaging optical flow model is established for agile remote sensors, in which the evolution of the optical flow is factorized into two components: one due to translational movement and one due to image shape change. On this basis, agile remote sensor instrumentation is investigated. The main techniques concerned with optical flow modeling include integrated design with lightweight star sensors and micro inertial measurement units, the corresponding data fusion, focal plane layout and control, and post-processing of agile remote sensor imagery. 
Experiments show that the optical flow analysis method effectively removes the limitations on the performance indexes and has been successfully applied to integrated system design. Finally, a principle prototype of an agile remote sensor designed with this method is discussed.

  7. Ultraviolet Imaging with Low Cost Smartphone Sensors: Development and Application of a Raspberry Pi-Based UV Camera.

    PubMed

    Wilkes, Thomas C; McGonigle, Andrew J S; Pering, Tom D; Taggart, Angus J; White, Benjamin S; Bryant, Robert G; Willmott, Jon R

    2016-10-06

    Here, we report, for what we believe to be the first time, on the modification of a low cost sensor, designed for the smartphone camera market, to develop an ultraviolet (UV) camera system. This was achieved via adaptation of Raspberry Pi cameras, which are based on back-illuminated complementary metal-oxide semiconductor (CMOS) sensors, and we demonstrated the utility of these devices for applications at wavelengths as low as 310 nm, by remotely sensing power station smokestack emissions in this spectral region. Given the very low cost of these units, ≈ USD 25, they are suitable for widespread proliferation in a variety of UV imaging applications, e.g., in atmospheric science, volcanology, forensics and surface smoothness measurements.

  8. Real-time DNA Amplification and Detection System Based on a CMOS Image Sensor.

    PubMed

    Wang, Tiantian; Devadhasan, Jasmine Pramila; Lee, Do Young; Kim, Sanghyo

    2016-01-01

    In the present study, we developed a polypropylene well-integrated complementary metal oxide semiconductor (CMOS) platform to perform the loop-mediated isothermal amplification (LAMP) technique for real-time DNA amplification and detection simultaneously. An amplification-coupled detection system directly measures the photon number changes based on the generation of magnesium pyrophosphate and color changes. The photon number decreases during the amplification process. The CMOS image sensor observes the photons and converts them into digital units with the aid of an analog-to-digital converter (ADC). In addition, UV-spectral studies, optical color intensity detection, pH analysis, and electrophoresis detection were carried out to prove the efficiency of the CMOS sensor-based LAMP system. Moreover, Clostridium perfringens was utilized as a proof-of-concept target for the new system. We anticipate that this CMOS image sensor-based LAMP method will enable the creation of cost-effective, label-free, optical, real-time and portable molecular diagnostic devices.

  9. SSUSI-Lite: a far-ultraviolet hyper-spectral imager for space weather remote sensing

    NASA Astrophysics Data System (ADS)

    Ogorzalek, Bernard; Osterman, Steven; Carlsson, Uno; Grey, Matthew; Hicks, John; Hourani, Ramsey; Kerem, Samuel; Marcotte, Kathryn; Parker, Charles; Paxton, Larry J.

    2015-09-01

    SSUSI-Lite is a far-ultraviolet (115-180 nm) hyperspectral imager for monitoring space weather. The SSUSI and GUVI sensors, its predecessors, have demonstrated their value as space weather monitors. SSUSI-Lite is a refresh of the Special Sensor Ultraviolet Spectrographic Imager (SSUSI) design that has flown on the Defense Meteorological Satellite Program (DMSP) spacecraft F16 through F19. The refresh updates the 25-year-old design and ensures that the next generation of SSUSI/GUVI sensors can be accommodated on any number of potential platforms. SSUSI-Lite maintains the same optical layout as SSUSI, includes updates to key functional elements, and reduces the sensor volume, mass, and power requirements. SSUSI-Lite contains an improved scanner design that results in precise mirror pointing and allows for variable scan profiles. The detector electronics have been redesigned to employ all-digital pulse processing. The largest decrease in volume, mass, and power has been obtained by consolidating all control and power electronics into one data processing unit.

  10. a Preliminary Investigation on Comparison and Transformation of SENTINEL-2 MSI and Landsat 8 Oli

    NASA Astrophysics Data System (ADS)

    Chen, F.; Lou, S.; Fan, Q.; Li, J.; Wang, C.; Claverie, M.

    2018-05-01

    Timely and accurate earth observation with a short revisit interval is often necessary, especially for emergency response. Currently, several new generation sensors with similar channel characteristics operate onboard different satellite platforms, including Sentinel-2 and Landsat 8. Joint use of the observations by different sensors offers an opportunity to meet emergency requirements. For example, through the combination of Landsat and Sentinel-2 data, the land can be observed every 2-3 days at medium spatial resolution. However, differences are expected in the radiometric values (e.g., channel reflectance) of corresponding channels between two sensors. The spectral response function (SRF) is an important aspect of sensor settings; accordingly, between-sensor differences due to SRF variation need to be quantified and compensated. The comparison of SRFs shows differences (more or less) in channel settings between the Sentinel-2 Multi-Spectral Instrument (MSI) and the Landsat 8 Operational Land Imager (OLI). The effect of the difference in SRF on corresponding values between MSI and OLI was investigated, mainly in terms of channel reflectance and several derived spectral indices. Spectra samples from the ASTER Spectral Library Version 2.0 and Hyperion data archives were used to simulate MSI and OLI channel reflectances. Preliminary results show that MSI and OLI are well comparable in several channels with small relative discrepancy (< 5 %), including the Coastal Aerosol channel, a NIR (855-875 nm) channel, the SWIR channels, and the Cirrus channel. Meanwhile, for the channels covering Blue, Green, Red, and NIR (785-900 nm), significant between-sensor differences are present. Compared with the difference in reflectance of each individual channel, the difference in derived spectral indices is more significant. 
In addition, the effectiveness of a linear transformation model is not ensured when the target belongs to a different spectra collection; if an improper transformation model is selected, the between-sensor discrepancy may even increase considerably. In conclusion, improving between-sensor consistency through linear transformations based on models generated from other spectra collections remains a challenge.
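    The channel reflectance simulation step described above amounts to a spectral-response-weighted average of the library spectrum over wavelength. A minimal sketch follows; the triangular SRF and the spectrum are illustrative placeholders, not the actual MSI or OLI responses.

```python
# Sketch of simulating a channel (band) reflectance from a library spectrum:
# the band value is the SRF-weighted mean of surface reflectance over
# wavelength. The triangular SRF below is illustrative only.

def band_reflectance(wavelengths, reflectance, srf):
    """SRF-weighted mean reflectance: sum(srf*rho) / sum(srf)."""
    num = sum(s * r for s, r in zip(srf, reflectance))
    den = sum(srf)
    return num / den

# 1 nm sampling across a hypothetical NIR band (855-875 nm).
wl = list(range(855, 876))
rho = [0.40 + 0.001 * (w - 855) for w in wl]    # slowly rising NIR spectrum
srf = [1.0 - abs(w - 865) / 10.0 for w in wl]   # triangular response
print(round(band_reflectance(wl, rho, srf), 4))  # ~0.41, the band-centre value
```

    Running the same spectrum through two different SRFs and differencing the results is exactly how the between-sensor discrepancy per channel can be quantified.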

  11. Automatic panoramic thermal integrated sensor

    NASA Astrophysics Data System (ADS)

    Gutin, Mikhail A.; Tsui, Eddy K.; Gutin, Olga N.

    2005-05-01

    Historically, the US Army has recognized the advantages of panoramic imagers with high image resolution: increased area coverage with fewer cameras, instantaneous full-horizon detection, location and tracking of multiple targets simultaneously, extended range, and others. The novel ViperViewTM high-resolution panoramic thermal imager is the heart of the Automatic Panoramic Thermal Integrated Sensor (APTIS), being jointly developed by Applied Science Innovative, Inc. (ASI) and the Armament Research, Development and Engineering Center (ARDEC) in support of the Future Combat Systems (FCS) and the Intelligent Munitions Systems (IMS). The APTIS is anticipated to operate as an intelligent node in a wireless network of multifunctional nodes that work together to improve situational awareness (SA) in many defense and offensive operations, as well as serve as a sensor node in tactical Intelligence, Surveillance and Reconnaissance (ISR). The ViperView is an aberration-corrected omnidirectional imager with small optics designed to match the resolution of a 640x480-pixel IR camera, with improved image quality for longer range target detection, classification, and tracking. The same approach is applicable to panoramic cameras working in the visible spectral range. Other components of the APTIS sensor suite include ancillary sensors, advanced power management, and wakeup capability. This paper describes the development status of the APTIS system.

  12. Sharpening advanced land imager multispectral data using a sensor model

    USGS Publications Warehouse

    Lemeshewsky, G.P.; ,

    2005-01-01

    The Advanced Land Imager (ALI) instrument on NASA's Earth Observing-1 (EO-1) satellite provides nine spectral bands at 30 m ground sample distance (GSD) and a 10 m GSD panchromatic band. This report describes an image sharpening technique in which the higher spatial resolution information of the panchromatic band is used to increase the spatial resolution of ALI multispectral (MS) data. To preserve the spectral characteristics, the technique combines reported deconvolution deblurring methods for the MS data with highpass-filter-based fusion methods for the Pan data. The deblurring process uses the point spread function (PSF) model of the ALI sensor, including calculation of the PSF from pre-launch calibration data. Performance was evaluated using simulated ALI MS data generated by degrading the spatial resolution of high-resolution IKONOS satellite MS data; a quantitative measure of performance was the error between the sharpened MS data and the high-resolution reference. This report also compares performance with that of a previously reported method that includes PSF information. Preliminary results indicate improved sharpening with the method reported here.
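    The highpass-filter fusion idea can be sketched in one dimension: the high-frequency detail of the panchromatic signal (pan minus its local mean) is added to the upsampled multispectral signal. This is a simplified sketch; the actual ALI technique also deblurs the MS data with the sensor PSF, which is omitted here.

```python
# Minimal 1-D sketch of highpass-filter (HPF) pan-sharpening fusion:
# sharpened = upsampled MS + (pan - lowpass(pan)). The PSF deconvolution
# step of the reported method is intentionally omitted.

def box_lowpass(x, radius=1):
    """Moving-average lowpass with edge clamping."""
    n = len(x)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def hpf_fuse(ms_up, pan):
    """Add the pan highpass detail to the upsampled MS band."""
    pan_low = box_lowpass(pan)
    return [m + (p - pl) for m, p, pl in zip(ms_up, pan, pan_low)]

pan = [10.0, 10.0, 20.0, 20.0]   # a sharp edge seen by the pan band
ms_up = [5.0, 6.0, 7.0, 8.0]     # blurry upsampled MS band
print(hpf_fuse(ms_up, pan))      # edge detail injected around index 1-2
```

    Because only the zero-mean detail is added, flat regions of the MS band keep their original values, which is how this family of methods preserves spectral characteristics.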

  13. A multimodal image sensor system for identifying water stress in grapevines

    NASA Astrophysics Data System (ADS)

    Zhao, Yong; Zhang, Qin; Li, Minzan; Shao, Yongni; Zhou, Jianfeng; Sun, Hong

    2012-11-01

    Water is the most limiting resource for crop growth, and water stress is one of the most common limitations on fruit growth. In grapevines, as in other fruit crops, fruit quality benefits from a certain level of water deficit, which helps balance vegetative and reproductive growth and the flow of carbohydrates to reproductive structures. In this paper, a multi-modal sensor system was designed to measure the reflectance signature of grape plant surfaces and identify different water stress levels. The system was equipped with one 3CCD camera (three channels in R, G, and IR) and can capture and analyze the grape canopy from its reflectance features to identify the different water stress levels. The core technology of this multi-modal sensor system could further be used in a decision support system that combines multi-modal sensory data to improve plant stress detection and identify the causes of stress. The images were acquired by the multi-modal sensor, which outputs images in near-infrared, green and red spectral bands. Based on analysis of the acquired images, color features based on color space and reflectance features based on image processing methods were calculated. The results showed that these parameters have potential as water stress indicators. More experiments and analysis are needed to validate the conclusion.
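    Reflectance indicators of the kind the abstract describes are simple per-pixel ratios of the available channels. A minimal sketch using NDVI follows; the reflectance values are hypothetical, chosen only to show that canopy stress lowers the index.

```python
# Sketch of per-pixel vegetation indicators from the three channels
# (R, G, NIR) a 3CCD camera provides. The reflectance values below are
# hypothetical examples, not measured grapevine data.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red)

def green_ndvi(nir, green):
    """Green NDVI, an alternative indicator using the green channel."""
    return (nir - green) / (nir + green)

# Hypothetical reflectances for a well-watered vs. a stressed canopy.
well_watered = ndvi(nir=0.50, red=0.05)   # 0.45/0.55, about 0.82
stressed = ndvi(nir=0.35, red=0.10)       # 0.25/0.45, about 0.56
print(well_watered > stressed)            # True: stress lowers NDVI
```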

  14. An Overview of Lunar Calibration and Characterization for the EOS Terra and Aqua MODIS

    NASA Technical Reports Server (NTRS)

    Xiong, X.; Salomonson, V. V.; Sun, J.; Chiang, K.; Xiong, S.; Humphries, S.; Barnes, W.; Guenther, B.

    2004-01-01

    The Moon can be used as a stable source for on-orbit radiometric and spatial stability monitoring of Earth-observing sensors in the VIS and NIR spectral regions. It can also serve as a calibration transfer vehicle among multiple sensors. Nearly identical copies of the Moderate Resolution Imaging Spectroradiometer (MODIS) have been operating on board NASA's Earth Observing System (EOS) Terra and Aqua satellites since their launches in December 1999 and May 2002, respectively. Terra and Aqua MODIS each make observations in 36 spectral bands covering the spectral range from 0.41 to 14.5 microns and are calibrated on-orbit by a set of on-board calibrators (OBCs) including: 1) a solar diffuser (SD), 2) a solar diffuser stability monitor (SDSM), 3) a blackbody (BB), and 4) a spectro-radiometric calibration assembly (SRCA). In addition to fully utilizing the OBCs, the Moon has been used extensively by both Terra and Aqua MODIS to support their on-orbit calibration and characterization. This paper provides an overview of applications of lunar calibration and characterization from the MODIS perspective, including monitoring radiometric calibration stability for the reflective solar bands (RSBs), tracking changes of the sensors' response versus scan angle (RVS), examining the sensors' spatial performance, and characterizing optical leaks and electronic crosstalk among different spectral bands and detectors. On-orbit calibration consistency between the two MODIS instruments is also addressed. Based on the existing on-orbit time series of the Terra and Aqua MODIS lunar observations, the radiometric difference between the two sensors is less than +/-1% for the RSBs. This method provides a powerful means of performing calibration comparisons among Earth-observing sensors and assures consistent data and science products for long-term studies of climate and environmental changes.

  15. Global Learning Spectral Archive- A new Way to deal with Unknown Urban Spectra -

    NASA Astrophysics Data System (ADS)

    Jilge, M.; Heiden, U.; Habermeyer, M.; Jürgens, C.

    2015-12-01

    Rapid urbanization and the need to identify urban materials have challenged urban planners and the remote sensing community for years. Urban planners cannot keep information on urban materials up to date because fieldwork is time-intensive. Hyperspectral remote sensing can ease this problem by interpreting spectral signals to provide information on the materials present. However, the complexity of urban areas and the occurrence of diverse urban materials vary with regional and cultural factors as well as city size, which makes identification of surface materials a challenging analysis task. The various surface material identification approaches commonly use spectral libraries containing pure material spectra derived from the field, the laboratory or the hyperspectral image itself. One requirement for successful image analysis is that all spectrally distinct surface materials are represented in the library. A universal library, applicable to every urban area worldwide and accounting for all spectral variability, does not exist and is unlikely ever to. In this study, the problem of unknown surface material spectra and the need for an urban site-specific spectral library are addressed through the development of a learning spectral archive tool. Starting with an incomplete library of labelled image spectra from several German cities, surface materials of pure image pixels are identified in a hyperspectral image using a similarity measure (e.g. SID-SAM). Additionally, unknown image spectra of urban objects are identified with an object- and spectral-based rule set. The detected unknown surface material spectra are entered, with additional metadata such as regional occurrence, into the existing spectral library and are thus reusable in further studies. 
Our approach is suitable for pure surface material detection in urban hyperspectral images and is globally applicable because it takes incompleteness into account. Its generic design enables the use of different hyperspectral sensors.

  16. The CAOS camera platform: ushering in a paradigm change in extreme dynamic range imager design

    NASA Astrophysics Data System (ADS)

    Riza, Nabeel A.

    2017-02-01

    Multi-pixel imaging devices such as CCD, CMOS and Focal Plane Array (FPA) photo-sensors dominate the imaging world. These Photo-Detector Array (PDA) devices certainly have their merits including increasingly high pixel counts and shrinking pixel sizes, nevertheless, they are also being hampered by limitations in instantaneous dynamic range, inter-pixel crosstalk, quantum full well capacity, signal-to-noise ratio, sensitivity, spectral flexibility, and in some cases, imager response time. Recently invented is the Coded Access Optical Sensor (CAOS) Camera platform that works in unison with current Photo-Detector Array (PDA) technology to counter fundamental limitations of PDA-based imagers while providing high enough imaging spatial resolution and pixel counts. Using for example the Texas Instruments (TI) Digital Micromirror Device (DMD) to engineer the CAOS camera platform, ushered in is a paradigm change in advanced imager design, particularly for extreme dynamic range applications.
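    The coded-access idea behind the CAOS platform can be illustrated with a small sketch: each selected pixel is time-modulated with its own orthogonal code on the DMD, a single point photodetector records the summed signal, and each pixel's irradiance is recovered by correlating the detector sequence with that pixel's code. The Hadamard codes and irradiance values below are illustrative assumptions, not the platform's actual code set.

```python
# Sketch of coded-access optical sensing: orthogonal time codes multiplex
# several pixels onto one point detector; correlation decoding separates
# them. Codes and irradiances are illustrative placeholders.

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

pixels = [5.0, 1.0, 0.25, 9.0]   # irradiances of 4 selected pixels
H = hadamard(4)                   # one code (row) per pixel

# Detector output over 4 time slots: sum of code-modulated pixel signals.
detector = [sum(H[p][t] * pixels[p] for p in range(4)) for t in range(4)]

# Decode: correlate the detector sequence with each pixel's code.
decoded = [sum(H[p][t] * detector[t] for t in range(4)) / 4
           for p in range(4)]
print(decoded)  # recovers [5.0, 1.0, 0.25, 9.0]
```

    Because the strong and weak pixels (here 9.0 vs. 0.25) share one detector rather than separate wells, the dynamic range is set by the photodetector chain rather than by a per-pixel full-well capacity, which is the point of the platform.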

  17. Resolution Study of a Hyperspectral Sensor using Computed Tomography in the Presence of Noise

    DTIC Science & Technology

    2012-06-14

    diffraction efficiency is dependent on wavelength. Compared to techniques developed by later work, simple algebraic reconstruction techniques were used...spectral dimension, using computed tomography (CT) techniques with only a finite number of diverse images. CTHIS require a reconstruction algorithm in...many frames are needed to reconstruct the spectral cube of a simple object using a theoretical lower bound. In this research a new algorithm is derived

  18. Long baseline planar superconducting gradiometer for biomagnetic imaging

    NASA Astrophysics Data System (ADS)

    Granata, C.; Vettoliere, A.; Nappi, C.; Lisitskiy, M.; Russo, M.

    2009-07-01

    A niobium-based dc superconducting quantum interference device (SQUID) planar gradiometer with a long baseline (50 mm) for biomagnetic applications has been developed. The pickup antenna consists of two integrated rectangular coils connected in series and magnetically coupled to a dc-SQUID in a double parallel washer configuration by two series multiturn input coils. Due to a high intrinsic responsivity, the sensors have shown at T = 4.2 K a white magnetic flux noise spectral density as low as 3 μΦ0/√Hz. The spectral density of the magnetic field noise, referred to one sensing coil, is 3.0 fT/√Hz, resulting in a gradient spectral noise of 0.6 fT/(cm √Hz). In order to verify the effectiveness of such sensors for biomagnetic applications, the magnetic response to a current dipole has been calculated and the results have been compared with those of an analogous axial gradiometer; the results show no significant difference. Due to their high intrinsic balance and good performance, planar gradiometers may be the elective sensors for biomagnetic applications in a softly shielded environment.
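    The quoted gradient noise follows directly from the field noise and the baseline: dividing the per-coil field noise by the 5 cm (50 mm) baseline gives the gradient spectral noise. A one-line check:

```python
# Sanity check of the gradiometer figures: gradient noise is the per-coil
# field noise spectral density divided by the baseline length.

field_noise_fT = 3.0    # fT/sqrt(Hz), referred to one sensing coil
baseline_cm = 5.0       # 50 mm baseline expressed in cm
gradient_noise = field_noise_fT / baseline_cm
print(gradient_noise)   # 0.6 fT/(cm sqrt(Hz)), matching the quoted value
```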

  19. Advanced synthetic image generation models and their application to multi/hyperspectral algorithm development

    NASA Astrophysics Data System (ADS)

    Schott, John R.; Brown, Scott D.; Raqueno, Rolando V.; Gross, Harry N.; Robinson, Gary

    1999-01-01

    The need for robust image data sets for algorithm development and testing has prompted the consideration of synthetic imagery as a supplement to real imagery. The unique ability of synthetic image generation (SIG) tools to supply per-pixel truth allows algorithm writers to test difficult scenarios that would require expensive collection and instrumentation efforts. In addition, SIG data products can supply the user with `actual' truth measurements of the entire image area that are not subject to measurement error thereby allowing the user to more accurately evaluate the performance of their algorithm. Advanced algorithms place a high demand on synthetic imagery to reproduce both the spectro-radiometric and spatial character observed in real imagery. This paper describes a synthetic image generation model that strives to include the radiometric processes that affect spectral image formation and capture. In particular, it addresses recent advances in SIG modeling that attempt to capture the spatial/spectral correlation inherent in real images. The model is capable of simultaneously generating imagery from a wide range of sensors allowing it to generate daylight, low-light-level and thermal image inputs for broadband, multi- and hyper-spectral exploitation algorithms.

  20. Evaluation of sensor, environment and operational factors impacting the use of multiple sensor constellations for long term resource monitoring

    NASA Astrophysics Data System (ADS)

    Rengarajan, Rajagopalan

    Moderate resolution remote sensing data offer the potential to monitor long- and short-term trends in the condition of the Earth's resources at finer spatial scales and over longer time periods. While improved calibration (radiometric and geometric), free access (Landsat, Sentinel, CBERS), and higher-level products in reflectance units have made it easier for the science community to derive biophysical parameters from these remotely sensed data, a number of issues still affect the analysis of multi-temporal datasets. These are primarily due to sources that are inherent in the process of imaging from single or multiple sensors. Some of these undesired or uncompensated sources of variation include variation in the view angles, illumination angles, atmospheric effects, and sensor effects such as Relative Spectral Response (RSR) variation between different sensors. The complex interaction of these sources of variation would make their study extremely difficult if not impossible with real data, and therefore a simulated analysis approach is used in this study. A synthetic forest canopy is produced using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model and its measured BRDFs are modeled using the RossLi canopy BRDF model. The simulated BRDF matches the real data to within 2% of the reflectance in the red and NIR spectral bands studied. The BRDF modeling process is extended to model and characterize the defoliation of a forest, which is used in factor sensitivity studies to estimate the effect of each factor for varying environment and sensor conditions. Finally, a factorial experiment is designed to understand the significance of the sources of variation, and regression-based analyses are performed to understand the relative importance of the factors. 
The design of experiment and the sensitivity analysis conclude that atmospheric attenuation and variations due to the illumination angles are the dominant sources impacting the at-sensor radiance.
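    The RossLi model used above is linear in its three weights: R = f_iso + f_vol·K_vol + f_geo·K_geo, where the kernel values depend only on viewing and illumination geometry. A minimal sketch of fitting the weights from multi-angle samples by least squares follows; the kernel values are illustrative placeholders (the actual RossThick and LiSparse kernel formulas are not reproduced here).

```python
# Sketch of the RossLi kernel-driven BRDF fit: R = f_iso + f_vol*K_vol +
# f_geo*K_geo is linear in the weights, so multi-angle samples yield the
# weights via normal equations. Kernel values here are illustrative.

def solve3(A, y):
    """Solve a 3x3 system A @ x = y by Cramer's rule."""
    (a, b, c), (d, e, f), (g, h, k) = A
    det = a*(e*k - f*h) - b*(d*k - f*g) + c*(d*h - e*g)
    x0 = (y[0]*(e*k - f*h) - b*(y[1]*k - f*y[2]) + c*(y[1]*h - e*y[2])) / det
    x1 = (a*(y[1]*k - f*y[2]) - y[0]*(d*k - f*g) + c*(d*y[2] - y[1]*g)) / det
    x2 = (a*(e*y[2] - y[1]*h) - b*(d*y[2] - y[1]*g) + y[0]*(d*h - e*g)) / det
    return (x0, x1, x2)

def fit_rossli(samples):
    """Least-squares (f_iso, f_vol, f_geo) from (K_vol, K_geo, R) samples."""
    # Normal equations: (X^T X) f = X^T r with rows x = (1, K_vol, K_geo).
    XtX = [[0.0] * 3 for _ in range(3)]
    Xtr = [0.0] * 3
    for kvol, kgeo, r in samples:
        x = (1.0, kvol, kgeo)
        for i in range(3):
            Xtr[i] += x[i] * r
            for j in range(3):
                XtX[i][j] += x[i] * x[j]
    return solve3(XtX, Xtr)

# Synthetic multi-angle samples generated from known weights (0.3, 0.1, 0.05).
geom = [(-0.1, -1.2), (0.0, -0.9), (0.2, -0.6), (0.4, -0.3)]
obs = [(kv, kg, 0.3 + 0.1 * kv + 0.05 * kg) for kv, kg in geom]
print(tuple(round(f, 3) for f in fit_rossli(obs)))  # recovers the weights
```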

  1. Processing and analysis of commercial satellite image data of the nuclear accident near Chernobyl, U.S.S.R.

    USGS Publications Warehouse

    Sadowski, Franklin G.; Covington, Steven J.

    1987-01-01

    Advanced digital processing techniques were applied to Landsat-5 Thematic Mapper (TM) data and SPOT high-resolution visible (HRV) panchromatic data to maximize the utility of images of a nuclear powerplant emergency at Chernobyl in the Soviet Ukraine. The images demonstrate the unique interpretive capabilities provided by the numerous spectral bands of the Thematic Mapper and the high spatial resolution of the SPOT HRV sensor.

  2. Athena Microscopic Imager investigation

    NASA Astrophysics Data System (ADS)

    Herkenhoff, K. E.; Squyres, S. W.; Bell, J. F.; Maki, J. N.; Arneson, H. M.; Bertelsen, P.; Brown, D. I.; Collins, S. A.; Dingizian, A.; Elliott, S. T.; Goetz, W.; Hagerott, E. C.; Hayes, A. G.; Johnson, M. J.; Kirk, R. L.; McLennan, S.; Morris, R. V.; Scherr, L. M.; Schwochert, M. A.; Shiraishi, L. R.; Smith, G. H.; Soderblom, L. A.; Sohl-Dickstein, J. N.; Wadsworth, M. V.

    2003-11-01

    The Athena science payload on the Mars Exploration Rovers (MER) includes the Microscopic Imager (MI). The MI is a fixed-focus camera mounted on the end of an extendable instrument arm, the Instrument Deployment Device (IDD). The MI was designed to acquire images at a spatial resolution of 30 microns/pixel over a broad spectral range (400-700 nm). The MI uses the same electronics design as the other MER cameras but has optics that yield a field of view of 31 × 31 mm across a 1024 × 1024 pixel CCD image. The MI acquires images using only solar or skylight illumination of the target surface. A contact sensor is used to place the MI slightly closer to the target surface than its best focus distance (about 66 mm), allowing concave surfaces to be imaged in good focus. Coarse focusing (~2 mm precision) is achieved by moving the IDD away from a rock target after the contact sensor has been activated. The MI optics are protected from the Martian environment by a retractable dust cover. The dust cover includes a Kapton window that is tinted orange to restrict the spectral bandpass to 500-700 nm, allowing color information to be obtained by taking images with the dust cover open and closed. MI data will be used to place other MER instrument data in context and to aid in petrologic and geologic interpretations of rocks and soils on Mars.

  3. Classification of Liss IV Imagery Using Decision Tree Methods

    NASA Astrophysics Data System (ADS)

    Verma, Amit Kumar; Garg, P. K.; Prasad, K. S. Hari; Dadhwal, V. K.

    2016-06-01

    Image classification is an essential step in any remote sensing study. Classification uses the spectral information represented by the digital numbers in one or more spectral bands and attempts to classify each individual pixel on that basis. Crop classification is a central concern of remote sensing applications for developing sustainable agriculture systems. Vegetation indices computed from satellite images give a good indication of the presence of vegetation, describing its greenness, density and health. Texture is also an important characteristic used to identify objects or regions of interest in an image. This paper illustrates the use of decision tree methods to separate crop land from non-crop land and to classify different crops. We evaluate the possibility of crop classification using an integrated approach based on texture properties and vegetation indices for single-date LISS IV sensor data at 5.8 m spatial resolution. Eleven vegetation indices (NDVI, DVI, GEMI, GNDVI, MSAVI2, NDWI, NG, NR, NNIR, OSAVI and VI green) were generated from the green, red and NIR bands, and the image was then classified using the decision tree method. The second approach integrates texture features (mean, variance, kurtosis and skewness) with these vegetation indices. A comparison between the two methods indicates that including textural features with vegetation indices yields classified maps with 8.33% higher accuracy for Indian satellite IRS-P6 LISS IV sensor images.
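
    As an illustration of the indices named above, a few of them can be computed directly from the green, red and NIR bands with NumPy. This is a generic sketch with toy reflectance values, not the paper's processing chain:

```python
import numpy as np

def vegetation_indices(green, red, nir):
    """Compute a few of the indices named in the abstract from
    green, red and NIR band arrays (reflectance, float)."""
    eps = 1e-10                                      # avoid division by zero
    ndvi = (nir - red) / (nir + red + eps)           # Normalized Difference VI
    dvi = nir - red                                  # Difference VI
    gndvi = (nir - green) / (nir + green + eps)      # Green NDVI
    return ndvi, dvi, gndvi

# Toy 2x2 "image": dense vegetation has high NIR and low red reflectance.
green = np.array([[0.10, 0.12], [0.09, 0.11]])
red = np.array([[0.05, 0.30], [0.04, 0.28]])
nir = np.array([[0.45, 0.35], [0.50, 0.33]])
ndvi, dvi, gndvi = vegetation_indices(green, red, nir)
```

    In the toy image, the vegetated pixel (0, 0) gets a high NDVI, while the mixed pixel (0, 1) scores much lower.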

  4. CLARREO Approach for Reference Intercalibration of Reflected Solar Sensors: On-Orbit Data Matching and Sampling

    NASA Technical Reports Server (NTRS)

    Roithmayr, Carlos; Lukashin, Constantine; Speth, Paul W.; Kopp, Gregg; Thome, Kurt; Wielicki, Bruce A.; Young, David F.

    2014-01-01

    The implementation of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission was recommended by the National Research Council in 2007 to provide an on-orbit intercalibration standard with accuracy of 0.3% (k = 2) for relevant Earth observing sensors. The goal of reference intercalibration, as established in the Decadal Survey, is to enable rigorous high-accuracy observations of critical climate change parameters, including reflected broadband radiation [Clouds and Earth's Radiant Energy System (CERES)], cloud properties [Visible Infrared Imaging Radiometer Suite (VIIRS)], and changes in surface albedo, including snow and ice albedo feedback. In this paper, we describe the CLARREO approach for performing intercalibration on orbit in the reflected solar (RS) wavelength domain. It is based on providing highly accurate spectral reflectance and reflected radiance measurements from the CLARREO Reflected Solar Spectrometer (RSS) to establish an on-orbit reference for existing sensors, namely, CERES and VIIRS on Joint Polar Satellite System satellites, Advanced Very High Resolution Radiometer and follow-on imagers on MetOp, Landsat imagers, and imagers on geostationary platforms. One of two fundamental CLARREO mission goals is to provide sufficient sampling of high-accuracy observations that are matched in time, space, and viewing angles with measurements made by existing instruments, to a degree that overcomes the random error sources from imperfect data matching and instrument noise. The data matching is achieved through CLARREO RSS pointing operations on orbit that align its line of sight with the intercalibrated sensor. These operations must be planned in advance; therefore, intercalibration events must be predicted by orbital modeling. If two competing opportunities are identified, one target sensor must be given priority over the other. 
The intercalibration method is to monitor changes in targeted sensor response function parameters: effective offset, gain, nonlinearity, optics spectral response, and sensitivity to polarization. In this paper, we use existing satellite data and orbital simulation methods to determine mission requirements for CLARREO, its instrument pointing ability, methodology, and the intercalibration sampling and data matching needed for accurate intercalibration of RS radiation sensors on orbit.
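
    The monitored gain and offset parameters can be illustrated with a simple matched-sample regression: given coincident reference radiances and target sensor responses, an effective gain and offset follow from least squares. This is a generic sketch with synthetic data, not the CLARREO team's actual intercalibration algorithm:

```python
import numpy as np

# Matched samples: reference radiance (e.g., from a reference spectrometer)
# vs. target sensor response, with noise. All values are illustrative.
rng = np.random.default_rng(0)
reference = rng.uniform(10.0, 100.0, size=500)   # W m^-2 sr^-1 um^-1
true_gain, true_offset = 2.5, 4.0
target = true_gain * reference + true_offset + rng.normal(0.0, 0.5, 500)

# Fit the target sensor's effective gain and offset by least squares.
A = np.column_stack([reference, np.ones_like(reference)])
(gain, offset), *_ = np.linalg.lstsq(A, target, rcond=None)
```

    With enough well-matched samples, the random error sources from imperfect matching and instrument noise average out of the fitted parameters, which is the sampling argument the abstract makes.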

  5. Imaging Beyond What Man Can See

    NASA Technical Reports Server (NTRS)

    May, George; Mitchell, Brian

    2004-01-01

    Three lightweight, portable hyperspectral sensor systems have been built that capture energy from 200 to 1700 nanometers (ultraviolet to shortwave infrared). The sensors incorporate a line scanning technique that requires no relative movement between the target and the sensor. This unique capability, combined with portability, opens up new uses of hyperspectral imaging for laboratory and field environments. Each system has a GUI-based software package that allows the user to communicate with the imaging device to set spatial resolution, spectral bands and other parameters. NASA's Space Partnership Development has sponsored these innovative developments and their application to human problems on Earth and in space. Hyperspectral datasets have been captured and analyzed in numerous areas including precision agriculture, food safety, biomedical imaging, and forensics. Discussion of research results will include real-time detection of food contaminants, mold and toxin research on corn, identification of counterfeit documents, non-invasive wound monitoring and aircraft applications. Future research will include development of a thermal infrared hyperspectral sensor that will support natural resource applications on Earth and thermal analyses during long-duration space flight. This paper incorporates a variety of disciplines and imaging technologies that have been linked together to allow the expansion of remote sensing across both traditional and non-traditional boundaries.

  6. Analysis of airborne imaging spectrometer data for the Ruby Mountains, Montana, by use of absorption-band-depth images

    NASA Technical Reports Server (NTRS)

    Brickey, David W.; Crowley, James K.; Rowan, Lawrence C.

    1987-01-01

    Airborne Imaging Spectrometer-1 (AIS-1) data were obtained for an area of amphibolite-grade metamorphic rocks that has moderate rangeland vegetation cover. Although rock exposures are sparse and patchy at this site, soils are visible through the vegetation and typically comprise 20 to 30 percent of the surface area. Channel-averaged band-depth images were produced for diagnostic soil and rock absorption bands, and sets of three such images were combined to produce color-composite band-depth images. This relatively simple approach did not require extensive calibration efforts and was effective for discerning a number of spectrally distinctive rocks and soils, including soils having high talc concentrations. The results show that the high spectral and spatial resolution of AIS-1 and future sensors holds considerable promise for mapping mineral variations in soil, even in moderately vegetated areas.
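
    Band-depth imagery of this kind is typically based on continuum removal: band depth is one minus the ratio of the reflectance at the band centre to a linear continuum fitted between the band shoulders. A minimal sketch with a synthetic absorption feature (the wavelengths and depths below are illustrative, not values from the Ruby Mountains data):

```python
import numpy as np

def band_depth(wavelengths, reflectance, left, center, right):
    """Depth of an absorption band after removing a linear continuum
    fitted between the two shoulder wavelengths."""
    r = np.interp([left, center, right], wavelengths, reflectance)
    frac = (center - left) / (right - left)
    continuum = r[0] + frac * (r[2] - r[0])   # continuum value at band centre
    return 1.0 - r[1] / continuum             # 0 = no band; larger = deeper

# Synthetic spectrum with a Gaussian absorption feature near 2.3 um.
wl = np.linspace(2.0, 2.5, 200)
refl = 0.6 - 0.15 * np.exp(-((wl - 2.3) / 0.02) ** 2)
depth = band_depth(wl, refl, 2.2, 2.3, 2.4)
```

    Computing such a depth per pixel for three different absorption bands, then mapping the three depth images to red, green and blue, gives the color-composite band-depth images described above.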

  7. Multisensor data fusion across time and space

    NASA Astrophysics Data System (ADS)

    Villeneuve, Pierre V.; Beaven, Scott G.; Reed, Robert A.

    2014-06-01

    Field measurement campaigns typically deploy numerous sensors having different sampling characteristics in the spatial, temporal, and spectral domains. Data analysis and exploitation are made more difficult and time consuming when the sample data grids of the sensors do not align. This report summarizes our recent effort to demonstrate the feasibility of a processing chain capable of "fusing" image data from multiple independent and asynchronous sensors into a form amenable to analysis and exploitation using commercially available tools. Two important technical issues were addressed in this work: 1) image spatial registration onto a common pixel grid, and 2) image temporal interpolation onto a common time base. The first step leverages existing image matching and registration algorithms. The second step relies upon a new and innovative use of optical flow algorithms to perform accurate temporal upsampling of slower frame rate imagery. Optical flow field vectors were first derived from high-frame-rate, high-resolution imagery, and then used as a basis for temporal upsampling of the slower frame rate sensor's imagery. Optical flow field values are computed using a multi-scale image pyramid, thus allowing for more extreme object motion; this involves preprocessing imagery at varying resolution scales and initializing new vector flow estimates using those from the previous, coarser-resolution image. Overall performance of this processing chain is demonstrated using sample data involving complex object motion observed by multiple sensors mounted to the same base, ranging from a high-speed visible camera to a coarser-resolution LWIR camera.
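
    The temporal upsampling step can be sketched as a fractional warp along the flow field: a frame at fractional time alpha between two acquisitions is synthesized by displacing each pixel by alpha times its flow vector. A minimal nearest-neighbour version in NumPy (the report's actual flow estimation and warping are more sophisticated):

```python
import numpy as np

def warp_fraction(frame, flow, alpha):
    """Synthesize an intermediate frame at fractional time alpha in [0, 1]
    by backward-warping `frame` (time t0) along a per-pixel flow field
    holding (vx, vy) displacements from t0 to t1."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Pixel (y, x) at time t0 + alpha came from (y - alpha*vy, x - alpha*vx).
    src_y = np.clip(np.rint(ys - alpha * flow[..., 1]), 0, h - 1).astype(int)
    src_x = np.clip(np.rint(xs - alpha * flow[..., 0]), 0, w - 1).astype(int)
    return frame[src_y, src_x]

# Impulse moving 4 px to the right between t0 and t1.
f0 = np.zeros((8, 8)); f0[4, 2] = 1.0
flow = np.zeros((8, 8, 2)); flow[..., 0] = 4.0   # uniform (vx, vy) field
half = warp_fraction(f0, flow, 0.5)               # impulse now at column 4
```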

  8. pHlash: A New Genetically Encoded and Ratiometric Luminescence Sensor of Intracellular pH

    PubMed Central

    Robertson, J. Brian; Johnson, Carl Hirschie

    2012-01-01

    We report the development of a genetically encodable and ratiometric pH probe named “pHlash” that utilizes Bioluminescence Resonance Energy Transfer (BRET) rather than fluorescence excitation. The pHlash sensor, composed of a donor luciferase that is genetically fused to a Venus fluorophore, exhibits pH dependence of its spectral emission in vitro. When expressed in either yeast or mammalian cells, pHlash reports basal pH and cytosolic acidification in vivo. Its spectral ratio response is H+ specific; neither Ca++, Mg++, Na+, nor K+ changes the spectral form of its luminescence emission. Moreover, it can be used to image pH in single cells. This is the first BRET-based sensor of H+ ions, and it should allow the approximation of pH in cytosolic and organellar compartments in applications where current pH probes are inadequate. PMID:22905204

  9. Convolutional Sparse Coding for RGB+NIR Imaging.

    PubMed

    Hu, Xuemei; Heide, Felix; Dai, Qionghai; Wetzstein, Gordon

    2018-04-01

    Emerging sensor designs increasingly rely on novel color filter arrays (CFAs) to sample the incident spectrum in unconventional ways. In particular, capturing a near-infrared (NIR) channel along with conventional RGB color is an exciting new imaging modality. RGB+NIR sensing has broad applications in computational photography, such as low-light denoising; in computer vision, such as facial recognition and tracking; and it paves the way toward low-cost single-sensor RGB and depth imaging using structured illumination. However, cost-effective commercial CFAs suffer from severe spectral cross talk. This cross talk represents a major challenge in high-quality RGB+NIR imaging, rendering existing spatially multiplexed sensor designs impractical. In this work, we introduce a new approach to RGB+NIR image reconstruction using learned convolutional sparse priors. We demonstrate high-quality color and NIR imaging for challenging scenes, even including high-frequency structured NIR illumination. The effectiveness of the proposed method is validated on a large data set of experimental captures and on simulated benchmark results, demonstrating that this work achieves unprecedented reconstruction quality.
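
    Sparse-prior reconstruction of this kind builds on iterative shrinkage. The sketch below shows plain (non-convolutional) ISTA for a fixed random dictionary, minimizing ||x - Dz||^2 + lam*||z||_1; the paper instead learns convolutional dictionaries and solves a joint RGB+NIR reconstruction problem, so this is only a structural illustration:

```python
import numpy as np

def soft_threshold(v, t):
    """Shrinkage operator: the proximal map of the l1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(x, D, lam=0.01, n_iter=200):
    """Iterative shrinkage-thresholding for min ||x - Dz||^2 + lam*||z||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)           # gradient of the data term
        z = soft_threshold(z - grad / L, lam / L)
    return z

# Recover a 2-sparse code from a 20-dim measurement of a 50-atom dictionary.
rng = np.random.default_rng(4)
D = rng.normal(size=(20, 50)); D /= np.linalg.norm(D, axis=0)
z_true = np.zeros(50); z_true[[3, 17]] = [1.5, -2.0]
x = D @ z_true
z_hat = ista(x, D)
```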

  10. Calibration and Validation of the National Ecological Observatory Network's Airborne Imaging Spectrometers

    NASA Astrophysics Data System (ADS)

    Leisso, N.

    2015-12-01

    The National Ecological Observatory Network (NEON) is being constructed by the National Science Foundation and is slated for completion in 2017. NEON is designed to collect data to improve the understanding of changes in observed ecosystems. The observatory will produce data products on a variety of spatial and temporal scales collected from individual sites strategically located across the U.S. including Alaska, Hawaii, and Puerto Rico. Data sources include standardized terrestrial, instrumental, and aquatic observation systems in addition to three airborne remote sensing observation systems installed into leased Twin Otter aircraft. The Airborne Observation Platforms (AOP) are designed to collect 3-band aerial imagery, waveform and discrete LiDAR, and high-fidelity imaging spectroscopy data over the NEON sites annually at or near peak greenness. The NEON Imaging Spectrometer (NIS) is a Visible and Shortwave Infrared (VSWIR) sensor designed by NASA JPL for ecological applications. Spectroscopic data are collected at 5-nm intervals across the solar-reflective spectral region (380 nm to 2500 nm) in a 34-degree FOV swath. A key uncertainty driver for the derived NEON remote sensing data products is the calibration of the imaging spectrometers. In addition, the calibration and accuracy of the higher-level data product algorithms are essential to the overall NEON mission to detect changes in the observed ecosystems over the 30-year expected lifetime. The typical calibration workflow of the NIS consists of characterizing the focal plane, spectral calibration, and radiometric calibration. Laboratory spectral calibration is based on well-defined emission lines in conjunction with a scanning monochromator to define the individual spectral response functions. The radiometric calibration is NIST traceable and transferred to the NIS with an integrating sphere calibrated through the use of transfer radiometers. 
The laboratory calibration is monitored and maintained through the use of an On-Board Calibration (OBC) system. Recent advances in the understanding of the NIS sensor that have led to improvements in the overall calibration accuracy are reported. In addition, the NIS calibration and data products are compared to Earth-observing satellite sensors.
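
    One common step in such a workflow is modelling each band's spectral response function (SRF) as a Gaussian defined by a centre wavelength and FWHM, then resampling finely-sampled spectra through those SRFs. An illustrative NumPy sketch (the band centres and FWHM below are assumptions, not the characterized NIS values):

```python
import numpy as np

def gaussian_srf_resample(wl_hi, spectrum, centers, fwhm):
    """Resample a finely-sampled spectrum to instrument bands modelled
    as Gaussian spectral response functions (weighted means on a
    uniform wavelength grid)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    out = np.empty(len(centers))
    for i, c in enumerate(centers):
        srf = np.exp(-0.5 * ((wl_hi - c) / sigma) ** 2)
        out[i] = (srf * spectrum).sum() / srf.sum()
    return out

wl = np.arange(380.0, 2500.0, 0.5)         # nm, fine reference grid
flat = np.ones_like(wl)                    # flat radiance spectrum
centers = np.arange(400.0, 2400.0, 5.0)    # 5-nm band spacing, as in the NIS
bands = gaussian_srf_resample(wl, flat, centers, fwhm=7.5)
```

    A flat input spectrum maps to a flat band-averaged spectrum, which makes a convenient sanity check on the resampling weights.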

  11. Characterization and initial field test of a long wave thermal infrared hyperspectral imager for measuring SO2 in volcanic plumes

    NASA Astrophysics Data System (ADS)

    Gabrieli, A.; Wright, R.; Porter, J. N.; Lucey, P. G.; Crites, S.; Garbeil, H.; Pilger, E. J.; Wood, M.

    2015-12-01

    The ability to quantify volcanic SO2 and image its spatial distribution in plumes, either by day or by night, would be beneficial to volcanologists. In this project, a newly developed remote sensing long-wave thermal infrared imaging hyperspectral sensor was tested. The system employs a Sagnac interferometer and an uncooled microbolometer in a rapid scanning configuration. The instrument collects hyperspectral images of the scene between 8 and 14 μm, and for each pixel a spectrum containing 50 samples can be retrieved. Images are spectrally and radiometrically calibrated using an IR source with a narrow band filter and two black bodies. The sensitivity of the system was studied by using a gas cell containing various known concentrations of SO2, representative of those found in volcanic plumes. Measured spectra were compared with theoretical spectra obtained from MODTRAN5 with the same viewing geometry and spectral resolution as the sensor. The MODTRAN5 calculations were carried out using a radiative transfer algorithm that accounts for transmission and emission both inside and outside of the gas cell. These preliminary results and field measurements at Kīlauea volcano, Hawai'i, will be discussed, demonstrating the performance of the system and its ability to retrieve SO2 plume concentrations.
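
    The gas-cell sensitivity study can be related to the Beer-Lambert law, T = exp(-sigma * N * L), linking transmission to the absorption cross-section sigma, molecular number density N and path length L. A sketch with assumed (not measured) values:

```python
import numpy as np

# Beer-Lambert transmission through a gas cell. The cross-section and
# cell length below are illustrative assumptions, not instrument values.
sigma = 2.0e-19      # absorption cross-section, cm^2 per molecule (assumed)
path = 10.0          # cell length, cm (assumed)

def transmission(conc_ppm, pressure_pa=101325.0, temp_k=296.0):
    """Transmission of the cell for a given SO2 mixing ratio in ppm."""
    k_b = 1.380649e-23                        # Boltzmann constant, J/K
    n_air = pressure_pa / (k_b * temp_k)      # air number density, m^-3
    n_gas = conc_ppm * 1e-6 * n_air * 1e-6    # gas number density, cm^-3
    return np.exp(-sigma * n_gas * path)
```

    Higher concentrations absorb more strongly, so transmission decreases monotonically with concentration; fitting measured transmissions against this model is one simple way to retrieve a concentration.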

  12. Exploiting Satellite Focal Plane Geometry for Automatic Extraction of Traffic Flow from Single Optical Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Krauß, T.

    2014-11-01

    The focal plane assembly of most pushbroom scanner satellites is built up in such a way that the different multispectral, or multispectral and panchromatic, bands are not all acquired at exactly the same time. This effect is due to offsets of some millimeters between the CCD lines in the focal plane. Exploiting this special configuration allows the detection of objects moving during this small time span. In this paper we present a method for automatic detection and extraction of moving objects - mainly traffic - from single very high resolution optical satellite images of different sensors. The sensors investigated are WorldView-2, RapidEye, Pléiades and also the new SkyBox satellites. Different sensors require different approaches for detecting moving objects. Since moving objects are mapped to different positions only in different spectral bands, the change of spectral properties also has to be taken into account. In the case where the main offset in the focal plane is between the multispectral and the panchromatic CCD lines, as for Pléiades, an approach of weighted integration to obtain largely identical images is investigated. Other approaches for RapidEye and WorldView-2 are also shown. From these intermediate bands, difference images are calculated and a method for detecting the moving objects from these difference images is proposed. Based on the presented methods, images from different sensors are processed and the results are assessed for detection quality - how many moving objects are detected and how many are missed - and accuracy - how accurate the derived speed and size of the objects are. Finally the results are discussed and an outlook on possible improvements towards operational processing is presented.
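
    Once an object's displacement between two bands has been measured, its ground speed follows directly from the ground sample distance and the known inter-band time offset. A minimal sketch with illustrative numbers (actual time offsets depend on each sensor's focal-plane geometry):

```python
def ground_speed(displacement_px, gsd_m, band_time_offset_s):
    """Ground speed in m/s from the pixel displacement of an object
    between two band images acquired band_time_offset_s apart,
    at ground sample distance gsd_m."""
    return displacement_px * gsd_m / band_time_offset_s

# Illustrative case: a car displaced 3 pixels between two bands of a
# 0.7 m GSD image with an assumed 0.15 s inter-band delay.
speed = ground_speed(displacement_px=3, gsd_m=0.7, band_time_offset_s=0.15)
kmh = speed * 3.6
```

    The same relation also bounds the detectable speed range: objects moving less than roughly one pixel within the inter-band delay cannot be separated from registration noise.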

  13. Software defined multi-spectral imaging for Arctic sensor networks

    NASA Astrophysics Data System (ADS)

    Siewert, Sam; Angoth, Vivek; Krishnamurthy, Ramnarayan; Mani, Karthikeyan; Mock, Kenrick; Singh, Surjith B.; Srivistava, Saurav; Wagner, Chris; Claus, Ryan; Vis, Matthew Demi

    2016-05-01

    Availability of off-the-shelf infrared sensors combined with high definition visible cameras has made possible the construction of a Software Defined Multi-Spectral Imager (SDMSI) combining long-wave, near-infrared and visible imaging. The SDMSI requires a real-time embedded processor to fuse images and to create real-time depth maps for opportunistic uplink in sensor networks. Researchers at Embry Riddle Aeronautical University working with University of Alaska Anchorage at the Arctic Domain Awareness Center and the University of Colorado Boulder have built several versions of a low-cost drop-in-place SDMSI to test alternatives for power efficient image fusion. The SDMSI is intended for use in field applications including marine security, search and rescue operations and environmental surveys in the Arctic region. Based on Arctic marine sensor network mission goals, the team has designed the SDMSI to include features to rank images based on saliency and to provide on camera fusion and depth mapping. A major challenge has been the design of the camera computing system to operate within a 10 to 20 Watt power budget. This paper presents a power analysis of three options: 1) multi-core, 2) field programmable gate array with multi-core, and 3) graphics processing units with multi-core. For each test, power consumed for common fusion workloads has been measured at a range of frame rates and resolutions. Detailed analyses from our power efficiency comparison for workloads specific to stereo depth mapping and sensor fusion are summarized. Preliminary mission feasibility results from testing with off-the-shelf long-wave infrared and visible cameras in Alaska and Arizona are also summarized to demonstrate the value of the SDMSI for applications such as ice tracking, ocean color, soil moisture, animal and marine vessel detection and tracking. 
The goal is to select the most power efficient solution for the SDMSI for use on UAVs (Unoccupied Aerial Vehicles) and other drop-in-place installations in the Arctic. The prototype selected will be field tested in Alaska in the summer of 2016.

  14. Characteristics of active spectral sensor for plant sensing

    USDA-ARS?s Scientific Manuscript database

    Plant stress has been estimated by spectral signature using both passive and active sensors. As optical sensors measure reflected light from a target, changes in illumination conditions critically affect sensor response. Active spectral sensors minimize the illumination effects by producing their ...

  15. The use of the Sonoran Desert as a pseudo-invariant site for optical sensor cross-calibration and long-term stability monitoring

    USGS Publications Warehouse

    Angal, A.; Chander, Gyanesh; Choi, Taeyoung; Wu, Aisheng; Xiong, Xiaoxiong

    2010-01-01

    The Sonoran Desert is a large, flat, pseudo-invariant site near the United States-Mexico border. It is one of the largest and hottest deserts in North America, with an area of 311,000 square km. This site is particularly suitable for calibration purposes because of its high spatial and spectral uniformity and reasonable temporal stability. This study uses measurements from four different sensors, Terra Moderate Resolution Imaging Spectroradiometer (MODIS), Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+), Aqua MODIS, and Landsat 5 (L5) Thematic Mapper (TM), to assess the suitability of this site for long-term stability monitoring and to evaluate the “radiometric calibration differences” between spectrally matching bands of all four sensors. In general, the drift in the top-of-atmosphere (TOA) reflectance of each sensor over a span of nine years is within the specified calibration uncertainties. Monthly precipitation measurements of the Sonoran Desert region were obtained from the Global Historical Climatology Network (GHCN), and their effects on the retrieved TOA reflectances were evaluated. To account for the combined uncertainties in the TOA reflectance due to the surface and atmospheric Bi-directional Reflectance Distribution Function (BRDF), a semi-empirical BRDF model has been adopted to monitor and reduce the impact of illumination geometry differences on the retrieved TOA reflectances. To evaluate calibration differences between the MODIS and Landsat sensors, correction for spectral response differences using a hyperspectral sensor is also demonstrated.
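
    The TOA reflectance referred to here follows the standard conversion from at-sensor spectral radiance, rho = pi * L * d^2 / (ESUN * cos(theta_s)), as applied, for example, to Landsat bands. A sketch with illustrative (not scene-specific) values:

```python
import numpy as np

def toa_reflectance(radiance, esun, sun_elev_deg, d_au):
    """Top-of-atmosphere reflectance from at-sensor spectral radiance L,
    exoatmospheric solar irradiance ESUN, solar elevation angle, and
    Earth-Sun distance d in astronomical units."""
    theta_s = np.deg2rad(90.0 - sun_elev_deg)    # solar zenith angle
    return np.pi * radiance * d_au ** 2 / (esun * np.cos(theta_s))

# Illustrative values, not taken from a specific Sonoran Desert scene:
rho = toa_reflectance(radiance=80.0, esun=1536.0, sun_elev_deg=60.0, d_au=1.0)
```

    Trending such reflectances over matched site overpasses, after BRDF correction for illumination geometry, is what allows the drift comparison described above.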

  16. A Method for Application of Classification Tree Models to Map Aquatic Vegetation Using Remotely Sensed Images from Different Sensors and Dates

    PubMed Central

    Jiang, Hao; Zhao, Dehua; Cai, Ying; An, Shuqing

    2012-01-01

    In previous attempts to identify aquatic vegetation from remotely-sensed images using classification trees (CT), the images used to apply CT models to different times or locations necessarily originated from the same satellite sensor as that from which the original images used in model development came, greatly limiting the application of CT. We have developed an effective normalization method to improve the robustness of CT models when applied to images originating from different sensors and dates. A total of 965 ground-truth samples of aquatic vegetation types were obtained in 2009 and 2010 in Taihu Lake, China. Using relevant spectral indices (SI) as classifiers, we manually developed a stable CT model structure and then applied a standard CT algorithm to obtain quantitative (optimal) thresholds from 2009 ground-truth data and images from Landsat7-ETM+, HJ-1B-CCD, Landsat5-TM and ALOS-AVNIR-2 sensors. Optimal CT thresholds produced average classification accuracies of 78.1%, 84.7% and 74.0% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. However, the optimal CT thresholds for different sensor images differed from each other, with an average relative variation (RV) of 6.40%. We developed and evaluated three new approaches to normalizing the images. The best-performing method (Method of 0.1% index scaling) normalized the SI images using tailored percentages of extreme pixel values. Using the images normalized by Method of 0.1% index scaling, CT models for a particular sensor in which thresholds were replaced by those from the models developed for images originating from other sensors provided average classification accuracies of 76.0%, 82.8% and 68.9% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. Applying the CT models developed for normalized 2009 images to 2010 images resulted in high classification (78.0%–93.3%) and overall (92.0%–93.1%) accuracies. 
Our results suggest that Method of 0.1% index scaling provides a feasible way to apply CT models directly to images from sensors or time periods that differ from those of the images used to develop the original models.
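
    Percentile-based normalization of a spectral-index image can be sketched as follows; this is a generic version of the idea, and the paper's exact "Method of 0.1% index scaling" may differ in detail:

```python
import numpy as np

def percentile_scale(img, tail=0.1):
    """Rescale a spectral-index image to [0, 1], first clipping the lowest
    and highest `tail` percent of pixel values so that sensor-specific
    extremes do not dominate the scaling."""
    lo, hi = np.percentile(img, [tail, 100.0 - tail])
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

# Synthetic index image standing in for an SI layer from any sensor.
rng = np.random.default_rng(1)
ndvi = rng.normal(0.4, 0.2, size=(100, 100))
scaled = percentile_scale(ndvi)
```

    Because the scaling is driven by each image's own value distribution rather than by absolute radiometry, thresholds learned on one sensor's normalized indices become transferable to another's, which is the behavior the abstract reports.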

  17. A sampling procedure to guide the collection of narrow-band, high-resolution spatially and spectrally representative reflectance data. [satellite imagery of earth resources

    NASA Technical Reports Server (NTRS)

    Brand, R. R.; Barker, J. L.

    1983-01-01

    A multistage sampling procedure using image processing, geographical information systems, and analytical photogrammetry is presented which can be used to guide the collection of representative, high-resolution spectra and discrete reflectance targets for future satellite sensors. The procedure is general and can be adapted to characterize areas as small as minor watersheds and as large as multistate regions. Beginning with a user-determined study area, successive reductions in size and spectral variation are performed using image analysis techniques on data from the Multispectral Scanner, orbital and simulated Thematic Mapper, low altitude photography synchronized with the simulator, and associated digital data. An integrated image-based geographical information system supports processing requirements.

  18. Spectral-Spatial Classification of Hyperspectral Images Using Hierarchical Optimization

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2011-01-01

    A new spectral-spatial method for hyperspectral data classification is proposed. For a given hyperspectral image, probabilistic pixelwise classification is first applied. Then, a hierarchical step-wise optimization algorithm is performed by iteratively merging the neighboring regions with the smallest Dissimilarity Criterion (DC) and recomputing class labels for the new regions. The DC is computed by comparing region mean vectors, class labels and the number of pixels in the two regions under consideration. The algorithm converges when all pixels have been involved in the region merging procedure. Experimental results are presented for two remote sensing hyperspectral images acquired by the AVIRIS and ROSIS sensors. The proposed approach improves classification accuracies and provides maps with more homogeneous regions, when compared to previously proposed classification techniques.
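
    The region-merging idea can be sketched on a 1-D toy image, merging at each step the neighbouring pair with the smallest dissimilarity. Here the dissimilarity is just the distance between region means, whereas the paper's DC also compares class labels and region sizes:

```python
import numpy as np

def merge_regions(values, n_final):
    """Iteratively merge the most similar neighbouring regions of a
    1-D signal until n_final regions remain."""
    regions = [[v] for v in values]          # start: one region per pixel
    while len(regions) > n_final:
        means = [np.mean(r) for r in regions]
        dists = [abs(means[i] - means[i + 1]) for i in range(len(means) - 1)]
        i = int(np.argmin(dists))            # most similar neighbour pair
        regions[i:i + 2] = [regions[i] + regions[i + 1]]
    return regions

# Two spectrally distinct groups of pixels should end up as two regions.
segs = merge_regions([0.1, 0.2, 0.15, 0.9, 1.0, 0.95], n_final=2)
```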

  19. Spectral Band Selection for Urban Material Classification Using Hyperspectral Libraries

    NASA Astrophysics Data System (ADS)

    Le Bris, A.; Chehata, N.; Briottet, X.; Paparoditis, N.

    2016-06-01

    In urban areas, information concerning very high resolution land cover, and especially material maps, is necessary for several city modelling or monitoring applications; that is to say, knowledge concerning roofing materials or the different kinds of ground areas is required. Airborne remote sensing techniques appear convenient for providing such information at a large scale. However, results obtained using most traditional processing methods based on the usual red-green-blue-near-infrared multispectral images remain limited for such applications. A possible way to improve classification results is to enhance the imagery's spectral resolution using superspectral or hyperspectral sensors. This study intends to design a superspectral sensor dedicated to urban material classification, focusing particularly on the selection of optimal spectral band subsets for such a sensor. First, reflectance spectral signatures of urban materials were collected from 7 spectral libraries. Then, spectral optimization was performed using this data set. The band selection workflow included two steps, first optimising the number of spectral bands using an incremental method and then examining several possible optimised band subsets using a stochastic algorithm. The same wrapper relevance criterion, relying on a confidence measure of a Random Forests classifier, was used at both steps. To cope with the limited number of available spectra for several classes, additional synthetic spectra were generated from the collection of reference spectra: intra-class variability was simulated by multiplying reference spectra by a random coefficient. In the end, the selected band subsets were evaluated considering the classification quality reached using an RBF SVM classifier. It was confirmed that a limited band subset is sufficient to classify common urban materials. 
The important contribution of bands from the Short Wave Infra-Red (SWIR) spectral domain (1000-2400 nm) to material classification was also shown.
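
    The incremental (greedy forward) step of such a band-selection workflow can be sketched with a simple Fisher-ratio relevance score; the paper instead scores subsets with a Random Forests confidence measure, so this is only a structural illustration:

```python
import numpy as np

def forward_select(X, y, n_bands):
    """Greedy incremental band selection for a two-class problem: at each
    step, add the band that maximizes a Fisher-ratio score of the
    already-chosen subset (illustrative relevance criterion)."""
    def score(bands):
        Z = X[:, bands]
        mu0, mu1 = Z[y == 0].mean(0), Z[y == 1].mean(0)
        var = Z[y == 0].var(0) + Z[y == 1].var(0) + 1e-12
        return np.sum((mu0 - mu1) ** 2 / var)

    chosen = []
    while len(chosen) < n_bands:
        rest = [b for b in range(X.shape[1]) if b not in chosen]
        chosen.append(max(rest, key=lambda b: score(chosen + [b])))
    return chosen

# Two synthetic material classes that differ only in bands 2 and 5.
rng = np.random.default_rng(2)
X = rng.normal(0.0, 1.0, size=(200, 8))
y = np.repeat([0, 1], 100)
X[y == 1, 2] += 3.0
X[y == 1, 5] += 3.0
picked = forward_select(X, y, 2)
```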

  20. Quadrilinear CCD sensors for the multispectral channel of spaceborne imagers

    NASA Astrophysics Data System (ADS)

    Materne, Alex; Gili, Bruno; Laubier, David; Gimenez, Thierry

    2001-12-01

    The PLEIADES-HR Earth observation satellites will combine a high resolution panchromatic channel -- 0.7 m at nadir -- and a multispectral channel allowing a 2.8 m resolution. This paper presents the main specifications, design and performance of a 52 micron pitch quadrilinear CCD sensor developed by ATMEL under CNES contract for the multispectral channel of the PLEIADES-HR instrument. The monolithic CCD device includes four lines of 1500 pixels, each line dedicated to a narrow spectral band within the blue to near-infrared spectrum. The design of the photodiodes and CCD registers, which are larger than those developed up to now for CNES spaceborne imagers, needed some specific structures to break up the large equipotential areas where charge does not flow properly. Results are presented on the options that were tested to improve sensitivity, maintain transfer efficiency and reduce power dissipation. The four spectral bands are achieved by four stripe filters made by SAGEM-REOSC PRODUCTS on a glass substrate, to be assembled on the sensor window. Line-to-line spacing on the silicon die takes into account the results of a straylight analysis. A mineral layer with high optical absorption is deposited between the photosensitive lines to further reduce straylight.

  1. Spectral characterisation and noise performance of Vanilla—an active pixel sensor

    NASA Astrophysics Data System (ADS)

    Blue, Andrew; Bates, R.; Bohndiek, S. E.; Clark, A.; Arvanitis, Costas D.; Greenshaw, T.; Laing, A.; Maneuski, D.; Turchetta, R.; O'Shea, V.

    2008-06-01

    This work reports on the characterisation of a new active pixel sensor, Vanilla. The Vanilla comprises 512 × 512 pixels at a 25 μm pitch. The sensor has a 12-bit digital output in full-frame mode, and it can also be read out in analogue mode, in which a fully programmable region-of-interest (ROI) mode is available. In full frame, the sensor can operate at a readout rate of more than 100 frames per second (fps), while in ROI mode the speed depends on the size, shape and number of ROIs; for example, an ROI of 6 × 6 pixels can be read at 20,000 fps in analogue mode. Photon transfer curve (PTC) measurements allowed calculation of the sensor's read noise, shot noise, full-well capacity and camera gain constant. Spectral response measurements detailed the quantum efficiency (QE) of the detector through the UV and visible regions. Analysis of the ROI readout mode was also performed. These measurements suggest that the Vanilla APS (active pixel sensor) will be suitable for a wide range of applications, including particle physics and medical imaging.
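
    The photon transfer curve method mentioned above exploits the fact that, for shot-noise-limited signals, the signal variance (in DN^2) grows linearly with the mean signal (in DN), and the slope is 1/K, where K is the conversion gain in electrons per DN. A simulated sketch (assumed gain, not Vanilla's measured value):

```python
import numpy as np

# Simulate shot-noise-limited exposures at several mean illumination
# levels and recover the conversion gain from the mean-variance slope.
rng = np.random.default_rng(3)
K = 2.0                                           # e- per DN (assumed)
means, variances = [], []
for electrons in [500, 1000, 2000, 4000, 8000]:
    signal_e = rng.poisson(electrons, size=20000)  # Poisson shot noise
    signal_dn = signal_e / K                       # convert e- to DN
    means.append(signal_dn.mean())
    variances.append(signal_dn.var())
slope, _ = np.polyfit(means, variances, 1)         # slope = 1/K
gain_estimate = 1.0 / slope                        # recovers ~K e-/DN
```

    Read noise shows up as the variance intercept at zero signal, and full-well capacity as the point where the measured variance rolls off from this straight line.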

  2. Spectral Band Characterization for Hyperspectral Monitoring of Water Quality

    NASA Technical Reports Server (NTRS)

    Vermillion, Stephanie C.; Raqueno, Rolando; Simmons, Rulon

    2001-01-01

    A method for selecting the set of spectral characteristics that yields the smallest increase in prediction error is of interest to those using hyperspectral imaging (HSI) to monitor water quality. The spectral characteristics of interest for these applications are spectral bandwidth and band location. Three water quality constituents detectable via remote sensing are chlorophyll (CHL), total suspended solids (TSS), and colored dissolved organic matter (CDOM). Hyperspectral data provide a rich source of information regarding the content and composition of these materials, but often more data than an analyst can manage. This study addresses the spectral characteristics needed for water quality monitoring for two reasons. First, determining which spectral characteristics contribute most would greatly improve computational ease and efficiency. Second, understanding the capabilities of different spectral resolutions and specific spectral regions is an essential part of future system development and characterization. As new systems are developed and tested, water quality managers will be asked to determine sensor specifications that provide the most accurate and efficient water quality measurements. We address these issues using data from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and a set of models to predict constituent concentrations.
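    The band-selection idea can be sketched as a greedy forward search under a linear constituent-prediction model. The synthetic spectra, the informative band indices (5 and 12) and the error metric below are illustrative assumptions, not values from the study:

```python
import numpy as np

# Toy greedy forward band selection: repeatedly add the band that most
# reduces the error of a linear constituent-prediction model.
rng = np.random.default_rng(3)
n_samples, n_bands = 100, 20
X = rng.random((n_samples, n_bands))          # synthetic reflectance spectra
# Concentration depends mainly on two bands (hypothetical choice)
y = 3.0 * X[:, 5] - 2.0 * X[:, 12] + 0.05 * rng.standard_normal(n_samples)

def fit_rmse(bands):
    # in-sample RMSE of a least-squares fit using only the given bands
    A = np.column_stack([X[:, bands], np.ones(n_samples)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sqrt(np.mean((A @ coef - y) ** 2)))

selected = []
for _ in range(3):
    best = min((b for b in range(n_bands) if b not in selected),
               key=lambda b: fit_rmse(selected + [b]))
    selected.append(best)
```

    A production version would use cross-validated rather than in-sample error, but the greedy structure stays the same.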

  3. Analysis of the Advantages and Limitations of Stationary Imaging Fourier Transform Spectrometer. Revised

    NASA Technical Reports Server (NTRS)

    Beecken, Brian P.; Kleinman, Randall R.

    2004-01-01

    New developments in infrared sensor technology have potentially made possible a new space-based system that can measure far-infrared radiation at lower cost in mass, power and expense. The Stationary Imaging Fourier Transform Spectrometer (SIFTS), proposed by NASA Langley Research Center, makes use of new detector array technology. A mathematical model that simulates resolution and spectral range relationships has been developed for analyzing the utility of such a radically new approach to spectroscopy. Calculations with this forward model emulate the effects of a detector array on the ability to retrieve accurate spectral features. Initial computations indicate significant attenuation at high wavenumbers.

  4. Optical and electrical characterization of a back-thinned CMOS active pixel sensor

    NASA Astrophysics Data System (ADS)

    Blue, Andrew; Clark, A.; Houston, S.; Laing, A.; Maneuski, D.; Prydderch, M.; Turchetta, R.; O'Shea, V.

    2009-06-01

    This work reports the first characterization of a back-thinned Vanilla, a 512×512 active pixel sensor (APS) with 25 μm pixels. Characterization of the detectors was carried out through the analysis of photon transfer curves to yield measurements of full-well capacity, noise levels, gain constants and linearity. Spectral characterization of the sensors was also performed in the visible and UV regions. A full comparison against non-back-thinned, front-illuminated Vanilla sensors is included. These measurements suggest that the Vanilla APS will be suitable for a wide range of applications, including particle physics and biomedical imaging.

  5. Multisensor Analysis of Spectral Dimensionality and Soil Diversity in the Great Central Valley of California.

    PubMed

    Sousa, Daniel; Small, Christopher

    2018-02-14

    Planned hyperspectral satellite missions and the decreased revisit time of multispectral imaging offer the potential for data fusion to leverage both the spectral resolution of hyperspectral sensors and the temporal resolution of multispectral constellations. Hyperspectral imagery can also be used to better understand fundamental properties of multispectral data. In this analysis, we use five flight lines from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) archive with coincident Landsat 8 acquisitions over a spectrally diverse region of California to address the following questions: (1) How much of the spectral dimensionality of hyperspectral data is captured in multispectral data?; (2) Is the characteristic pyramidal structure of the multispectral feature space also present in the low order dimensions of the hyperspectral feature space at comparable spatial scales?; (3) How much variability in rock and soil substrate endmembers (EMs) present in hyperspectral data is captured by multispectral sensors? We find nearly identical partitions of variance, low-order feature space topologies, and EM spectra for hyperspectral and multispectral image composites. The resulting feature spaces and EMs are also very similar to those from previous global multispectral analyses, implying that the fundamental structure of the global feature space is present in our relatively small spatial subset of California. Finally, we find that the multispectral dataset well represents the substrate EM variability present in the study area - despite its inability to resolve narrow band absorptions. We observe a tentative but consistent physical relationship between the gradation of substrate reflectance in the feature space and the gradation of sand versus clay content in the soil classification system.
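    The "partition of variance" comparison between hyperspectral and multispectral feature spaces can be illustrated with a synthetic linear-mixing scene. The endmember count, band counts and noise level below are assumptions for the sketch, not AVIRIS or Landsat 8 values:

```python
import numpy as np

# Synthetic 3-endmember scene seen by a 200-band "hyperspectral" sensor
# and by a simulated 8-band "multispectral" sensor (band averaging).
rng = np.random.default_rng(1)
n_pix, n_bands = 1000, 200
endmembers = rng.random((3, n_bands))
abundances = rng.dirichlet(np.ones(3), size=n_pix)      # mixing fractions
hsi = abundances @ endmembers + 0.01 * rng.standard_normal((n_pix, n_bands))

# Broad-band sensor: average contiguous groups of 25 narrow bands
msi = hsi.reshape(n_pix, 8, n_bands // 8).mean(axis=2)

def explained_variance(X, k):
    # fraction of total variance captured by the first k principal components
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
    return float((s[:k] ** 2).sum() / (s ** 2).sum())

# Both feature spaces are dominated by the same low-order dimensions
ev_hsi = explained_variance(hsi, 3)
ev_msi = explained_variance(msi, 3)
```

    When the scene is driven by a few mixing endmembers, the low-order variance partition survives the collapse to broad bands, which is the intuition behind the near-identical partitions reported above.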

  6. Multisensor Analysis of Spectral Dimensionality and Soil Diversity in the Great Central Valley of California

    PubMed Central

    Small, Christopher

    2018-01-01

    Planned hyperspectral satellite missions and the decreased revisit time of multispectral imaging offer the potential for data fusion to leverage both the spectral resolution of hyperspectral sensors and the temporal resolution of multispectral constellations. Hyperspectral imagery can also be used to better understand fundamental properties of multispectral data. In this analysis, we use five flight lines from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) archive with coincident Landsat 8 acquisitions over a spectrally diverse region of California to address the following questions: (1) How much of the spectral dimensionality of hyperspectral data is captured in multispectral data?; (2) Is the characteristic pyramidal structure of the multispectral feature space also present in the low order dimensions of the hyperspectral feature space at comparable spatial scales?; (3) How much variability in rock and soil substrate endmembers (EMs) present in hyperspectral data is captured by multispectral sensors? We find nearly identical partitions of variance, low-order feature space topologies, and EM spectra for hyperspectral and multispectral image composites. The resulting feature spaces and EMs are also very similar to those from previous global multispectral analyses, implying that the fundamental structure of the global feature space is present in our relatively small spatial subset of California. Finally, we find that the multispectral dataset well represents the substrate EM variability present in the study area – despite its inability to resolve narrow band absorptions. We observe a tentative but consistent physical relationship between the gradation of substrate reflectance in the feature space and the gradation of sand versus clay content in the soil classification system. PMID:29443900

  7. Extraction of incident irradiance from LWIR hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Lahaie, Pierre

    2014-10-01

    The atmospheric correction of thermal hyperspectral imagery can be separated into two distinct processes: Atmospheric Compensation (AC) and Temperature and Emissivity Separation (TES). TES requires as input, at each pixel, the ground-leaving radiance and the atmospheric downwelling irradiance, which are the outputs of the AC process. Extracting the downwelling irradiance from imagery requires assumptions about the nature of some of the pixels, the sensor and the atmosphere. Another difficulty is that the sensor's spectral response is often not well characterized. To deal with this unknown, we defined a spectral mean operator that is used to filter the ground-leaving radiance and a downwelling irradiance computed from MODTRAN. A user selects a number of pixels in the image for which the emissivity is assumed to be known. The emissivity of these pixels is assumed to be spectrally smooth, so that the downwelling irradiance is the only spectrally fast-varying term. Using these assumptions, we built an algorithm to estimate the downwelling irradiance. The algorithm is applied to all the selected pixels, and the estimated irradiance is the average, over the spectral channels, of the resulting computations. The algorithm performs well in simulation; results are shown for errors in the assumed emissivity and in the atmospheric profiles. Sensor noise mainly influences the required number of pixels.

  8. Integration of airborne optical and thermal imagery for archaeological subsurface structures detection: the Arpi case study (Italy)

    NASA Astrophysics Data System (ADS)

    Bassani, C.; Cavalli, R. M.; Fasulli, L.; Palombo, A.; Pascucci, S.; Santini, F.; Pignatti, S.

    2009-04-01

    The application of remote sensing data for detecting subsurface structures is becoming a remarkable tool for archaeological observation, to be combined with near-surface geophysics [1, 2]. In fact, different satellite and airborne sensors have been used for archaeological applications, such as the identification of spectral anomalies (i.e. marks) related to buried remnants within archaeological sites, and the management and protection of archaeological sites [3, 5]. The dominant factors that affect the spectral detectability of marks related to man-made archaeological structures are: (1) the spectral contrast between the target and background materials; (2) the proportion of the target on the surface (relative to the background); (3) the characteristics of the imaging system being used (i.e. bands, instrument noise and pixel size); and (4) the conditions under which the surface is imaged (i.e. illumination and atmospheric conditions) [4]. In this context, only a few airborne hyperspectral sensors have been applied to cultural heritage studies, among them AVIRIS (Airborne Visible/Infrared Imaging Spectrometer), CASI (Compact Airborne Spectrographic Imager), HyMAP (Hyperspectral MAPping) and MIVIS (Multispectral Infrared and Visible Imaging Spectrometer). The application of high spatial/spectral resolution imagery therefore raises the questions of what the trade-off is between high spectral and high spatial resolution for archaeological applications, and which spectral region is optimal for the detection of subsurface structures. This paper points out the spectral information most useful for evaluating image capability in terms of spectral anomaly detection of subsurface archaeological structures in different land cover contexts.
    In this study, we assess the capability of MIVIS and CASI reflectances and of ATM and MIVIS emissivities (Table 1) for subsurface archaeological prospection in different sites of the Arpi archaeological area (southern Italy). We identify, for the selected sites, three main land cover types overlying the buried structures: (a) photosynthetic vegetation (i.e. green low vegetation), (b) non-photosynthetic vegetation (i.e. yellow, dry low vegetation), and (c) dry bare soil. We then analyse the spectral regions showing an inherent potential for archaeological detection as a function of the land cover characteristics. The classified land cover units were used in a spectral mixture analysis to assess the fractional abundance of the land cover overlying the buried structures (i.e. the mark-background system). The classification and unmixing results of the CASI, MIVIS and ATM data processing showed good agreement both in the land cover units and in the identification of subsurface structures. The integrated analysis of the unmixing results for the three sensors allowed us to establish that for land cover characterized by green and dry vegetation (occurrence higher than 75%), the visible and near-infrared (VNIR) spectral regions better enhance the buried man-made structures. In particular, if the structures are covered by more than 75% vegetation, the two most promising wavelengths for their detection are the chlorophyll peak at 0.56 μm (visible region) and the red edge region (0.67 to 0.72 μm; NIR region). This result confirms that the variation induced by the subsurface structures (e.g., stone walls, tile concentrations, pavements near the surface, road networks) in the natural vegetation growth and/or colour (i.e., through different stress factors) is primarily detectable by the chlorophyll peak and the red edge region used for vegetation stress detection.
    If, instead, dry soils cover the structures (occurrence higher than 75%), both the VNIR and thermal infrared (TIR) regions are suitable for detecting the subsurface structures. This work demonstrates that airborne reflectance and emissivity data, even at different spatial/spectral resolutions and acquisition times, represent an effective and rapid tool to detect subsurface structures in different land cover contexts. In conclusion, this study reveals that airborne multi/hyperspectral image processing can be an effective and cost-efficient tool for a preliminary analysis of areas with large cultural heritage assets, prioritising and localising the sites where near-surface geophysics surveys should be applied.

    Table 1. Characteristics of airborne sensors used for the Arpi test area.

    Sensor  Spectral region (channels)   Spectral resolution (μm)  Spectral range (μm)               Spatial resolution (m)  IFOV (deg)
    ATM     VIS-NIR, SWIR-TIR (12 ch.)   variable from 24 to 3100  0.42 - 1150                       2                       0.143
    CASI    VNIR (48 ch.)                0.01                      0.40-0.94                         2                       0.115
    MIVIS   VNIR (28 ch.)                0.02 (VIS), 0.05 (NIR)    0.43-0.83 (VIS), 1.15-1.55 (NIR)  6-7                     0.115
            SWIR (64 ch.)                0.09                      1.983-2.478
            TIR (10 ch.)                 0.34-0.54                 8.180-12.700

    References
    [1] Beck, A., Philip, G., Abdulkarim, M. and Donoghue, D., 2007. Evaluation of Corona and Ikonos high resolution satellite imagery for archaeological prospection in western Syria. Antiquity, 81: 161-175.
    [2] Altaweel, M., 2005. The Use of ASTER Satellite Imagery in Archaeological Contexts. Archaeological Prospection, 12: 151-166.
    [3] Cavalli, R.M.; Colosi, F.; Palombo, A.; Pignatti, S.; Poscolieri, M., 2007. Remote hyperspectral imagery as a support to archaeological prospection. Journal of Cultural Heritage, 8: 272-283.
    [4] Kucukkaya, A.G., 2004. Photogrammetry and remote sensing in archaeology. Journal of Quantitative Spectroscopy and Radiative Transfer, 97(1-3): 83-97.
    [5] Rowlands, A.; Sarris, A., 2007. Detection of exposed and subsurface archaeological remains using multi-sensor remote sensing. Journal of Archaeological Science, 34: 795-803.

  9. Extended SWIR imaging sensors for hyperspectral imaging applications

    NASA Astrophysics Data System (ADS)

    Weber, A.; Benecke, M.; Wendler, J.; Sieck, A.; Hübner, D.; Figgemeier, H.; Breiter, R.

    2016-05-01

    AIM has developed SWIR modules, including FPAs based on liquid phase epitaxy (LPE) grown MCT, usable in a wide range of hyperspectral imaging applications. Silicon read-out integrated circuits (ROICs) provide various integration and readout modes, including specific functions for spectral imaging applications. An important advantage of MCT-based detectors is the tunable band gap: the spectral sensitivity of MCT detectors can be engineered to cover the extended SWIR spectral region up to 2.5 μm without compromising performance. AIM has also developed the technology to extend the spectral sensitivity of its SWIR modules into the visible. This has been successfully demonstrated for 384x288 and 1024x256 FPAs with 24 μm pitch; results are presented in this paper. The FPAs are integrated into compact dewar cooler configurations using different types of coolers, such as rotary coolers, AIM's long-life split linear cooler MCC030 or the extreme-long-life SF100 pulse tube cooler. The SWIR modules include command and control electronics (CCE) which allow easy interfacing through a standard digital interface. The development status and performance results of AIM's latest MCT SWIR modules suitable for hyperspectral systems and applications are presented.
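    The "tunable band gap" of Hg(1-x)Cd(x)Te can be illustrated with the widely cited Hansen empirical band-gap relation. The coefficients below are quoted from the literature from memory and should be treated as illustrative; the operating temperature and the composition scan are assumptions, not AIM design data:

```python
# Cutoff wavelength of Hg(1-x)Cd(x)Te vs. composition x, using the
# Hansen empirical band-gap relation (coefficients are illustrative).

def mct_bandgap_ev(x, temp_k=200.0):
    # Empirical band gap in eV as a function of Cd fraction x and temperature
    return (-0.302 + 1.93 * x - 0.81 * x**2 + 0.832 * x**3
            + 5.35e-4 * temp_k * (1.0 - 2.0 * x))

def cutoff_um(x, temp_k=200.0):
    # Cutoff wavelength (um) from the band gap: lambda_c = hc / Eg
    return 1.2398 / mct_bandgap_ev(x, temp_k)

# Scan compositions to find one whose cutoff lands near 2.5 um (eSWIR)
best_x = min((x / 1000.0 for x in range(300, 700)),
             key=lambda x: abs(cutoff_um(x) - 2.5))
```

    Raising the Cd fraction widens the gap and pulls the cutoff down toward the SWIR, which is the design freedom the abstract refers to.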

  10. System-level analysis and design for RGB-NIR CMOS camera

    NASA Astrophysics Data System (ADS)

    Geelen, Bert; Spooren, Nick; Tack, Klaas; Lambrechts, Andy; Jayapala, Murali

    2017-02-01

    This paper presents a system-level analysis of a sensor capable of simultaneously acquiring both standard absorption-based RGB color channels (400-700 nm, 75 nm FWHM) and an additional NIR channel (central wavelength: 808 nm, FWHM: 30 nm, collimated light). Parallel acquisition of RGB and NIR information on the same CMOS image sensor is enabled by monolithic pixel-level integration of both a NIR-pass thin film filter and NIR-blocking filters for the RGB channels. This overcomes the need for a camera-level NIR blocking filter to remove the NIR leakage present in standard RGB absorption filters from 700-1000 nm; such a camera-level filter would inhibit acquisition of the NIR channel on the same sensor. Thin film filters do not operate in isolation: their performance is influenced by the system context in which they operate. The spectral distribution of light arriving at the photodiode is shaped by, among other factors, the illumination spectral profile, the transmission characteristics of the optical components and the sensor quantum efficiency. For example, knowledge of a low quantum efficiency (QE) of the CMOS image sensor above 800 nm may reduce the filter's blocking requirements and simplify the filter structure. Similarly, knowledge of the incoming light angularity, as set by the objective lens' F/# and exit pupil location, may be taken into account during the thin film's optimization. This paper demonstrates how knowledge of the application context can facilitate filter design and relax design trade-offs, and presents experimental results.
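    The system-level argument that a low sensor QE in the NIR relaxes a filter's blocking requirement can be sketched numerically. All spectral curves below (illumination, filter passband, leakage level, QE roll-off) are hypothetical placeholders, not the paper's measured data:

```python
import numpy as np

# Signal at the photodiode = illumination x filter transmission x QE,
# integrated over wavelength (10 nm bins, flat illumination).
wl = np.arange(400.0, 1001.0, 10.0)                     # wavelength grid, nm

red_passband = np.exp(-(((wl - 620.0) / 40.0) ** 2))    # idealized red filter
nir_leak = 0.2 * (wl > 700.0)                           # dye-filter NIR leakage
red_filter = red_passband + nir_leak

qe_flat = np.full_like(wl, 0.5)                                # reference QE
qe_falling = np.clip(1.0 - (wl - 400.0) / 600.0, 0.05, None)   # QE drops in NIR

def signal(filter_t, qe):
    return float(np.sum(qe * filter_t) * 10.0)          # 10 nm bin width

def leak_fraction(qe):
    # fraction of the channel signal caused by out-of-band leakage
    total = signal(red_filter, qe)
    clean = signal(red_passband, qe)
    return (total - clean) / total

lf_flat = leak_fraction(qe_flat)
lf_falling = leak_fraction(qe_falling)
```

    With the falling QE, part of the NIR leakage is suppressed by the sensor itself, so the thin-film stack needs less optical density to hit the same contamination budget.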

  11. Three-dimensional estimates of tree canopies: Scaling from high-resolution UAV data to satellite observations

    NASA Astrophysics Data System (ADS)

    Sankey, T.; Donald, J.; McVay, J.

    2015-12-01

    High-resolution remote sensing images and datasets are typically acquired at large cost, which poses a big challenge for many scientists. Northern Arizona University recently acquired a custom-engineered, cutting-edge UAV, and we can now generate our own images with the instrument. The UAV has a unique capability to carry a large payload, including a hyperspectral sensor, which images the Earth surface in over 350 spectral bands at 5 cm resolution, and a lidar scanner, which images the land surface and vegetation in three dimensions. Both sensors represent the newest available technology, with very high resolution, precision, and accuracy. Using the UAV sensors, we are monitoring the effects of regional forest restoration treatment efforts. Individual tree canopy width and height are measured in the field and via the UAV sensors. The high-resolution UAV images are then used to segment individual tree canopies and to derive 3-dimensional estimates. The UAV image-derived variables are then correlated with the field-based measurements and scaled to satellite-derived tree canopy measurements. The relationships between the field-based and UAV-derived estimates are then extrapolated to a larger area to scale the tree canopy dimensions and to estimate tree density within restored and control forest sites.

  12. The Importance of Post-Launch, On-Orbit Absolute Radiometric Calibration for Remote Sensing Applications

    NASA Astrophysics Data System (ADS)

    Kuester, M. A.

    2015-12-01

    Remote sensing is a powerful tool for monitoring changes on the surface of the Earth at a local or global scale. The use of data sets from different sensors across many platforms, or even a single sensor over time, can bring a wealth of information when exploring anthropogenic changes to the environment. For example, variations in crop yield and health for a specific region can be detected by observing changes in the spectral signature of the particular species under study. However, changes in the atmosphere, sun illumination and viewing geometries during image capture can result in inconsistent image data, hindering automated information extraction. Additionally, an incorrect spectral radiometric calibration will lead to false or misleading results. It is therefore critical that the data being used are normalized and calibrated on a regular basis to ensure that physically derived variables are as close to truth as possible. Although most earth observing sensors are well calibrated in a laboratory prior to launch, a change in the radiometric response of the system is inevitable due to thermal, mechanical or electrical effects caused during the rigors of launch or by the space environment itself. Outgassing and exposure to ultraviolet radiation will also affect the sensor's filter responses. Pre-launch lamps and other laboratory calibration systems can also fall short of representing the actual output of the Sun. Differences in the results of some example cases (e.g. geology, agriculture) derived for science variables using pre- and post-launch calibration will be presented using DigitalGlobe's WorldView-3 superspectral sensor, with bands in the visible and near infrared as well as in the shortwave infrared. Important defects caused by an incomplete (i.e. pre-launch only) calibration will be discussed using validation data where available. 
In addition, the benefits of using a well-validated surface reflectance product will be presented. DigitalGlobe is committed to providing ongoing assessment of the radiometric performance of our sensors, which allows customers to get the most out of our extensive multi-sensor constellation.

  13. Remote sensing of the diffuse attenuation coefficient of ocean water. [coastal zone color scanner

    NASA Technical Reports Server (NTRS)

    Austin, R. W.

    1981-01-01

    A technique was devised which uses remotely sensed spectral radiances from the sea to assess the optical diffuse attenuation coefficient, K (lambda), of near-surface ocean water. With spectral image data from a sensor such as the coastal zone color scanner (CZCS) carried on NIMBUS-7, it is possible to rapidly compute the K (lambda) fields for large ocean areas and obtain K "images" which show the synoptic, spatial distribution of this attenuation coefficient. The technique utilizes a relationship that has been determined between the value of K and the ratio of the upwelling radiances leaving the sea surface at two wavelengths. The relationship was developed to provide an algorithm for inferring K from the radiance images obtained by the CZCS; thus the wavelengths were selected from those used by this sensor, viz., 443, 520, 550 and 670 nm. The majority of the radiance arriving at the spacecraft is the result of scattering in the atmosphere and is unrelated to the radiance signal generated by the water. A necessary step in processing the data received by the sensor is, therefore, the effective removal of these atmospheric path radiance signals before the K algorithm is applied. Examples of the efficacy of these removal techniques are given, together with examples of the spatial distributions of K in several ocean areas.
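    A band-ratio estimator of this kind can be sketched as follows. The power-law form mirrors the ratio-based approach described above, but the coefficients are placeholders, not the published CZCS algorithm values:

```python
# Illustrative band-ratio estimator for the diffuse attenuation
# coefficient K(490) from CZCS-style water-leaving radiances.

def k490_from_ratio(lw_443, lw_550, k_water=0.022, a=0.09, b=-1.5):
    """K(490) in 1/m from the blue/green radiance ratio.

    k_water, a and b are hypothetical coefficients standing in for the
    empirically fitted values; k_water is the clear-water floor.
    """
    return k_water + a * (lw_443 / lw_550) ** b

# Clear water returns a strong blue signal -> high ratio -> low K;
# turbid water absorbs the blue -> low ratio -> high K
k_clear = k490_from_ratio(2.0, 1.0)
k_turbid = k490_from_ratio(0.5, 1.0)
```

    As the abstract notes, the ratio must be formed from atmospherically corrected water-leaving radiances; applying it to top-of-atmosphere values would be dominated by path radiance.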

  14. Evaluation of multi-resolution satellite sensors for assessing water quality and bottom depth of Lake Garda.

    PubMed

    Giardino, Claudia; Bresciani, Mariano; Cazzaniga, Ilaria; Schenk, Karin; Rieger, Patrizia; Braga, Federica; Matta, Erica; Brando, Vittorio E

    2014-12-15

    In this study we evaluate the capabilities of three satellite sensors for assessing water composition and bottom depth in Lake Garda, Italy. A consistent physics-based processing chain was applied to Moderate Resolution Imaging Spectroradiometer (MODIS), Landsat-8 Operational Land Imager (OLI) and RapidEye. Images gathered on 10 June 2014 were corrected for the atmospheric effects with the 6SV code. The computed remote sensing reflectance (Rrs) from MODIS and OLI were converted into water quality parameters by adopting a spectral inversion procedure based on a bio-optical model calibrated with optical properties of the lake. The same spectral inversion procedure was applied to RapidEye and to OLI data to map bottom depth. In situ measurements of Rrs and of concentrations of water quality parameters collected in five locations were used to evaluate the models. The bottom depth maps from OLI and RapidEye showed similar gradients up to 7 m (r = 0.72). The results indicate that: (1) the spatial and radiometric resolutions of OLI enabled mapping water constituents and bottom properties; (2) MODIS was appropriate for assessing water quality in the pelagic areas at a coarser spatial resolution; and (3) RapidEye had the capability to retrieve bottom depth at high spatial resolution. Future work should evaluate the performance of the three sensors in different bio-optical conditions.

  15. Multi-mode Observations of Cloud-to-Ground Lightning Strokes

    NASA Astrophysics Data System (ADS)

    Smith, M. W.; Smith, B. J.; Clemenson, M. D.; Zollweg, J. D.

    2015-12-01

    We present hyper-temporal and hyper-spectral data collected using a suite of three Phantom high-speed cameras configured to observe cloud-to-ground lightning strokes. The first camera functioned as a contextual imager to show the location and structure of the strokes. The other two cameras were operated as slit-less spectrometers, with resolutions of 0.2 to 1.0 nm. The imaging camera was operated at a readout rate of 48,000 frames per second and provided an image-based trigger mechanism for the spectrometers. Each spectrometer operated at a readout rate of 400,000 frames per second. The sensors were deployed on the southern edge of Albuquerque, New Mexico and collected data over a 4 week period during the thunderstorm season in the summer of 2015. Strikes observed by the sensor suite were correlated to specific strikes recorded by the National Lightning Data Network (NLDN) and thereby geo-located. Sensor calibration factors, distance to each strike, and calculated values of atmospheric transmission were used to estimate absolute radiometric intensities for the spectral-temporal data. The data that we present show the intensity and time evolution of broadband and line emission features for both leader and return strokes. We highlight several key features and overall statistics of the observations. A companion poster describes a lightning model that is being developed at Sandia National Laboratories.

  16. Object based image analysis for the classification of the growth stages of Avocado crop, in Michoacán State, Mexico

    NASA Astrophysics Data System (ADS)

    Gao, Yan; Marpu, Prashanth; Morales Manila, Luis M.

    2014-11-01

    This paper assesses the suitability of 8-band WorldView-2 (WV2) satellite data and an object-based random forest algorithm for the classification of avocado growth stages in Mexico. We tested both pixel-based methods, minimum distance (MD) and maximum likelihood (MLC), and an object-based method with the Random Forest (RF) algorithm for this task. Training samples and verification data were selected by visually interpreting the WV2 images for seven thematic classes: fully grown, middle-stage, and early-stage avocado crops, bare land, two types of natural forest, and water bodies. To examine the contribution of the four new spectral bands of the WV2 sensor, all tested classifications were carried out both with and without the four new bands. Classification accuracy assessment shows that object-based classification with the RF algorithm obtained higher overall accuracy (93.06%) than the pixel-based MD (69.37%) and MLC (64.03%) methods. For both pixel-based and object-based methods, the classifications with the four new spectral bands obtained higher accuracy than those without (object-based RF: 93.06% vs. 83.59%; pixel-based MD: 69.37% vs. 67.2%; pixel-based MLC: 64.03% vs. 36.05%), suggesting that the four new spectral bands of the WV2 sensor contributed to the increase in classification accuracy.
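    The effect of extra discriminating bands on a pixel-based minimum-distance classifier can be reproduced on synthetic data. The class layout, separations and noise level below are illustrative assumptions, not WorldView-2 statistics:

```python
import numpy as np

# Toy minimum-distance (MD) classification on synthetic 8-band spectra,
# comparing accuracy with and without four extra bands.
rng = np.random.default_rng(2)
n_per_class, n_bands = 200, 8
centers = np.zeros((3, n_bands))
# class separation lives entirely in the last four ("new") bands
centers[1, 4:] = [2.0, 0.0, 2.0, 0.0]
centers[2, 4:] = [0.0, 2.0, 0.0, 2.0]

X = np.vstack([c + rng.standard_normal((n_per_class, n_bands)) for c in centers])
y = np.repeat(np.arange(3), n_per_class)

def md_classify(pixels, class_means):
    # assign each pixel to the nearest class mean (Euclidean distance)
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return d.argmin(axis=1)

acc_all = float((md_classify(X, centers) == y).mean())
acc_first4 = float((md_classify(X[:, :4], centers[:, :4]) == y).mean())
```

    With the discriminating information concentrated in the extra bands, dropping them collapses the classifier to chance, an exaggerated version of the accuracy gap reported above.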

  17. Compact characterization of liquid absorption and emission spectra using linear variable filters integrated with a CMOS imaging camera

    PubMed Central

    Wan, Yuhang; Carlson, John A.; Kesler, Benjamin A.; Peng, Wang; Su, Patrick; Al-Mulla, Saoud A.; Lim, Sung Jun; Smith, Andrew M.; Dallesasse, John M.; Cunningham, Brian T.

    2016-01-01

    A compact analysis platform for detecting liquid absorption and emission spectra using a set of optical linear variable filters atop a CMOS image sensor is presented. The working spectral range of the analysis platform can be extended without a reduction in spectral resolution by utilizing multiple linear variable filters with different wavelength ranges on the same CMOS sensor. With optical setup reconfiguration, its capability to measure both absorption and fluorescence emission is demonstrated. Quantitative detection of fluorescence emission down to 0.28 nM for quantum dot dispersions and 32 ng/mL for near-infrared dyes has been demonstrated on a single platform over a wide spectral range, as well as an absorption-based water quality test, showing the versatility of the system across liquid solutions for different emission and absorption bands. Comparison with a commercially available portable spectrometer and an optical spectrum analyzer shows our system has an improved signal-to-noise ratio and acceptable spectral resolution for discrimination of emission spectra, and characterization of the absorption characteristics of colored liquids generated by common biomolecular assays. This simple, compact, and versatile analysis platform demonstrates a path towards an integrated optical device that can be utilized for a wide variety of applications in point-of-use testing and point-of-care diagnostics. PMID:27389070
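    The range-extension argument can be sketched with a simple pixel-to-wavelength model for linear variable filters (LVFs). The column count and wavelength ranges below are illustrative assumptions, not the platform's actual geometry:

```python
import numpy as np

# Why stacking two LVFs on one sensor extends spectral range without
# coarsening the per-pixel wavelength sampling.
n_cols = 1000

def lvf_centers(start_nm, stop_nm, cols):
    # LVF passband center varies linearly with position along the die
    return np.linspace(start_nm, stop_nm, cols)

# Option A: one filter spanning 400-1000 nm across all columns
step_single = (1000.0 - 400.0) / (n_cols - 1)

# Option B: two filters, each spanning 300 nm across half the columns
filt_a = lvf_centers(400.0, 700.0, n_cols // 2)
filt_b = lvf_centers(700.0, 1000.0, n_cols // 2)
step_stacked = float(filt_a[1] - filt_a[0])
```

    The stacked configuration keeps essentially the same nm-per-pixel sampling while each filter only has to cover half the range, which is the trade the abstract describes.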

  18. Compact characterization of liquid absorption and emission spectra using linear variable filters integrated with a CMOS imaging camera

    NASA Astrophysics Data System (ADS)

    Wan, Yuhang; Carlson, John A.; Kesler, Benjamin A.; Peng, Wang; Su, Patrick; Al-Mulla, Saoud A.; Lim, Sung Jun; Smith, Andrew M.; Dallesasse, John M.; Cunningham, Brian T.

    2016-07-01

    A compact analysis platform for detecting liquid absorption and emission spectra using a set of optical linear variable filters atop a CMOS image sensor is presented. The working spectral range of the analysis platform can be extended without a reduction in spectral resolution by utilizing multiple linear variable filters with different wavelength ranges on the same CMOS sensor. With optical setup reconfiguration, its capability to measure both absorption and fluorescence emission is demonstrated. Quantitative detection of fluorescence emission down to 0.28 nM for quantum dot dispersions and 32 ng/mL for near-infrared dyes has been demonstrated on a single platform over a wide spectral range, as well as an absorption-based water quality test, showing the versatility of the system across liquid solutions for different emission and absorption bands. Comparison with a commercially available portable spectrometer and an optical spectrum analyzer shows our system has an improved signal-to-noise ratio and acceptable spectral resolution for discrimination of emission spectra, and characterization of the absorption characteristics of colored liquids generated by common biomolecular assays. This simple, compact, and versatile analysis platform demonstrates a path towards an integrated optical device that can be utilized for a wide variety of applications in point-of-use testing and point-of-care diagnostics.

  19. A data base of ASAS digital imagery. [Advanced Solid-state Array Spectroradiometer

    NASA Technical Reports Server (NTRS)

    Irons, James R.; Meeson, Blanche W.; Dabney, Philip W.; Kovalick, William M.; Graham, David W.; Hahn, Daniel S.

    1992-01-01

The Advanced Solid-State Array Spectroradiometer (ASAS) is an airborne, off-nadir tilting, imaging spectroradiometer that acquires digital image data in 29 spectral bands in the visible and near-infrared. The sensor is used principally for studies of the bidirectional distribution of solar radiation scattered by terrestrial surfaces. ASAS has acquired data for a number of terrestrial ecosystem field experiments, and investigators have received over 170 radiometrically corrected, multiangle, digital image data sets. A database of ASAS digital imagery has been established in the Pilot Land Data System (PLDS) at the NASA Goddard Space Flight Center to provide access to these data by the scientific community. ASAS, its processed data, and the PLDS are described, together with recent improvements to the sensor system.

  20. Multispectral Terrain Background Simulation Techniques For Use In Airborne Sensor Evaluation

    NASA Astrophysics Data System (ADS)

    Weinberg, Michael; Wohlers, Ronald; Conant, John; Powers, Edward

    1988-08-01

A background simulation code developed at Aerodyne Research, Inc., called AERIE, is designed to reflect the major sources of clutter that concern the staring and scanning sensors being considered for airborne threat-warning applications (against both aircraft and missiles). The code is a first-principles model that can produce a consistent image of the terrain for various spectral bands, i.e., provide the proper scene correlation both spectrally and spatially. The code uses both topographic and cultural features to model terrain, typically from DMA data, with a statistical overlay of the critical underlying surface properties (reflectance, emittance, and thermal factors) to simulate the resulting texture in the scene. Strong solar scattering from water surfaces is included, with allowance for wind-driven surface roughness. Clouds can be superimposed on the scene using physical cloud models and an analytical representation of the reflectivity obtained from scattering off spherical particles. The scene generator is augmented by collateral codes that allow the generation of images at finer resolution; these codes interpolate the basic DMA databases using fractal procedures that preserve the high-frequency power spectral density behavior of the original scene. Scenes are presented illustrating variations in altitude, radiance, resolution, material, thermal factors, and emissivities. The basic models used to simulate the various scene components are described, along with the "engineering level" approximations incorporated to reduce the computational complexity of the simulation.
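The fractal interpolation mentioned above can be illustrated with one-dimensional midpoint displacement, which refines a coarse elevation profile while preserving a power-law high-frequency spectrum. This is a generic sketch of the technique, not AERIE's actual code; the roughness parameter and sample elevations are illustrative:

```python
import numpy as np

def midpoint_displace(profile, levels, roughness=0.5, rng=None):
    """Refine a coarse terrain profile by fractal midpoint displacement.
    Each level halves the sample spacing and perturbs the new midpoints
    with Gaussian noise whose amplitude shrinks by 2**(-roughness) per
    level, which yields power-law high-frequency content."""
    rng = np.random.default_rng(rng)
    p = np.asarray(profile, dtype=float)
    amp = np.std(p) if p.size > 1 else 1.0
    for _ in range(levels):
        mids = 0.5 * (p[:-1] + p[1:])          # linear interpolation ...
        amp *= 2.0 ** (-roughness)
        mids += rng.normal(0.0, amp, mids.shape)  # ... plus scaled perturbation
        out = np.empty(p.size + mids.size)
        out[0::2] = p                           # original samples are preserved
        out[1::2] = mids
        p = out
    return p

coarse = np.array([10.0, 40.0, 25.0, 30.0])     # e.g. coarse elevation posts
fine = midpoint_displace(coarse, levels=3, rng=0)
```

Three refinement levels turn 4 posts into 25 samples while leaving the original posts untouched, which is the behavior needed when interpolating a fixed database like DMA terrain.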

  1. The Characterization of a DIRSIG Simulation Environment to Support the Inter-Calibration of Spaceborne Sensors

    NASA Technical Reports Server (NTRS)

    Ambeau, Brittany L.; Gerace, Aaron D.; Montanaro, Matthew; McCorkel, Joel

    2016-01-01

Climate change studies require long-term, continuous records that extend beyond the lifetime, and the temporal resolution, of a single remote sensing satellite sensor. The inter-calibration of spaceborne sensors is therefore desired to provide spatially, spectrally, and temporally homogeneous datasets. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool is a first-principles-based synthetic image generation model that has the potential to characterize the parameters that impact the accuracy of the inter-calibration of spaceborne sensors. To demonstrate the potential utility of the model, we compare the radiance observed in real image data to the radiance observed in simulated imagery from DIRSIG. In the present work, a synthetic landscape of the Algodones Sand Dunes System is created. The terrain is facetized using a 2-meter digital elevation model generated from NASA Goddard's LiDAR, Hyperspectral, and Thermal (G-LiHT) imager. The material spectra are assigned using hyperspectral measurements of sand collected from the Algodones Sand Dunes System. Lastly, the bidirectional reflectance distribution function (BRDF) properties are assigned to the modeled terrain using the Moderate Resolution Imaging Spectroradiometer (MODIS) BRDF product in conjunction with DIRSIG's Ross-Li capability. The results of this work indicate that DIRSIG is in good agreement with real image data. The potential sources of residual error are identified and the possibilities for future work are discussed.

  2. PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.

    PubMed

    Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David

    2009-04-01

Single-sensor digital color cameras use a process called color demosaicking to produce full color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during image acquisition. The conventional solution to combating CFA sensor noise is demosaicking first, followed by a separate denoising process. This strategy generates many noise-caused color artifacts in the demosaicking step, which are hard to remove in the subsequent denoising step. Few denoising schemes that work directly on CFA images have been presented because of the difficulties arising from the red, green and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can offer advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.
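The idea of PCA-based denoising in the raw sensor domain can be sketched as follows: gather image patches, estimate their principal components, and reconstruct each patch from only the leading components so that low-variance directions (mostly noise) are discarded. This toy version uses non-overlapping patches and a single global basis, rather than the paper's spatially adaptive, window-based algorithm:

```python
import numpy as np

def pca_denoise_patches(img, patch=4, keep=4):
    """Toy PCA denoising of a single-channel image: gather non-overlapping
    patches as vectors, project them onto the top `keep` principal
    components of the patch ensemble, and reconstruct."""
    h, w = img.shape
    h -= h % patch
    w -= w % patch
    blocks = (img[:h, :w]
              .reshape(h // patch, patch, w // patch, patch)
              .swapaxes(1, 2)
              .reshape(-1, patch * patch))
    mean = blocks.mean(axis=0)
    centered = blocks - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:keep]                        # top-k principal directions
    denoised = centered @ basis.T @ basis + mean
    return (denoised.reshape(h // patch, w // patch, patch, patch)
            .swapaxes(1, 2)
            .reshape(h, w))

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 16), (16, 1))   # smooth synthetic scene
noisy = clean + 0.1 * rng.normal(size=clean.shape)
out = pca_denoise_patches(noisy, patch=4, keep=2)
```

Because the smooth scene concentrates in a few principal directions while white noise spreads evenly across all of them, truncating the basis removes most of the noise energy.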

  3. The characterization of a DIRSIG simulation environment to support the inter-calibration of spaceborne sensors

    NASA Astrophysics Data System (ADS)

    Ambeau, Brittany L.; Gerace, Aaron D.; Montanaro, Matthew; McCorkel, Joel

    2016-09-01

Climate change studies require long-term, continuous records that extend beyond the lifetime, and the temporal resolution, of a single remote sensing satellite sensor. The inter-calibration of spaceborne sensors is therefore desired to provide spatially, spectrally, and temporally homogeneous datasets. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) tool is a first-principles-based synthetic image generation model that has the potential to characterize the parameters that impact the accuracy of the inter-calibration of spaceborne sensors. To demonstrate the potential utility of the model, we compare the radiance observed in real image data to the radiance observed in simulated imagery from DIRSIG. In the present work, a synthetic landscape of the Algodones Sand Dunes System is created. The terrain is facetized using a 2-meter digital elevation model generated from NASA Goddard's LiDAR, Hyperspectral, and Thermal (G-LiHT) imager. The material spectra are assigned using hyperspectral measurements of sand collected from the Algodones Sand Dunes System. Lastly, the bidirectional reflectance distribution function (BRDF) properties are assigned to the modeled terrain using the Moderate Resolution Imaging Spectroradiometer (MODIS) BRDF product in conjunction with DIRSIG's Ross-Li capability. The results of this work indicate that DIRSIG is in good agreement with real image data. The potential sources of residual error are identified and the possibilities for future work are discussed.

  4. Ultraviolet Imaging with Low Cost Smartphone Sensors: Development and Application of a Raspberry Pi-Based UV Camera

    PubMed Central

    Wilkes, Thomas C.; McGonigle, Andrew J. S.; Pering, Tom D.; Taggart, Angus J.; White, Benjamin S.; Bryant, Robert G.; Willmott, Jon R.

    2016-01-01

    Here, we report, for what we believe to be the first time, on the modification of a low cost sensor, designed for the smartphone camera market, to develop an ultraviolet (UV) camera system. This was achieved via adaptation of Raspberry Pi cameras, which are based on back-illuminated complementary metal-oxide semiconductor (CMOS) sensors, and we demonstrated the utility of these devices for applications at wavelengths as low as 310 nm, by remotely sensing power station smokestack emissions in this spectral region. Given the very low cost of these units, ≈ USD 25, they are suitable for widespread proliferation in a variety of UV imaging applications, e.g., in atmospheric science, volcanology, forensics and surface smoothness measurements. PMID:27782054

  5. Spectral-spatial hyperspectral image classification using super-pixel-based spatial pyramid representation

    NASA Astrophysics Data System (ADS)

    Fan, Jiayuan; Tan, Hui Li; Toomik, Maria; Lu, Shijian

    2016-10-01

Spatial pyramid matching has demonstrated its power for image recognition tasks by pooling features from spatially increasingly fine sub-regions. Motivated by the concept of feature pooling at multiple pyramid levels, we propose a novel spectral-spatial hyperspectral image classification approach using superpixel-based spatial pyramid representation. The technique first generates multiple superpixel maps by gradually decreasing the superpixel number along with the increased spatial regions for labelled samples. Using every superpixel map, a sparse representation of the pixels within every spatial region is then computed through local max pooling. Finally, features learned from training samples are aggregated and trained with a support vector machine (SVM) classifier. The proposed spectral-spatial hyperspectral image classification technique has been evaluated on two public hyperspectral datasets: the Indian Pines image, containing 16 agricultural scene categories at 20 m resolution acquired by AVIRIS, and the University of Pavia image, containing 9 land-use categories at 1.3 m spatial resolution acquired by the ROSIS-03 sensor. Experimental results show significantly improved performance compared with state-of-the-art works. The major contributions of this technique are (1) a new spectral-spatial classification approach to generate feature representations for hyperspectral images, (2) a complementary yet effective feature pooling approach, i.e. the superpixel-based spatial pyramid representation used for the spatial correlation study, and (3) evaluation on two public hyperspectral image datasets with superior classification performance.

  6. Application of Hymap image in the environmental survey in Shenzhen, China

    NASA Astrophysics Data System (ADS)

    Pan, Wei; Yang, Xiaomao; Chen, Xuejiao; Feng, Ping

    2017-10-01

Hyperspectral HyMap imagery with synchronous in-situ spectral data was used to survey environmental conditions in Shenzhen, South China. The HyMap image was acquired with 3.5 m spatial resolution and 15 nm spectral resolution from 0.44 μm to 2.5 μm and corrected to ground reflectance with the MODTRAN5 model using synchronous solar illuminance and atmospheric visibility measurements. The reflectance spectra of rocks, soils, water and vegetation were obtained with an ASD spectrometer. Both fresh granite and eroded sandy soil show an absorption feature near 2200 nm in the in-situ spectra, while weathered granite and sandy soil show an additional absorption at 880-940 nm. Polluted water with high ammonia nitrogen, phosphorus and BOD5 has its strongest reflectance at 550-570 nm, while polluted water with high CODcr and heavy-metal-ion content peaks at 450-490 nm. The in-situ spectra were resampled to the wavelength range and spectral resolution of the HyMap sensor for image classification with the SAM algorithm; unpaved granite among the cement-paved mine pits, newly excavated land surfaces and eroded soil were mapped with an accuracy over 95%. We also discriminated the artificial forest from the natural forest using spectral endmembers extracted from the image.
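The SAM (spectral angle mapper) classification used here assigns each pixel to the reference spectrum with the smallest spectral angle, which makes it insensitive to overall brightness scaling. A minimal sketch with illustrative three-band spectra (the reference values are hypothetical, not HyMap data):

```python
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two spectra; a small angle means similar shape."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def sam_classify(cube, references):
    """Assign each pixel of an (rows, cols, bands) cube to the reference
    spectrum with the smallest spectral angle."""
    h, w, nb = cube.shape
    pix = cube.reshape(-1, nb)
    angles = np.array([[spectral_angle(p, r) for r in references] for p in pix])
    return angles.argmin(axis=1).reshape(h, w)

# Hypothetical three-band endmembers (values illustrative).
refs = np.array([[1.0, 0.2, 0.1],     # "granite"-like reference
                 [0.1, 0.2, 1.0]])    # "vegetation"-like reference
# Brightness-scaled pixels: SAM ignores the overall scale factor.
cube = np.array([[2.0 * refs[0], 0.5 * refs[1]]])
labels = sam_classify(cube, refs)
```

The scale invariance is what lets SAM match resampled field spectra against airborne imagery despite differing illumination.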

  7. Complementarity of ResourceSat-1 AWiFS and Landsat TM/ETM+ sensors

    USGS Publications Warehouse

    Goward, S.N.; Chander, G.; Pagnutti, M.; Marx, A.; Ryan, R.; Thomas, N.; Tetrault, R.

    2012-01-01

Considerable interest has been given to forming an international collaboration to develop a virtual moderate-spatial-resolution land observation constellation through aggregation of data sets from comparable national observatories such as the US Landsat, the Indian ResourceSat and related systems. This study explores the complementarity of India's ResourceSat-1 Advanced Wide Field Sensor (AWiFS) with the Landsat 5 Thematic Mapper (TM) and Landsat 7 Enhanced Thematic Mapper Plus (ETM+). The analysis focuses on the comparative radiometry, geometry, and spectral properties of the two sensors. Two applied assessments of these data are also explored to examine the strengths and limitations of these alternate sources of moderate resolution land imagery within specific application domains. There are significant technical differences in these imaging systems, including spectral band response, pixel dimensions, swath width, and radiometric resolution, which produce differences in observation data sets. None of these differences was found to strongly limit comparable analyses in agricultural and forestry applications. Overall, we found that the AWiFS and Landsat TM/ETM+ imagery are comparable and in some ways complementary, particularly with respect to temporal repeat frequency. We have also found that there are limits to our understanding of AWiFS performance, for example its multi-camera design and the stability of its radiometric calibration over time, that leave some uncertainty which has been better addressed for Landsat through the Image Assessment System and related cross-sensor calibration studies. Such work still needs to be undertaken for AWiFS and similar observatories that may play roles in the Global Earth Observation System of Systems Land Surface Imaging Constellation.

  8. Particle trajectories and clearing times after mechanical door openings on the MSX satellite

    NASA Astrophysics Data System (ADS)

    Green, B. David; Galica, Gary E.; Mulhall, Phillip A.; Dyer, James S.; Uy, O. Manuel

    1996-11-01

Particles generated from spacecraft surfaces will interfere with the remote sensing of emissions from objects in space, the earth, and its upper atmosphere. We have previously reviewed the sources, sizes, and composition of particles observed in local spacecraft environments and presented predictions of the optical signatures these near-field particles would generate as detected by spacecraft optical systems. Particles leaving spacecraft surfaces are accelerated by atmospheric drag (and by magnetic forces if charged). Their velocities and accelerations relative to the spacecraft x, y, z coordinate system carry them through the optical sensors' fields of view after they leave the spacecraft surfaces. A particle's trajectory during the optical system integration time gives rise to a particle track in the detected image. Particles can be remotely detected across the UV-IR spectral region by their thermal emission, scattered sunlight, and earthshine. The spectral-bandpass-integrated signatures of these particles (dependent upon size and composition) are then mapped back onto the UV, visible, and IR sensor systems. At distances of less than a few kilometers, these particles are out of focus for telescoped imaging systems, and the image produced is blurred over several pixels. We present here data on the optical signatures observed after the mechanical doors covering the MSX primary optical sensors were removed. These data represent the first on-orbit observations by these sensors and must be treated as preliminary until a more careful review and calibration is completed. Within these constraints, we have analyzed the data to derive preliminary positions and trajectories.

  9. NIR hyperspectral compressive imager based on a modified Fabry–Perot resonator

    NASA Astrophysics Data System (ADS)

    Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Stern, Adrian

    2018-04-01

    The acquisition of hyperspectral (HS) image datacubes with available 2D sensor arrays involves a time consuming scanning process. In the last decade, several compressive sensing (CS) techniques were proposed to reduce the HS acquisition time. In this paper, we present a method for near-infrared (NIR) HS imaging which relies on our rapid CS resonator spectroscopy technique. Within the framework of CS, and by using a modified Fabry–Perot resonator, a sequence of spectrally modulated images is used to recover NIR HS datacubes. Owing to the innovative CS design, we demonstrate the ability to reconstruct NIR HS images with hundreds of spectral bands from an order of magnitude fewer measurements, i.e. with a compression ratio of about 10:1. This high compression ratio, together with the high optical throughput of the system, facilitates fast acquisition of large HS datacubes.
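The compressive recovery step can be illustrated with a generic sparse-recovery solver such as orthogonal matching pursuit: far fewer measurements than spectral bands suffice when the spectrum is sparse in some basis. This is a textbook sketch with a random Gaussian sensing matrix, not the paper's resonator-based modulation or reconstruction algorithm:

```python
import numpy as np

def omp(phi, y, k):
    """Orthogonal matching pursuit: greedily select up to k columns of the
    sensing matrix phi that best explain y, refitting the selected
    coefficients by least squares at each step."""
    residual = y.copy()
    support = []
    for _ in range(k):
        idx = int(np.argmax(np.abs(phi.T @ residual)))  # most correlated atom
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(phi[:, support], y, rcond=None)
        residual = y - phi[:, support] @ coef
    x = np.zeros(phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n_bands, n_meas = 100, 30                  # ~3:1 compression in this toy setup
truth = np.zeros(n_bands)
truth[[10, 40, 77]] = [1.0, -0.5, 2.0]     # a spectrum with 3 active bands
phi = rng.normal(size=(n_meas, n_bands)) / np.sqrt(n_meas)
y = phi @ truth                             # compressed measurements
rec = omp(phi, y, 5)
```

The practical compression ratio depends on how sparse (or compressible) the spectra are in the chosen basis; the paper reports roughly 10:1 for its NIR datacubes.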

  10. Flight model performances of HISUI hyperspectral sensor onboard ISS (International Space Station)

    NASA Astrophysics Data System (ADS)

    Tanii, Jun; Kashimura, Osamu; Ito, Yoshiyuki; Iwasaki, Akira

    2016-10-01

Hyperspectral Imager Suite (HISUI) is a next-generation Japanese sensor to be mounted on the Japanese Experiment Module (JEM) of the International Space Station (ISS) in the 2019 timeframe. The HISUI hyperspectral sensor acquires spectral images in 185 bands, with a ground sampling distance of 20 × 31 m, from the visible to the shortwave-infrared region. The sensor system is the follow-on mission of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) in the visible to shortwave infrared region. The critical design review of the instrument was completed in 2014, and integration and testing of a flight model of the HISUI hyperspectral sensor is being carried out. Simultaneously, development of the JEM-External Facility (EF) payload system for the instrument has started; the system includes the structure, the thermal control system, the electrical system and the pointing mechanism. The development status and performance of the instrument flight model, including test results such as optical performance, optical distortion and radiometric performance, are reported.

  11. Flight model of HISUI hyperspectral sensor onboard ISS (International Space Station)

    NASA Astrophysics Data System (ADS)

    Tanii, Jun; Kashimura, Osamu; Ito, Yoshiyuki; Iwasaki, Akira

    2017-09-01

Hyperspectral Imager Suite (HISUI) is a next-generation Japanese sensor to be mounted on the Japanese Experiment Module (JEM) of the International Space Station (ISS) in the 2019 timeframe. The HISUI hyperspectral sensor acquires spectral images in 185 bands, with a ground sampling distance of 20 × 31 m, from the visible to the shortwave-infrared wavelength region. The sensor is the follow-on mission of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) in the visible to shortwave infrared region. The critical design review of the instrument was completed in 2014, and integration and testing of a Flight Model (FM) of the HISUI hyperspectral sensor were completed at the beginning of 2017. Simultaneously, development of the JEM-External Facility (EF) payload system for the instrument is being carried out; the system includes the structure, the thermal control sub-system and the electrical sub-system. Flight model test results, such as optical performance, optical distortion and radiometric performance, are reported.

  12. Sensor Webs: Autonomous Rapid Response to Monitor Transient Science Events

    NASA Technical Reports Server (NTRS)

    Mandl, Dan; Grosvenor, Sandra; Frye, Stu; Sherwood, Robert; Chien, Steve; Davies, Ashley; Cichy, Ben; Ingram, Mary Ann; Langley, John; Miranda, Felix

    2005-01-01

To better understand how physical phenomena, such as volcanic eruptions, evolve over time, multiple sensor observations over the duration of the event are required. Using sensor web approaches that integrate original detections by in-situ sensors and global-coverage, lower-resolution, on-orbit assets with automated rapid-response observations from high resolution sensors, more observations of significant events can be made with increased temporal, spatial, and spectral resolution. This paper describes experiments using Earth Observing 1 (EO-1) along with other space and ground assets to implement progressive mission autonomy to identify, locate and image phenomena such as wildfires, volcanoes, floods and ice breakup with high resolution instruments. The software that plans, schedules and controls the various satellite assets is used to form ad hoc constellations, which enable collaborative autonomous image collections triggered by transient phenomena. This software, which includes model-based artificial intelligence components, is both flight- and ground-based and works in concert to run all of the required assets cohesively.

  13. Spectral methods to detect surface mines

    NASA Astrophysics Data System (ADS)

    Winter, Edwin M.; Schatten Silvious, Miranda

    2008-04-01

    Over the past five years, advances have been made in the spectral detection of surface mines under minefield detection programs at the U. S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD). The problem of detecting surface land mines ranges from the relatively simple, the detection of large anti-vehicle mines on bare soil, to the very difficult, the detection of anti-personnel mines in thick vegetation. While spatial and spectral approaches can be applied to the detection of surface mines, spatial-only detection requires many pixels-on-target such that the mine is actually imaged and shape-based features can be exploited. This method is unreliable in vegetated areas because only part of the mine may be exposed, while spectral detection is possible without the mine being resolved. At NVESD, hyperspectral and multi-spectral sensors throughout the reflection and thermal spectral regimes have been applied to the mine detection problem. Data has been collected on mines in forest and desert regions and algorithms have been developed both to detect the mines as anomalies and to detect the mines based on their spectral signature. In addition to the detection of individual mines, algorithms have been developed to exploit the similarities of mines in a minefield to improve their detection probability. In this paper, the types of spectral data collected over the past five years will be summarized along with the advances in algorithm development.

  14. Digital imaging and remote sensing image generator (DIRSIG) as applied to NVESD sensor performance modeling

    NASA Astrophysics Data System (ADS)

    Kolb, Kimberly E.; Choi, Hee-sue S.; Kaur, Balvinder; Olson, Jeffrey T.; Hill, Clayton F.; Hutchinson, James A.

    2016-05-01

    The US Army's Communications Electronics Research, Development and Engineering Center (CERDEC) Night Vision and Electronic Sensors Directorate (referred to as NVESD) is developing a virtual detection, recognition, and identification (DRI) testing methodology using simulated imagery as a means of augmenting the field testing component of sensor performance evaluation, which is expensive, resource intensive, time consuming, and limited to the available target(s) and existing atmospheric visibility and environmental conditions at the time of testing. Existing simulation capabilities such as the Digital Imaging Remote Sensing Image Generator (DIRSIG) and NVESD's Integrated Performance Model Image Generator (NVIPM-IG) can be combined with existing detection algorithms to reduce cost/time, minimize testing risk, and allow virtual/simulated testing using full spectral and thermal object signatures, as well as those collected in the field. NVESD has developed an end-to-end capability to demonstrate the feasibility of this approach. Simple detection algorithms have been used on the degraded images generated by NVIPM-IG to determine the relative performance of the algorithms on both DIRSIG-simulated and collected images. Evaluating the degree to which the algorithm performance agrees between simulated versus field collected imagery is the first step in validating the simulated imagery procedure.

  15. Hyperspectral and multispectral satellite sensors for mapping chlorophyll content in a Mediterranean Pinus sylvestris L. plantation

    NASA Astrophysics Data System (ADS)

    Navarro-Cerrillo, Rafael Mª; Trujillo, Jesus; de la Orden, Manuel Sánchez; Hernández-Clemente, Rocío

    2014-02-01

A new generation of narrow-band hyperspectral remote sensing data offers an alternative to broad-band multispectral data for the estimation of vegetation chlorophyll content. This paper examines the potential of several of these sensors, comparing red-edge and simple ratio indices, to develop a rapid and cost-effective system for monitoring Mediterranean pine plantations in Spain. Chlorophyll content retrieval was analyzed with the red-edge R750/R710 index and the simple ratio R800/R560 index using the PROSPECT-5 leaf model, the Discrete Anisotropic Radiative Transfer (DART) model, and an experimental approach. Five sensors were used: AHS, CHRIS/Proba, Hyperion, Landsat and QuickBird. The model simulation results obtained with synthetic spectra demonstrated the feasibility of estimating Ca + b content in conifers using the simple ratio R800/R560 index formulated with different full widths at half maximum (FWHM) at the leaf level; this index yielded r2 = 0.69 for a FWHM of 30 nm and r2 = 0.55 for a FWHM of 70 nm. Experimental results compared the regression coefficients obtained with various multispectral and hyperspectral images with different spatial resolutions at the stand level. The strongest relationships were obtained using high-resolution hyperspectral images acquired with the AHS sensor (r2 = 0.65), while coarser spatial and spectral resolution images yielded weaker relationships (QuickBird r2 = 0.42; Landsat r2 = 0.48; Hyperion r2 = 0.56; CHRIS/Proba r2 = 0.57). This study shows the need to estimate chlorophyll content in forest plantations at the stand level with high spatial and spectral resolution sensors. Nevertheless, these results also show the accuracy obtainable with medium-resolution sensors when monitoring physiological processes. Generating biochemical maps at the stand level could play a critical role in the early detection of forest decline processes, enabling their use in precision forestry.
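The two indices are simple band ratios; a sketch of computing them from a reflectance spectrum, assuming an idealized rectangular band response of a given FWHM and a synthetic leaf-like spectrum (all values illustrative, not PROSPECT-5 output):

```python
import numpy as np

def band_reflectance(lams, refl, center, fwhm):
    """Mean reflectance over a rectangular band of given center and FWHM
    (a crude stand-in for a sensor's spectral response function)."""
    mask = np.abs(lams - center) <= fwhm / 2.0
    return refl[mask].mean()

def red_edge_index(lams, refl, fwhm=10.0):
    # R750/R710: tracks the chlorophyll-driven red-edge shift.
    return band_reflectance(lams, refl, 750.0, fwhm) / band_reflectance(lams, refl, 710.0, fwhm)

def simple_ratio_index(lams, refl, fwhm=10.0):
    # R800/R560: NIR reflectance over green reflectance.
    return band_reflectance(lams, refl, 800.0, fwhm) / band_reflectance(lams, refl, 560.0, fwhm)

# Synthetic leaf-like spectrum: green bump, low red, rising red edge, NIR plateau.
lams = np.arange(400.0, 901.0, 1.0)          # wavelengths in nm
refl = np.where(lams < 690.0,
                0.05 + 0.05 * (np.abs(lams - 550.0) < 30.0),
                np.minimum(0.05 + (lams - 690.0) * 0.004, 0.45))
rei = red_edge_index(lams, refl)
sri = simple_ratio_index(lams, refl)
```

Broadening the FWHM averages the red-edge slope over a wider window, which is one reason the paper finds weaker retrievals at 70 nm than at 30 nm.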

  16. Comparison between various patch wise strategies for reconstruction of ultra-spectral cubes captured with a compressive sensing system

    NASA Astrophysics Data System (ADS)

    Oiknine, Yaniv; August, Isaac Y.; Revah, Liat; Stern, Adrian

    2016-05-01

Recently we introduced a Compressive Sensing Miniature Ultra-Spectral Imaging (CS-MUSI) system. The system is based on a single liquid crystal (LC) cell and a parallel sensor array, where the liquid crystal cell performs spectral encoding. Within the framework of compressive sensing, the CS-MUSI system is able to reconstruct ultra-spectral cubes from only ~10% of the samples required by a conventional system. Despite the compression, the technique is extremely demanding computationally, because reconstruction of ultra-spectral images requires processing huge data cubes of gigavoxel size. Fortunately, the computational effort can be alleviated by using separable operators. An additional way to reduce the reconstruction effort is to perform the reconstruction on patches. In this work, we consider processing on various patch shapes and present an experimental comparison between patch shapes chosen to process the ultra-spectral data captured with the CS-MUSI system. The patches may be one-dimensional (1D), for which the reconstruction is carried out spatially pixel-wise; two-dimensional (2D), working on spatial rows/columns of the ultra-spectral cube; or three-dimensional (3D).

  17. Fusion of Modis and Palsar Principal Component Images Through Curvelet Transform for Land Cover Classification

    NASA Astrophysics Data System (ADS)

    Singh, Dharmendra; Kumar, Harish

Earth observation satellites provide data covering different portions of the electromagnetic spectrum at different spatial and spectral resolutions. The increasing availability of information products generated from satellite images is extending our ability to understand the patterns and dynamics of earth resource systems at all scales of inquiry. One of the most important applications is the generation of land cover classifications from satellite images to understand the actual status of various land cover classes, and the prospects for using satellite images in land cover classification are extremely promising. The quality of satellite images available for land-use mapping is improving rapidly through the development of advanced sensor technology; particularly noteworthy in this regard is the improved spatial and spectral resolution of the images captured by newer sensors such as MODIS, ASTER, Landsat 7, and SPOT 5. For the full exploitation of increasingly sophisticated multisource data, fusion techniques are being developed. Fused images may enhance interpretation capabilities: the images used for fusion have different temporal and spatial resolutions, so the fused image provides a more complete view of the observed objects. A main aim of image fusion is to integrate different data in order to obtain more information than can be derived from any single sensor's data alone; a good example is the fusion of images acquired by different sensors with different spatial and spectral resolutions. Researchers have applied fusion techniques for three decades and have proposed various useful methods. High-quality synthesis of spectral information is well suited to, and widely used for, land cover classification. More recently, an underlying multiresolution analysis employing the discrete wavelet transform has been used in image fusion.
It has been found that multisensor image fusion is a tradeoff between the spectral information from low-resolution multispectral images and the spatial information from high-resolution images; with wavelet-transform-based fusion methods, this tradeoff is easy to control. A newer transform, the curvelet transform, was introduced in recent years by Starck: a ridgelet transform is applied to square blocks of the detail frames of an undecimated wavelet decomposition, yielding the curvelet transform. Since the ridgelet transform possesses basis functions matching directional straight lines, the curvelet transform can represent piecewise-linear contours on multiple scales through few significant coefficients. This property leads to a better separation between geometric detail and background noise, which may be reduced by thresholding curvelet coefficients before they are used for fusion. The Terra and Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) instrument provides high radiometric sensitivity (12 bit) in 36 spectral bands ranging in wavelength from 0.4 μm to 14.4 μm, and its data are freely available. Two bands are imaged at a nominal resolution of 250 m at nadir, five bands at 500 m, and the remaining 29 bands at 1 km. In this paper, band 1 (250 m spatial resolution, 620-670 nm) and band 2 (250 m spatial resolution, 842-876 nm) are considered, as these bands are well suited to identifying agriculture and other land covers. In January 2006, the Advanced Land Observing Satellite (ALOS) was successfully launched by the Japan Aerospace Exploration Agency (JAXA). The Phased Array type L-band SAR (PALSAR) sensor onboard the satellite acquires SAR imagery at a wavelength of 23.5 cm (frequency 1.27 GHz) with multimode and multipolarization observation capabilities.
PALSAR can operate in several modes: the fine-beam single (FBS) polarization mode (HH), fine-beam dual (FBD) polarization mode (HH/HV or VV/VH), polarimetric (PLR) mode (HH/HV/VH/VV), and ScanSAR (WB) mode (HH/VV) [15]. These capabilities make PALSAR imagery very attractive for a spatially and temporally consistent monitoring system. The idea behind Principal Component Analysis (PCA) is that most of the information within all the bands can be compressed into a much smaller number of bands with little loss of information. It allows us to extract the low-dimensional subspaces that capture the main linear correlations among the high-dimensional image data. This facilitates viewing the explained variance or signal in the available imagery, allowing both gross and more subtle features to be seen. In this paper we explore fusion techniques for enhancing the land cover classification of low-resolution, and especially freely available, satellite data. For this purpose, we fuse PALSAR principal component data with MODIS principal component data. First, MODIS band 1 and band 2 are considered and their principal components are computed. Similarly, the PALSAR HH, HV and VV polarized data are considered and their principal components are computed. The PALSAR principal component image is then fused with the MODIS principal component image. The aim of this paper is to analyze the effect of fusing PALSAR data with MODIS data on the classification accuracy for major land cover types such as agriculture, water bodies and urban areas. The curvelet transform has been applied for the fusion of these two satellite images, and Minimum Distance classification has been applied to the resultant fused image. It is qualitatively and visually observed that the overall classification accuracy of the MODIS image after fusion is enhanced. 
This type of fusion technique may prove quite helpful in the near future for using freely available satellite data to develop monitoring systems for different land cover classes on the earth.
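    The PCA step described in this record can be sketched as follows. This is a minimal, generic illustration of computing principal-component images from a multi-band stack; the band values, array shapes and function names are illustrative assumptions, not the authors' actual MODIS/PALSAR processing chain.

```python
# Hypothetical sketch: principal-component images from a multi-band stack.
import numpy as np

def principal_components(bands):
    """bands: array of shape (n_bands, rows, cols) -> PC images, same shape."""
    n, r, c = bands.shape
    X = bands.reshape(n, -1).astype(float)    # one row per band
    X -= X.mean(axis=1, keepdims=True)        # centre each band
    cov = np.cov(X)                           # n x n band covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]         # sort by explained variance
    pcs = eigvecs[:, order].T @ X             # project bands onto eigenvectors
    return pcs.reshape(n, r, c)

# e.g. two simulated, correlated 250 m bands (red and NIR stand-ins)
rng = np.random.default_rng(0)
red = rng.random((64, 64))
nir = 0.8 * red + 0.2 * rng.random((64, 64))
pc = principal_components(np.stack([red, nir]))
print(pc.shape)                               # (2, 64, 64)
```

    The first PC image then carries most of the shared variance of the input bands and is the natural candidate for curvelet-domain fusion with the other sensor's first PC.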

  18. Usability of multiangular imaging spectroscopy data for analysis of vegetation canopy shadow fraction in boreal forest

    NASA Astrophysics Data System (ADS)

    Markiet, Vincent; Perheentupa, Viljami; Mõttus, Matti; Hernández-Clemente, Rocío

    2016-04-01

    Imaging spectroscopy is a remote sensing technology which records continuous spectral data at very high (better than 10 nm) resolution. Such spectral images can be used to monitor, for example, the photosynthetic activity of vegetation. Photosynthetic activity depends on varying light conditions and varies within the canopy. To measure this variation we need spatial resolution finer than the dominating canopy element size (e.g., the tree crown in a forest canopy). This is useful, e.g., for detecting photosynthetic downregulation and thus plant stress. Canopy illumination conditions are often quantified using the shadow fraction: the fraction of visible foliage which is not sunlit. Shadow fraction is known to depend on view angle (e.g., hot spot images have a very low shadow fraction); hence, multiple observation angles potentially increase the range of shadow fraction present in high spatial resolution imaging spectroscopy data. To investigate the potential of multi-angle imaging spectroscopy for studying canopy processes which vary with shadow fraction, we obtained a unique multiangular airborne imaging spectroscopy dataset for the Hyytiälä forest research station located in Finland (61° 50'N, 24° 17'E) in July 2015. The main tree species are Norway spruce (Picea abies (L.) Karst.), Scots pine (Pinus sylvestris L.) and birch (Betula pubescens Ehrh., Betula pendula Roth). We used an airborne hyperspectral sensor, the AISA Eagle II (Specim - Spectral Imaging Ltd., Finland), mounted on a tilting platform which allowed us to measure at nadir and approximately 35 degrees off-nadir. The hyperspectral sensor has a 37.5 degree field of view (FOV), 0.6 m pixel size, and 128 spectral bands with an average spectral bandwidth of 4.6 nm, and is sensitive in the 400-1000 nm spectral region. 
The airborne data were radiometrically, atmospherically and geometrically processed using the Parge and Atcor software (ReSe Applications Schläpfer, Switzerland). However, even after meticulous geolocation, the canopy elements (needles) seen from the three view angles were different: at each overpass, different parts of the same crowns were observed. To overcome this, we used a 200 m x 200 m test site covered with pure pine stands. We assumed that the sunlit, shaded and understory spectral signatures are independent of viewing direction to the accuracy of a constant BRDF factor. Thus, we compared the spectral signatures for sunlit and shaded canopy and understory obtained for each view direction. We visually selected six hundred of the brightest and darkest canopy pixels. Next, we performed a minimum noise fraction (MNF) transformation, created a pixel purity index (PPI) and used ENVI's n-D scatterplot to determine pure spectral signatures for the two classes. The pure endmembers for different view angles were compared to determine the BRDF factor and to analyze its spectral invariance. We demonstrate the compatibility of multi-angle data with high spatial resolution data: in principle, both carry similar information on structured (non-flat) targets such as a vegetation canopy. Nevertheless, multiple view angles helped us to extend the range of shadow fraction in the images. Correct separation of shaded crown and shaded understory pixels remains a challenge.
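    The constant-BRDF-factor assumption above can be checked with a very simple calculation: if two view angles differ only by a constant factor, the per-band ratio of their endmember spectra should be flat. The spectra and band grid below are synthetic assumptions for illustration, not the study's AISA data.

```python
# Illustrative check of spectral invariance of a BRDF factor (synthetic data).
import numpy as np

def brdf_factor(spec_nadir, spec_offnadir):
    """Per-band ratio of two endmember spectra and its relative spread."""
    ratio = spec_offnadir / spec_nadir
    spread = ratio.std() / ratio.mean()        # ~0 when the factor is constant
    return ratio.mean(), spread

wavelengths = np.linspace(400, 1000, 128)      # AISA-like 128-band grid (assumed)
nadir = 0.2 + 0.3 * np.exp(-((wavelengths - 550) / 40) ** 2)
offnadir = 1.15 * nadir                        # exactly constant-factor case
factor, spread = brdf_factor(nadir, offnadir)
print(round(factor, 2), spread < 1e-9)         # 1.15 True
```

    With real endmembers, a small but nonzero spread would quantify how far the spectra depart from the constant-factor assumption.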

  19. Techniques for identifying dust devils in mars pathfinder images

    USGS Publications Warehouse

    Metzger, S.M.; Carr, J.R.; Johnson, J. R.; Parker, T.J.; Lemmon, M.T.

    2000-01-01

    Image processing methods used to identify and enhance dust devil features imaged by IMP (Imager for Mars Pathfinder) are reviewed. Spectral differences, visible red minus visible blue, were used for initial dust devil searches, driven by the observation that Martian dust has high red and low blue reflectance. The Martian sky proved to be more heavily dust-laden than pre-Pathfinder predictions, based on analysis of images from the Hubble Space Telescope. As a result, these initial spectral difference methods failed to contrast dust devils with background dust haze. Imager artifacts (dust motes on the camera lens, flat-field effects caused by imperfections in the CCD, and projection onto a flat sensor plane by a convex lens) further impeded the ability to resolve subtle dust devil features. Consequently, reference images containing sky with a minimal horizon were first subtracted from each spectral filter image to remove camera artifacts and reduce the background dust haze signal. Once the sky-flat preprocessing step was completed, the red-minus-blue spectral difference scheme was attempted again. Dust devils then were successfully identified as bright plumes. False-color ratios using calibrated IMP images were found useful for visualizing dust plumes, verifying initial discoveries as vortex-like features. Enhancement of monochromatic (especially blue filter) images revealed dust devils as silhouettes against brighter background sky. Experiments with principal components transformation identified dust devils in raw, uncalibrated IMP images and further showed relative movement of dust devils across the Martian surface. A variety of methods therefore served qualitative and quantitative goals for dust plume identification and analysis in an environment where such features are obscure.
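    The two-step enhancement described in this record (sky-flat subtraction followed by a red-minus-blue difference) can be sketched directly on arrays. The frames below are synthetic stand-ins, not IMP data, and the helper name is an illustrative assumption.

```python
# Minimal sketch: subtract a reference sky-flat frame from each filter image
# to suppress camera artifacts and background haze, then difference red and
# blue so that reddish dust plumes appear bright.
import numpy as np

def dust_devil_enhance(red, blue, red_flat, blue_flat):
    red_c = red - red_flat                     # remove artifacts and haze signal
    blue_c = blue - blue_flat
    return red_c - blue_c                      # red-minus-blue difference image

rng = np.random.default_rng(1)
haze = 0.5 + 0.01 * rng.standard_normal((32, 32))   # dusty background sky
red, blue = haze.copy(), haze.copy()
red[10:14, 10:12] += 0.2                       # a small reddish plume
diff = dust_devil_enhance(red, blue, haze, haze)
print(diff[11, 10] > diff[0, 0])               # plume pixel stands out: True
```

    Without the sky-flat step, the plume would have to compete with the full haze signal, which is why the initial spectral-difference searches failed.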

  20. Diagnosis and Repair of Random Noise in the SENSOR'S Chris-Proba

    NASA Astrophysics Data System (ADS)

    Mobasheri, M. R.; Zendehbad, S. A.

    2013-09-01

    The CHRIS sensor on the PROBA-1 satellite has imaged the earth since 2001 in push-broom mode, with 18 m spatial resolution and 18 spectral bands (1.25-11 nm spectral resolution). After 13 years of sensor life, and for many reasons including the influence of solar radiation and of the magnetic fields of the Earth and Sun, the response function of the detector has drifted out of calibration and some CCD elements have failed. This has caused image information in some bands to be deleted or invalidated; some images exhibit dark streaks or light bands in varying locations that need to be identified and corrected. In this paper, all types of noise likely to affect CHRIS sensor data during recording and transmission are identified, calculated and formulated, and a correction method is presented. To do this we use in-flight and on-ground measurement parameters. The noise in the images is divided into horizontal and vertical components. Because the random noise appears in different bands and at different locations, those images in which noise is observed are used. Techniques to identify and correct the dark or pale stripes in the images are developed. Finally, the noisy images were compared before and after correction, and effective algorithms to detect and correct the errors were demonstrated.

  1. Landsat 8 on-orbit characterization and calibration system

    USGS Publications Warehouse

    Micijevic, Esad; Morfitt, Ron; Choate, Michael J.

    2011-01-01

    The Landsat Data Continuity Mission (LDCM) is planning to launch the Landsat 8 satellite in December 2012, which continues an uninterrupted record of consistently calibrated globally acquired multispectral images of the Earth started in 1972. The satellite will carry two imaging sensors: the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS). The OLI will provide visible, near-infrared and short-wave infrared data in nine spectral bands while the TIRS will acquire thermal infrared data in two bands. Both sensors have a pushbroom design and consequently, each has a large number of detectors to be characterized. Image and calibration data downlinked from the satellite will be processed by the U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center using the Landsat 8 Image Assessment System (IAS), a component of the Ground System. In addition to extracting statistics from all Earth images acquired, the IAS will process and trend results from analysis of special calibration acquisitions, such as solar diffuser, lunar, shutter, night, lamp and blackbody data, and preselected calibration sites. The trended data will be systematically processed and analyzed, and calibration and characterization parameters will be updated using both automatic and customized manual tools. This paper describes the analysis tools and the system developed to monitor and characterize on-orbit performance and calibrate the Landsat 8 sensors and image data products.

  2. Calibrated infrared ground/air radiometric spectrometer

    NASA Astrophysics Data System (ADS)

    Silk, J. K.; Schildkraut, Elliot Robert; Bauldree, Russell S.; Goodrich, Shawn M.

    1996-06-01

    The calibrated infrared ground/air radiometric spectrometer (CIGARS) is a new high performance, multi-purpose, multi-platform Fourier transform spectrometer (FTS) sensor. It covers the waveband from 0.2 to 12 micrometers, has spectral resolution as fine as 0.3 cm-1, and records over 100 spectra per second. Two CIGARS units are being used for observations of target signatures in the air or on the ground from fixed or moving platforms, including high performance jet aircraft. In this paper we describe the characteristics and capabilities of the CIGARS sensor, which uses four interchangeable detector modules (Si, InGaAs, InSb, and HgCdTe) and two optics modules, with internal calibration. The data recording electronics support observations of transient events, even without precise information on the timing of the event. We present test and calibration data on the sensitivity, spectral resolution, stability, and spectral rate of CIGARS, and examples of in-flight observations of real targets. We also discuss plans for adapting CIGARS for imaging spectroscopy observations, with simultaneous spectral and spatial data, by replacing the existing detectors with a focal plane array (FPA).

  3. Arrays of Nano Tunnel Junctions as Infrared Image Sensors

    NASA Technical Reports Server (NTRS)

    Son, Kyung-Ah; Moon, Jeong S.; Prokopuk, Nicholas

    2006-01-01

    Infrared image sensors based on high density rectangular planar arrays of nano tunnel junctions have been proposed. These sensors would differ fundamentally from prior infrared sensors based, variously, on bolometry or conventional semiconductor photodetection. Infrared image sensors based on conventional semiconductor photodetection must typically be cooled to cryogenic temperatures to reduce noise to acceptably low levels. Some bolometer-type infrared sensors can be operated at room temperature, but they exhibit low detectivities and long response times, which limit their utility. The proposed infrared image sensors could be operated at room temperature without incurring excessive noise, and would exhibit high detectivities and short response times. Other advantages would include low power demand, high resolution, and tailorability of spectral response. Unlike both bolometers and conventional semiconductor photodetectors, the basic detector units as proposed would partly resemble rectennas. Nanometer-scale tunnel junctions would be created by crossing of nanowires with quantum-mechanical-barrier layers in the form of thin layers of electrically insulating material between them (see figure). A microscopic dipole antenna sized and shaped to respond maximally in the infrared wavelength range that one seeks to detect would be formed integrally with the nanowires at each junction. An incident signal in that wavelength range would become coupled into the antenna and, through the antenna, to the junction. At the junction, the flow of electrons between the crossing wires would be dominated by quantum-mechanical tunneling rather than thermionic emission. Relative to thermionic emission, quantum-mechanical tunneling is a fast process.

  4. Designing a Practical System for Spectral Imaging of Skylight

    DTIC Science & Technology

    2005-09-20

    Commission Internationale de l’Éclairage), include CIELUV, CIELAB, CIE94, and CIEDE2000.21,22 These metrics quantify distances in their respective...thresholds for RMSE and CIEDE2000 metrics when searching for optimum sensors; Hernández-Andrés et al.1 used GFC, CIELUV, and IIE(%) in a similar way. As...once. We use GFC as a spectral metric, CIELAB as a colorimetric cost function (denoted by ΔE*ab, the distance between two colors in the CIE’s uniform

  5. Analysis of Active Sensor Discrimination Requirements for Various Defense Missile Defense Scenarios Final Report 1999(99-ERD-080)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ledebuhr, A.G.; Ng, L.C.; Gaughan, R.J.

    2000-02-15

    During FY99, we explored and analyzed a combined passive/active sensor concept to support the advanced discrimination requirements for various missile defense scenarios. The idea is to combine multiple IR spectral channels with an imaging LIDAR (Light Detection and Ranging) behind a common optical system. The imaging LIDAR would itself consist of at least two channels: one at the fundamental laser wavelength (e.g., 1.064 μm for Nd:YAG) and one at the frequency-doubled wavelength (532 nm for Nd:YAG). The two-color laser output would, for example, allow the longer wavelength to serve a direct-detection time-of-flight ranger and the shorter wavelength an active imaging channel. The LIDAR can function as a high-resolution 2D spatial imager either passively or actively with laser illumination. Advances in laser design also offer three-color (frequency-tripled) systems, high rep-rate operation, and better pumping efficiencies that can provide longer-distance acquisition and ranging for enhanced discrimination phenomenology. New detector developments can enhance the performance and operation of both LIDAR channels. A real-time data fusion approach that combines multi-spectral IR phenomenology with LIDAR imagery can improve both discrimination and aim-point selection capability.

  6. Evaluation of multispectral plenoptic camera

    NASA Astrophysics Data System (ADS)

    Meng, Lingfei; Sun, Ting; Kosoglow, Rich; Berkner, Kathrin

    2013-01-01

    Plenoptic cameras enable capture of a 4D lightfield, allowing digital refocusing and depth estimation from data captured with a compact portable camera. Whereas most of the work on plenoptic camera design has been based on a simplistic geometric-optics characterization of the optical path only, little work has been done on optimizing end-to-end system performance for a specific application. Such design optimization requires design tools that include careful parameterization of the main lens elements, as well as microlens array and sensor characteristics. In this paper we are interested in evaluating the performance of a multispectral plenoptic camera, i.e. a camera with spectral filters inserted into the aperture plane of the main lens. Such a camera enables single-snapshot spectral data acquisition.1-3 We first describe in detail an end-to-end imaging system model for a spectrally coded plenoptic camera that we briefly introduced in earlier work.4 Different performance metrics are defined to evaluate the spectral reconstruction quality. We then present a prototype developed from a modified DSLR camera containing a lenslet array on the sensor and a filter array in the main lens. Finally we evaluate the spectral reconstruction performance of the spectral plenoptic camera based on both simulation and measurements obtained from the prototype.

  7. Improvements to an earth observing statistical performance model with applications to LWIR spectral variability

    NASA Astrophysics Data System (ADS)

    Zhao, Runchen; Ientilucci, Emmett J.

    2017-05-01

    Hyperspectral remote sensing systems provide spectral data composed of hundreds of narrow spectral bands. Spectral remote sensing systems can be used to identify targets, for example, without physical interaction. Often it is of interest to characterize the spectral variability of targets or objects. The purpose of this paper is to identify and characterize the LWIR spectral variability of targets based on an improved earth observing statistical performance model, known as the Forecasting and Analysis of Spectroradiometric System Performance (FASSP) model. FASSP contains three basic modules: a scene model, a sensor model and a processing model. Instead of using only mean surface reflectance as input to the model, FASSP transfers user-defined statistical characteristics of a scene through the image chain (i.e., from source to sensor). The radiative transfer model MODTRAN is used to simulate the radiative transfer based on user-defined atmospheric parameters. To retrieve class emissivity and temperature statistics, or temperature / emissivity separation (TES), a LWIR atmospheric compensation method is necessary. The FASSP model has a method to transform statistics in the visible (i.e., ELM) but currently does not have a LWIR TES algorithm in place. This paper addresses the implementation of such a TES algorithm and its associated transformation of statistics.

  8. Athena microscopic Imager investigation

    USGS Publications Warehouse

    Herkenhoff, K. E.; Squyres, S. W.; Bell, J.F.; Maki, J.N.; Arneson, H.M.; Bertelsen, P.; Brown, D.I.; Collins, S.A.; Dingizian, A.; Elliott, S.T.; Goetz, W.; Hagerott, E.C.; Hayes, A.G.; Johnson, M.J.; Kirk, R.L.; McLennan, S.; Morris, R.V.; Scherr, L.M.; Schwochert, M.A.; Shiraishi, L.R.; Smith, G.H.; Soderblom, L.A.; Sohl-Dickstein, J. N.; Wadsworth, M.V.

    2003-01-01

    The Athena science payload on the Mars Exploration Rovers (MER) includes the Microscopic Imager (MI). The MI is a fixed-focus camera mounted on the end of an extendable instrument arm, the Instrument Deployment Device (IDD). The MI was designed to acquire images at a spatial resolution of 30 microns/pixel over a broad spectral range (400-700 nm). The MI uses the same electronics design as the other MER cameras but has optics that yield a field of view of 31 × 31 mm across a 1024 × 1024 pixel CCD image. The MI acquires images using only solar or skylight illumination of the target surface. A contact sensor is used to place the MI slightly closer to the target surface than its best focus distance (about 66 mm), allowing concave surfaces to be imaged in good focus. Coarse focusing (±2 mm precision) is achieved by moving the IDD away from a rock target after the contact sensor has been activated. The MI optics are protected from the Martian environment by a retractable dust cover. The dust cover includes a Kapton window that is tinted orange to restrict the spectral bandpass to 500-700 nm, allowing color information to be obtained by taking images with the dust cover open and closed. MI data will be used to place other MER instrument data in context and to aid in petrologic and geologic interpretations of rocks and soils on Mars. Copyright 2003 by the American Geophysical Union.

  9. Land cover mapping at Alkali Flat and Lake Lucero, White Sands, New Mexico, USA using multi-temporal and multi-spectral remote sensing data

    NASA Astrophysics Data System (ADS)

    Ghrefat, Habes A.; Goodell, Philip C.

    2011-08-01

    The goal of this research is to map land cover patterns and to detect changes that occurred at Alkali Flat and Lake Lucero, White Sands using multispectral Landsat 7 Enhanced Thematic Mapper Plus (ETM+), Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Advanced Land Imager (ALI), and hyperspectral Hyperion and Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data. The other objectives of this study were: (1) to evaluate the information dimensionality limits of Landsat 7 ETM+, ASTER, ALI, Hyperion, and AVIRIS data with respect to signal-to-noise and spectral resolution, (2) to determine the spatial distribution and fractional abundances of land cover endmembers, and (3) to check ground correspondence with satellite data. A better understanding of the spatial and spectral resolution of these sensors, optimum spectral bands and their information contents, appropriate image processing methods, spectral signatures of land cover classes, and atmospheric effects is needed to improve our ability to detect and map minerals from space. Image spectra were validated using samples collected from various localities across Alkali Flat and Lake Lucero. These samples were measured in the laboratory using VNIR-SWIR (0.4-2.5 μm) spectroscopy and the X-ray Diffraction (XRD) method. Dry gypsum deposits, wet gypsum deposits, standing water, green vegetation, and clastic alluvial sediments dominated by mixtures of ferric iron (ferricrete) and calcite were identified in the study area using Minimum Noise Fraction (MNF), Pixel Purity Index (PPI), and n-D Visualization. The results of MNF confirm that AVIRIS and Hyperion data have higher information dimensionality thresholds, exceeding the number of available bands of Landsat 7 ETM+, ASTER, and ALI data. ASTER and ALI data can be a reasonable alternative to AVIRIS and Hyperion data for the purpose of monitoring land cover, hydrology and sedimentation in the basin. 
The spectral unmixing and dimensionality eigen-analysis of the various datasets helped to uncover the optimum spatial, spectral, temporal and radiometric sensor characteristics for remote-sensing-based monitoring of seasonal land cover, surface water, groundwater, and alluvial sediment input changes within the basin. The results demonstrated good agreement between the ground truth data, the XRD analysis of samples, and the results of the Matched Filtering (MF) mapping method.

  10. Development of a handheld widefield hyperspectral imaging (HSI) sensor for standoff detection of explosive, chemical, and narcotic residues

    NASA Astrophysics Data System (ADS)

    Nelson, Matthew P.; Basta, Andrew; Patil, Raju; Klueva, Oksana; Treado, Patrick J.

    2013-05-01

    The utility of Hyperspectral Imaging (HSI) passive chemical detection employing wide-field, standoff imaging continues to be advanced in detection applications. With a drive for reduced SWaP (Size, Weight, and Power) and increased speed of detection and sensitivity, developing a handheld platform that is robust and user-friendly increases the detection capabilities of the end user. In addition, easy-to-use handheld detectors could improve the effectiveness of locating and identifying threats while reducing risks to the individual. ChemImage Sensor Systems (CISS) has developed the HSI Aperio™ sensor for real time, wide area surveillance and standoff detection of explosives, chemical threats, and narcotics for use in both government and commercial contexts. Employing liquid crystal tunable filter technology, the HSI system has an intuitive user interface that produces automated detections and real-time display of threats, with an end-user-created library of threat signatures that is easily updated to allow for new hazardous materials. Unlike existing detection technologies that often require close proximity for sensing and so endanger operators and costly equipment, the handheld sensor allows the individual operator to detect threats from a safe distance. Uses of the sensor include locating production facilities of illegal drugs or IEDs by identification of materials on surfaces such as walls, floors, doors, deposits on production tools and residue on individuals. In addition, the sensor can be used for longer-range standoff applications such as hasty checkpoint or vehicle inspection of residue materials on surfaces or bulk material identification. The CISS Aperio™ sensor has faster data collection, faster image processing, and increased detection capability compared to previous sensors.

  11. Remote sensing of canopy chemistry and nitrogen cycling in temperate forest ecosystems

    NASA Technical Reports Server (NTRS)

    Wessman, Carol A.; Aber, John D.; Peterson, David L.; Melillo, Jerry M.

    1988-01-01

    The use of images acquired by the Airborne Imaging Spectrometer, an experimental high-spectral resolution imaging sensor developed by NASA, to estimate the lignin concentration of whole forest canopies in Wisconsin is reported. The observed strong relationship between canopy lignin concentration and nitrogen availability in seven undisturbed forest ecosystems on Blackhawk Island, Wisconsin, suggests that canopy lignin may serve as an index for site nitrogen status. This predictive relationship presents the opportunity to estimate nitrogen-cycling rates across forested landscapes through remote sensing.

  12. Development of CCD imaging sensors for space applications, phase 1

    NASA Technical Reports Server (NTRS)

    Antcliffe, G. A.

    1975-01-01

    The results of an experimental investigation to develop a large area charge coupled device (CCD) imager for space photography applications are described. Details of the design and processing required to achieve 400 X 400 imagers are presented together with a discussion of the optical characterization techniques developed for this program. A discussion of several aspects of large CCD performance is given with detailed test reports. The areas covered include dark current, uniformity of optical response, square wave amplitude response, spectral responsivity and dynamic range.

  13. Terrain Extraction by Integrating Terrestrial Laser Scanner Data and Spectral Information

    NASA Astrophysics Data System (ADS)

    Lau, C. L.; Halim, S.; Zulkepli, M.; Azwan, A. M.; Tang, W. L.; Chong, A. K.

    2015-10-01

    The extraction of true terrain points from unstructured laser point cloud data is an important process in producing an accurate digital terrain model (DTM). However, most spatial filtering methods utilize only geometrical data to discriminate terrain points from non-terrain points. Point cloud filtering can also be improved by using the spectral information available with some scanners. Therefore, the objective of this study is to investigate the effectiveness of using the three channels (red, green and blue) of the colour image captured by the built-in digital camera available in some Terrestrial Laser Scanners (TLS) for terrain extraction. In this study, data acquisition was conducted at a mini replica landscape at Universiti Teknologi Malaysia (UTM), Skudai campus, using a Leica ScanStation C10. The spectral information of the coloured point clouds from selected sample classes was extracted for spectral analysis. Coloured points falling within the corresponding preset spectral thresholds are identified as belonging to that specific feature class. This terrain extraction process was implemented in Matlab. Results demonstrate that a passive image of higher spectral resolution is required to improve the output, because the low quality of the colour images captured by the sensor contributes to low separability in spectral reflectance. In conclusion, this study shows that spectral information can be used as a parameter for terrain extraction.
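    The spectral-threshold step described in this record can be sketched as a simple per-channel window test on coloured points. The point coordinates, RGB values and thresholds below are illustrative assumptions (the study itself used Matlab), not data from the UTM test site.

```python
# Hedged sketch: keep only coloured points whose RGB values fall inside a
# preset window for one feature class (here, a brownish "terrain" class).
import numpy as np

def filter_by_rgb(points, rgb, lo, hi):
    """points: (n, 3) xyz; rgb: (n, 3) colours; lo/hi: per-channel bounds."""
    mask = np.all((rgb >= lo) & (rgb <= hi), axis=1)
    return points[mask]

pts = np.array([[0.0, 0.0, 0.1], [1.0, 0.0, 2.5], [2.0, 1.0, 0.2]])
cols = np.array([[120, 100, 80],          # brownish ground point
                 [40, 160, 60],           # green vegetation point
                 [130, 110, 90]])         # brownish ground point
terrain = filter_by_rgb(pts, cols,
                        lo=np.array([100, 80, 60]),
                        hi=np.array([160, 130, 110]))
print(len(terrain))                       # 2
```

    As the record notes, how well this works in practice depends entirely on the spectral separability of the classes in the camera's colour images.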

  14. [Study on the modeling of earth-atmosphere coupling over rugged scenes for hyperspectral remote sensing].

    PubMed

    Zhao, Hui-Jie; Jiang, Cheng; Jia, Guo-Rui

    2014-01-01

    Adjacency effects may introduce errors into quantitative applications of hyperspectral remote sensing, a significant component of which is the earth-atmosphere coupling radiance. Surrounding relief and shadow induce strong changes in hyperspectral images acquired over rugged terrain, making the spectral characteristics difficult to describe accurately. Furthermore, the radiative coupling process between the earth and the atmosphere is more complex over rugged scenes. In order to meet the requirements of real-time processing in data simulation, an equivalent background reflectance was developed that takes into account the topography and the geometry between surroundings and targets, based on the radiative transfer process. The contributions of the coupling to the signal at sensor level were then evaluated. This approach was integrated into a sensor-level radiance simulation model and validated by simulating a set of actual radiance data. The results show that the visual appearance of the simulated images is consistent with that of the observed images, and that spectral similarity is improved over rugged scenes. In addition, model precision is maintained at the same level over flat scenes.

  15. Compact LWIR sensors using spatial interferometric technology (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Bingham, Adam L.; Lucey, Paul G.; Knobbe, Edward T.

    2017-05-01

    Recent developments in reducing the cost and mass of hyperspectral sensors have enabled more widespread use for short-range compositional imaging applications. HSI in the long wave infrared (LWIR) is of interest because it is sensitive to spectral phenomena not accessible at other wavelengths, and because of its inherent thermal imaging capability. At Spectrum Photonics we have pursued compact LWIR hyperspectral sensors using both microbolometer arrays and compact cryogenic detector cameras. Our microbolometer-based systems are principally aimed at short standoff applications; they currently weigh 10-15 lbs, measure approximately 20 x 20 x 10 cm, offer sensitivity in the 1-2 microflick range, and have imaging times on the order of 30 seconds. Our systems that employ cryogenic arrays are aimed at medium standoff ranges such as nadir-looking missions from UAVs. Recent work with cooled sensors has focused on Strained Layer Superlattice (SLS) technology, as these detector arrays are undergoing rapid improvement and have some advantages over HgCdTe detectors in terms of calibration stability. These sensors include full on-board processing and sensor stabilization, so they are somewhat larger than the microbolometer systems, but could be adapted to much more compact form factors. We will review our recent progress in both application areas.

  16. Atmospheric correction for remote sensing image based on multi-spectral information

    NASA Astrophysics Data System (ADS)

    Wang, Yu; He, Hongyan; Tan, Wei; Qi, Wenwen

    2018-03-01

    The light collected by remote sensors in space must transit the Earth's atmosphere. All satellite images are affected at some level by lightwave scattering and absorption from aerosols, water vapor and particulates in the atmosphere. To generate high-quality scientific data, atmospheric correction is required to remove atmospheric effects and to convert digital number (DN) values to surface reflectance (SR). Every optical satellite in orbit observes the earth through the same atmosphere, but each satellite image is impacted differently because atmospheric conditions are constantly changing. A physics-based, detailed radiative transfer model such as 6SV requires substantial ancillary information about the atmospheric conditions at acquisition time. This paper investigates the simultaneous retrieval of atmospheric radiation parameters from the multi-spectral information itself, in order to improve surface reflectance estimates through physics-based atmospheric correction. Ancillary information on aerosol optical depth (AOD) and total water vapor (TWV), derived from the multi-spectral information based on specific spectral properties, was used as input to the 6SV model. The experiments were carried out on images from Sentinel-2, which carries a Multispectral Instrument (MSI) recording in 13 spectral bands covering a wide range of wavelengths from 440 up to 2200 nm. The results suggest that per-pixel atmospheric correction through the 6SV model, integrating AOD and TWV derived from multispectral information, is better suited for accurate analysis of satellite images and quantitative remote sensing applications.
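
For context, a 6SV run reports three per-band correction coefficients (commonly written xa, xb, xc) from which surface reflectance is recovered from at-sensor radiance. The sketch below shows that conversion; the gain/offset calibration and all coefficient values are placeholders, not values from the paper.

```python
def dn_to_surface_reflectance(dn, gain, offset, xa, xb, xc):
    """DN -> surface reflectance with 6SV-style coefficients.
    gain/offset: sensor radiometric calibration (placeholders);
    xa, xb, xc: per-band atmospheric coefficients from a 6SV run."""
    L = gain * dn + offset        # DN -> at-sensor radiance
    y = xa * L - xb               # removes path radiance, scales by transmittance
    return y / (1.0 + xc * y)     # accounts for spherical-albedo coupling
```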

  17. HYPERSPECTRAL CHANNEL SELECTION FOR WATER QUALITY MONITORING ON THE GREAT MIAMI RIVER, OHIO

    EPA Science Inventory

    During the summer of 1999, spectral data were collected with a hand-held spectroradiometer, a laboratory spectrometer and airborne hyperspectral sensors from the Great Miami River (GMR), Ohio. Approximately 80 km of the GMR were imaged during a flyover with a Compact Airborne Sp...

  18. Using High-Resolution, Regional-Scale Data to Characterize Floating Aquatic Nuisance Vegetation in Coastal Louisiana Navigation Channels

    DTIC Science & Technology

    2014-01-01

    Comparison of footprints from various image sensors used in this study. Landsat (blue) is in the upper left panel, SPOT (yellow) is in the upper right...the higher resolution sensors evaluated as part of this study are limited to four spectral bands. Moderate resolution processing. ArcGIS ...moderate, effective useful coverage may be much more limited for a scene that includes significant amounts of water. Throughout the study period, SPOT 4

  19. Performance evaluation of object based greenhouse detection from Sentinel-2 MSI and Landsat 8 OLI data: A case study from Almería (Spain)

    NASA Astrophysics Data System (ADS)

    Novelli, Antonio; Aguilar, Manuel A.; Nemmaoui, Abderrahim; Aguilar, Fernando J.; Tarantino, Eufemia

    2016-10-01

    This paper presents the first comparison between data from the Sentinel-2 (S2) Multi Spectral Instrument (MSI) and the Landsat 8 (L8) Operational Land Imager (OLI) aimed at greenhouse detection. Two scenes closely related in time, one per sensor, were classified using Object Based Image Analysis and Random Forest (RF). The RF input consisted of several object-based features computed from the spectral bands, including mean values, spectral indices and textural features. The S2 and L8 comparisons were also extended using a common segmentation dataset extracted from VHR WorldView-2 (WV2) imagery to test differences due only to their specific spectral contribution. The best band combinations for segmentation were found through a modified version of the Euclidean Distance 2 index. Four different RF classification schemes were considered, achieving best overall accuracies of 89.1%, 91.3%, 90.9% and 93.4%, respectively, evaluated over the whole study area.

  20. UW Imaging of Seismic-Physical-Models in Air Using Fiber-Optic Fabry-Perot Interferometer.

    PubMed

    Rong, Qiangzhou; Hao, Yongxin; Zhou, Ruixiang; Yin, Xunli; Shao, Zhihua; Liang, Lei; Qiao, Xueguang

    2017-02-17

    A fiber-optic Fabry-Perot interferometer (FPI) has been proposed and demonstrated for ultrasound wave (UW) imaging of seismic physical models. The sensor probe comprises a single mode fiber (SMF) inserted into a ceramic tube terminated by an ultra-thin gold film. Thanks to the nanolayer gold film, the probe exhibits excellent UW sensitivity and is thus capable of detecting weak UWs in air. Furthermore, the compact sensor has a symmetrical structure and therefore good directionality in UW detection. A spectral side-band filter technique is used for UW interrogation. After scanning the models with the sensing probe in air, two-dimensional (2D) images of four physical models are reconstructed.

  1. Denoising Algorithm for CFA Image Sensors Considering Inter-Channel Correlation.

    PubMed

    Lee, Min Seok; Park, Sang Wook; Kang, Moon Gi

    2017-05-28

    In this paper, a spatio-spectral-temporal filter considering inter-channel correlation is proposed for the denoising of a color filter array (CFA) sequence acquired by CCD/CMOS image sensors. Owing to the alternating under-sampled grid of the CFA pattern, the inter-channel correlation must be considered in the direct denoising process. The proposed filter is applied in the spatial, spectral, and temporal domains, considering the spatio-spectral-temporal correlation. First, nonlocal means (NLM) spatial filtering with patch-based difference (PBD) refinement is performed, considering both the intra-channel and inter-channel correlation, to overcome the spatial resolution degradation caused by the alternating under-sampled pattern. Second, a motion-compensated temporal filter that employs inter-channel correlated motion estimation and compensation is proposed to remove noise in the temporal domain. A motion-adaptive detection value then controls the ratio between the spatial and temporal filters. The denoised CFA sequence can thus be obtained without motion artifacts. Experimental results for both simulated and real CFA sequences are presented with visual and numerical comparisons to several state-of-the-art denoising methods combined with a demosaicing method. The results confirm that the proposed framework outperforms the other techniques in terms of objective criteria and subjective visual perception in CFA sequences.
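
The final combination step can be illustrated with a toy per-pixel blend of the two denoised estimates; this is a simplification of the paper's motion-adaptive detection value, not its actual filter.

```python
import numpy as np

def motion_adaptive_blend(spatial_est, temporal_est, motion):
    """Per-pixel blend of the spatially and temporally denoised
    estimates. motion in [0, 1]: near 1 (moving content) trusts the
    spatial filter, near 0 (static content) trusts the
    motion-compensated temporal filter. A toy stand-in for the
    paper's motion-adaptive detection value."""
    m = np.clip(motion, 0.0, 1.0)
    return m * spatial_est + (1.0 - m) * temporal_est
```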

  2. Multi-color IR sensors based on QWIP technology for security and surveillance applications

    NASA Astrophysics Data System (ADS)

    Sundaram, Mani; Reisinger, Axel; Dennis, Richard; Patnaude, Kelly; Burrows, Douglas; Cook, Robert; Bundas, Jason

    2006-05-01

    Room-temperature targets are detected at the furthest distance by imaging them in the long wavelength (LW: 8-12 μm) infrared spectral band where they glow brightest. Focal plane arrays (FPAs) based on quantum well infrared photodetectors (QWIPs) have sensitivity, noise, and cost metrics that have enabled them to become the best commercial solution for certain security and surveillance applications. Recently, QWIP technology has advanced to provide pixel-registered dual-band imaging in both the midwave (MW: 3-5 μm) and longwave infrared spectral bands in a single chip. This elegant technology affords a degree of target discrimination as well as the ability to maximize detection range for hot targets (e.g. missile plumes) by imaging in the midwave and for room-temperature targets (e.g. humans, trucks) by imaging in the longwave with one simple camera. Detection-range calculations are illustrated and FPA performance is presented.

  3. Ultrafast Imaging using Spectral Resonance Modulation

    NASA Astrophysics Data System (ADS)

    Huang, Eric; Ma, Qian; Liu, Zhaowei

    2016-04-01

    CCD cameras are ubiquitous in research labs, industry, and hospitals for a huge variety of applications, but there are many dynamic processes in nature that unfold too quickly to be captured. Although tradeoffs can be made between exposure time, sensitivity, and area of interest, ultimately the speed limit of a CCD camera is constrained by the electronic readout rate of the sensors. One potential way to improve the imaging speed is with compressive sensing (CS), a technique that allows for a reduction in the number of measurements needed to record an image. However, most CS imaging methods require spatial light modulators (SLMs), which are subject to mechanical speed limitations. Here, we demonstrate an etalon array based SLM without any moving elements that is unconstrained by either mechanical or electronic speed limitations. This novel spectral resonance modulator (SRM) shows great potential in an ultrafast compressive single pixel camera.
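
The single-pixel compressive sensing idea can be sketched with random modulation patterns and an l1 reconstruction; the ISTA routine below is a generic sparse solver under assumed problem sizes, not the authors' method or hardware model.

```python
import numpy as np

def ista(A, y, lam, iters):
    """Iterative shrinkage-thresholding for the lasso problem
    min_x 0.5*||A x - y||^2 + lam*||x||_1 (a generic CS solver)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L      # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3                          # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 1.0
A = rng.standard_normal((m, n)) / np.sqrt(m) # random modulation patterns
y = A @ x_true                               # m "single-pixel" measurements
x_hat = ista(A, y, lam=0.01, iters=500)      # recover n values from m < n samples
```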

  4. Hybrid imaging: a quantum leap in scientific imaging

    NASA Astrophysics Data System (ADS)

    Atlas, Gene; Wadsworth, Mark V.

    2004-01-01

    ImagerLabs has advanced its patented next-generation imaging technology, called the Hybrid Imaging Technology (HIT), which offers scientific-quality performance. The key to HIT is the merging of CCD and CMOS technologies through hybridization rather than process integration. HIT offers the exceptional QE, fill factor, broad spectral response and very low noise of the CCD. In addition, it provides the very high-speed readout, low power, high linearity and high integration capability of CMOS sensors. In this work, we present the benefits and update the latest advances in the performance of this exciting technology.

  5. IR CMOS: near infrared enhanced digital imaging (Presentation Recording)

    NASA Astrophysics Data System (ADS)

    Pralle, Martin U.; Carey, James E.; Joy, Thomas; Vineis, Chris J.; Palsule, Chintamani

    2015-08-01

    SiOnyx has demonstrated imaging at light levels below 1 mLux (moonless starlight) at video frame rates with a 720P CMOS image sensor in a compact, low latency camera. Low light imaging is enabled by the combination of enhanced quantum efficiency in the near infrared together with state-of-the-art low noise image sensor design. The quantum efficiency enhancements are achieved by applying Black Silicon, SiOnyx's proprietary ultrafast laser semiconductor processing technology. In the near infrared, silicon's native indirect bandgap results in low absorption coefficients and long absorption lengths. The Black Silicon nanostructured layer fundamentally disrupts this paradigm by enhancing the absorption of light within a thin pixel layer, making 5 microns of silicon equivalent to over 300 microns of standard silicon. This results in a demonstrated 10-fold improvement in near infrared sensitivity over incumbent imaging technology while maintaining complete compatibility with standard CMOS image sensor process flows. Applications include surveillance, night vision, and 1064 nm laser see-spot. Imaging performance metrics will be discussed. Demonstrated performance characteristics: pixel size: 5.6 and 10 μm; array size: 720P/1.3 Mpix; frame rate: 60 Hz; read noise: 2 e-/pixel; spectral sensitivity: 400 to 1200 nm (with 10x QE at 1064 nm); daytime imaging: color (Bayer pattern); nighttime imaging: moonless starlight conditions; 1064 nm laser imaging: daytime imaging out to 2 km.

  6. Demonstration of the CDMA-mode CAOS smart camera.

    PubMed

    Riza, Nabeel A; Mazhar, Mohsin A

    2017-12-11

    Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled factor-of-200 optical attenuation of the scene irradiance to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, this CMOS sensor image data is used to acquire a more robust, un-attenuated true target image of a focused zone using the time-modulated CDMA-mode of the CAOS camera. Using four different bright light test target scenes, a proof-of-concept visible band CAOS smart camera is successfully demonstrated operating in the CDMA-mode using Walsh-design CAOS pixel codes up to 4096 bits in length with a maximum 10 kHz code bit rate, giving a 0.4096 s CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time-domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one square micro-mirror pixel, 13.68 μm on a side. The CDMA-mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for un-attenuated, un-spoiled bright light spectrally diverse targets.
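
The CDMA encode/decode principle behind the camera can be illustrated with Sylvester-Hadamard (Walsh-type) codes; note that real DMD hardware realizes the bipolar ±1 codes by differencing two binary exposures, which this idealized sketch glosses over.

```python
import numpy as np

def walsh_codes(n):
    """n x n Sylvester-Hadamard matrix (n a power of two); its rows
    act as the Walsh-type time codes assigned to the CAOS pixels."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n = 8
H = walsh_codes(n)
irradiance = np.array([3.0, 0.0, 1.0, 5.0, 2.0, 0.0, 4.0, 1.0])
# Each code bit in time: the single point detector records a code-weighted sum.
detector_signal = H @ irradiance
# Correlation (matched filtering) against each code isolates its pixel.
recovered = (H.T @ detector_signal) / n
```

Because the rows are mutually orthogonal, the correlation step recovers every pixel irradiance exactly in the noiseless case.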

  7. Scaling dimensions in spectroscopy of soil and vegetation

    NASA Astrophysics Data System (ADS)

    Malenovský, Zbyněk; Bartholomeus, Harm M.; Acerbi-Junior, Fausto W.; Schopfer, Jürg T.; Painter, Thomas H.; Epema, Gerrit F.; Bregt, Arnold K.

    2007-05-01

    The paper revises and clarifies definitions of the term scale and scaling conversions for imaging spectroscopy of soil and vegetation. We demonstrate a new four-dimensional scale concept that includes not only spatial but also the spectral, directional and temporal components. Three scaling remote sensing techniques are reviewed: (1) radiative transfer, (2) spectral (un)mixing, and (3) data fusion. Relevant case studies are given in the context of their up- and/or down-scaling abilities over the soil/vegetation surfaces and a multi-source approach is proposed for their integration. Radiative transfer (RT) models are described to show their capacity for spatial, spectral up-scaling, and directional down-scaling within a heterogeneous environment. Spectral information and spectral derivatives, like vegetation indices (e.g. TCARI/OSAVI), can be scaled and even tested by their means. Radiative transfer of an experimental Norway spruce ( Picea abies (L.) Karst.) research plot in the Czech Republic was simulated by the Discrete Anisotropic Radiative Transfer (DART) model to prove relevance of the correct object optical properties scaled up to image data at two different spatial resolutions. Interconnection of the successive modelling levels in vegetation is shown. A future development in measurement and simulation of the leaf directional spectral properties is discussed. We describe linear and/or non-linear spectral mixing techniques and unmixing methods that demonstrate spatial down-scaling. Relevance of proper selection or acquisition of the spectral endmembers using spectral libraries, field measurements, and pure pixels of the hyperspectral image is highlighted. An extensive list of advanced unmixing techniques, a particular example of unmixing a reflective optics system imaging spectrometer (ROSIS) image from Spain, and examples of other mixture applications give insight into the present status of scaling capabilities. 
Simultaneous spatial and temporal down-scaling by means of a data fusion technique is described. A demonstrative example is given for the moderate resolution imaging spectroradiometer (MODIS) and LANDSAT Thematic Mapper (TM) data from Brazil. Corresponding spectral bands of both sensors were fused via a pyramidal wavelet transform in Fourier space. New spectral and temporal information of the resultant image can be used for thematic classification or qualitative mapping. All three described scaling techniques can be integrated as the relevant methodological steps within a complex multi-source approach. We present this concept of combining numerous optical remote sensing data and methods to generate inputs for ecosystem process models.

  8. Crosstalk quantification, analysis, and trends in CMOS image sensors.

    PubMed

    Blockstein, Lior; Yadid-Pecht, Orly

    2010-08-20

    Pixel crosstalk (CTK) consists of three components: optical CTK (OCTK), electrical CTK (ECTK), and spectral CTK (SCTK). CTK can be classified into two groups: pixel-architecture dependent and pixel-architecture independent. The pixel-architecture-dependent CTK (PADC) is the sum of two components, the OCTK and the ECTK. This work presents a short summary of a large variety of methods for PADC reduction and then suggests a clear, quantifiable definition of PADC. Three complementary metal-oxide-semiconductor (CMOS) image sensors based on different technologies were empirically measured using a unique scanning technology, the S-cube. The PADC is analyzed, and technology trends are shown.

  9. A New Pansharpening Method Based on Spatial and Spectral Sparsity Priors.

    PubMed

    He, Xiyan; Condat, Laurent; Bioucas-Dias, Jose; Chanussot, Jocelyn; Xia, Junshi

    2014-06-27

    The development of multisensor systems in recent years has led to a great increase in the amount of available remote sensing data. Image fusion techniques aim at inferring high quality images of a given area from degraded versions of the same area obtained by multiple sensors. This paper focuses on pansharpening, which is the inference of a high spatial resolution multispectral image from two degraded versions with complementary spectral and spatial resolution characteristics: a) a low spatial resolution multispectral image; and b) a high spatial resolution panchromatic image. We introduce a new variational model based on spatial and spectral sparsity priors for the fusion. In the spectral domain we encourage low-rank structure, whereas in the spatial domain we promote sparsity on the local differences. Given that both panchromatic and multispectral images are integrations of the underlying continuous spectra using different channel responses, we propose appropriate regularizations based on both spatial and spectral links between the panchromatic and the fused multispectral images. A weighted version of the vector Total Variation (TV) norm of the data matrix is employed to align the spatial information of the fused image with that of the panchromatic image. With regard to spectral information, two different types of regularization are proposed to promote a soft constraint on the linear dependence between the panchromatic and the fused multispectral images. The first estimates the linear coefficients directly from the observed panchromatic and low resolution multispectral images by Linear Regression (LR), while the second employs Principal Component Pursuit (PCP) to obtain a robust recovery of the underlying low-rank structure. We also show that the two regularizers are strongly related. The basic idea of both is that the fused image should have low rank and preserve edge locations. 
We use a variation of the recently proposed Split Augmented Lagrangian Shrinkage (SALSA) algorithm to effectively solve the proposed variational formulations. Experimental results on simulated and real remote sensing images show the effectiveness of the proposed pansharpening method compared to the state-of-the-art.
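
The weighted vector TV regularizer that aligns the fused image with the panchromatic edges can be sketched as follows; the forward-difference discretization and the weight array are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def weighted_vector_tv(X, w):
    """Weighted vector Total Variation of a multiband image X (H, W, B):
    sum over pixels of w * sqrt(sum over bands of squared forward
    differences). The weights w (H, W) would be derived from the
    panchromatic image's gradients; here they are an arbitrary input."""
    dx = np.diff(X, axis=1, append=X[:, -1:, :])   # horizontal differences
    dy = np.diff(X, axis=0, append=X[-1:, :, :])   # vertical differences
    mag = np.sqrt((dx ** 2 + dy ** 2).sum(axis=2)) # joint magnitude over bands
    return float((w * mag).sum())
```

Coupling all bands inside one square root is what encourages edges to occur at the same locations in every band, matching the panchromatic edge map.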

  10. Degradation of CMOS image sensors in deep-submicron technology due to γ-irradiation

    NASA Astrophysics Data System (ADS)

    Rao, Padmakumar R.; Wang, Xinyang; Theuwissen, Albert J. P.

    2008-09-01

    In this work, radiation-induced damage mechanisms in deep-submicron technology are resolved using finger gated-diodes (FGDs) as a radiation-sensitive tool. These are found to be simple yet efficient structures for resolving radiation-induced damage in advanced CMOS processes. The degradation of CMOS image sensors in deep-submicron technology due to γ-ray irradiation is studied by developing a model for the spectral response of the sensor and by analyzing dark-signal degradation as a function of STI (shallow-trench isolation) parameters. It is found that threshold shifts at the gate-oxide/silicon interface as well as minority carrier lifetime variations in the silicon bulk are minimal. The top-layer material properties and the photodiode Si-SiO2 interface quality are degraded by γ-ray irradiation. The results further suggest that p-well passivated structures are indispensable for radiation-hard designs. It was found that high electric fields in submicron technologies pose a threat to high quality imaging in harsh environments.

  11. Atmospheric correction for JPSS-2 VIIRS response versus scan angle measurements

    NASA Astrophysics Data System (ADS)

    McIntire, Jeffrey; Moeller, Chris; Oudrari, Hassan; Xiong, Xiaoxiong

    2017-09-01

    The Joint Polar Satellite System 2 (JPSS-2) Visible Infrared Imaging Radiometer Suite (VIIRS) includes one spectral band centered in a strong atmospheric absorption region. As much of the pre-launch calibration is performed under laboratory ambient conditions, accurately accounting for the absorption, and thereby ensuring the transfer of the sensor calibration to on-orbit operations, is necessary to generate science quality data products. This work is focused on the response versus scan angle (RVS) measurements, which characterize the relative scan angle dependent reflectance of the JPSS-2 VIIRS instrument optics. The spectral band of interest, centered around 1378 nm, is within a spectral region strongly affected by water vapor absorption. The methodology used to model the absolute humidity and the atmospheric transmittance under the laboratory conditions is detailed. The application of this transmittance to the RVS determination is then described, including an uncertainty estimate; a comparison to the pre-launch measurements from earlier sensor builds is also performed.

  12. Preliminary Geologic/spectral Analysis of LANDSAT-4 Thematic Mapper Data, Wind River/bighorn Basin Area, Wyoming

    NASA Technical Reports Server (NTRS)

    Lang, H. R.; Conel, J. E.; Paylor, E. D.

    1984-01-01

    A LIDQA evaluation for geologic applications of a LANDSAT TM scene covering the Wind River/Bighorn Basin area, Wyoming, is examined. This involves a quantitative assessment of data quality, including spatial and spectral characteristics. Analysis is concentrated on the six visible, near infrared, and short wavelength infrared bands. Preliminary analysis demonstrates that principal component images derived from the correlation matrix provide the most useful geologic information. To extract surface spectral reflectance, the TM radiance data must be calibrated. Scatterplots demonstrate that TM data can be calibrated and that sensor response is essentially linear. Low instrumental offset and gain settings result in spectral data that do not utilize the full dynamic range of the TM system.
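
Principal-component images from the band correlation matrix, as used in this analysis, can be computed in a few lines; this is a generic sketch, not the original processing chain.

```python
import numpy as np

def principal_component_images(cube):
    """Principal-component images of a band stack (H, W, B), derived
    from the band correlation matrix (a generic sketch)."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize each band
    R = np.corrcoef(X, rowvar=False)           # B x B correlation matrix
    vals, vecs = np.linalg.eigh(R)             # eigenvalues, ascending
    order = np.argsort(vals)[::-1]             # sort descending
    pcs = Xs @ vecs[:, order]                  # project onto the components
    return pcs.reshape(H, W, B), vals[order]
```

Using the correlation matrix (rather than the covariance matrix) keeps bands with small dynamic range from being swamped, which matters when gain settings leave some bands under-utilized.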

  13. Evaluation of appropriate sensor specifications for space based ballistic missile detection

    NASA Astrophysics Data System (ADS)

    Schweitzer, Caroline; Stein, Karin; Wendelstein, Norbert

    2012-10-01

    The detection and tracking of ballistic missiles (BMs) during launch or cloud break using satellite-based electro-optical (EO) sensors is a promising possibility for pre-instructing early warning and fire control radars. However, successful detection of a BM depends on the infrared (IR) channel applied, as emission and reflection of threat and background vary across spectral (IR) bands and observation scenarios. In addition, the spatial resolution of the satellite-based system also conditions the signal-to-clutter ratio (SCR) and therefore the predictability of the flight path. Generally available satellite images provide data in spectral bands suitable for remote sensing applications and earth surface observation. In the field of BM early warning, however, these bands are not of interest, making the simulation of background data essential. The paper focuses on the analysis of IR bands suitable for missile detection by trading off the suppression of background signature against threat signal strength. This comprises a radiometric overview of the background radiation in different spectral bands for different climates and seasons as well as for various cloud types and covers. A brief investigation of the BM signature and its trajectory within a threat scenario is presented. Moreover, the influence on the SCR of different observation scenarios and varying spatial resolution is pointed out. The paper also introduces the software used for simulating natural background spectral radiance images, MATISSE ("Advanced Modeling of the Earth for Environment and Scenes Simulation") by ONERA [1].

  14. ROI-Based On-Board Compression for Hyperspectral Remote Sensing Images on GPU.

    PubMed

    Giordano, Rossella; Guccione, Pietro

    2017-05-19

    In recent years, hyperspectral sensors for Earth remote sensing have become very popular. Such systems are able to provide the user with images having both spectral and spatial information. The current hyperspectral spaceborne sensors are able to capture large areas with increased spatial and spectral resolution. For this reason, the volume of acquired data needs to be reduced on board in order to avoid a low orbital duty cycle due to limited storage space. Recently, the literature has focused on efficient ways of performing on-board data compression. This topic is a challenging task due to the difficult environment (outer space) and the limited time, power and computing resources. Often, the hardware properties of Graphic Processing Units (GPU) have been adopted to reduce the processing time using parallel computing. The current work proposes a framework for on-board operation on a GPU, using NVIDIA's CUDA (Compute Unified Device Architecture) architecture. The algorithm aims at performing on-board compression using a target-related strategy. In detail, the main operations are: the automatic recognition of land cover types or detection of events in near real time in regions of interest (a user-related choice) with an unsupervised classifier; the compression of specific regions with space-variant different bit rates, including Principal Component Analysis (PCA), wavelet and arithmetic coding; and data volume management to the Ground Station. Experiments are provided using a real dataset taken from an AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) airborne sensor in a harbor area.
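
The PCA stage of such spectral compression can be sketched as a truncated SVD of the unfolded cube; the wavelet and arithmetic-coding stages, and the GPU implementation, are omitted from this sketch.

```python
import numpy as np

def pca_compress(cube, k):
    """Keep the k leading spectral principal components of a
    hyperspectral cube (H, W, B) via truncated SVD; returns the
    reconstruction and the compressed per-pixel scores."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    comps = Vt[:k]                   # k x B spectral basis
    scores = (X - mu) @ comps.T      # (H*W) x k compressed representation
    recon = scores @ comps + mu
    return recon.reshape(H, W, B), scores
```

Storing `scores`, `comps` and `mu` instead of the full cube reduces the spectral dimension from B to k; a space-variant bit rate can then be obtained by varying k per region of interest.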

  15. Effects of spectrometer band pass, sampling, and signal-to-noise ratio on spectral identification using the Tetracorder algorithm

    USGS Publications Warehouse

    Swayze, G.A.; Clark, R.N.; Goetz, A.F.H.; Chrien, T.H.; Gorelick, N.S.

    2003-01-01

    Estimates of spectrometer band pass, sampling interval, and signal-to-noise ratio required for identification of pure minerals and plants were derived using reflectance spectra convolved to AVIRIS, HYDICE, MIVIS, VIMS, and other imaging spectrometers. For each spectral simulation, various levels of random noise were added to the reflectance spectra after convolution, and then each was analyzed with the Tetracorder spectra identification algorithm [Clark et al., 2003]. The outcome of each identification attempt was tabulated to provide an estimate of the signal-to-noise ratio at which a given percentage of the noisy spectra were identified correctly. Results show that spectral identification is most sensitive to the signal-to-noise ratio at narrow sampling interval values but is more sensitive to the sampling interval itself at broad sampling interval values because of spectral aliasing, a condition when absorption features of different materials can resemble one another. The band pass is less critical to spectral identification than the sampling interval or signal-to-noise ratio because broadening the band pass does not induce spectral aliasing. These conclusions are empirically corroborated by analysis of mineral maps of AVIRIS data collected at Cuprite, Nevada, between 1990 and 1995, a period during which the sensor signal-to-noise ratio increased up to sixfold. There are values of spectrometer sampling and band pass beyond which spectral identification of materials will require an abrupt increase in sensor signal-to-noise ratio due to the effects of spectral aliasing. Factors that control this threshold are the uniqueness of a material's diagnostic absorptions in terms of shape and wavelength isolation, and the spectral diversity of the materials found in nature and in the spectral library used for comparison. Array spectrometers provide the best data for identification when they critically sample spectra. 
The sampling interval should not be broadened to increase the signal-to-noise ratio in a photon-noise-limited system when high levels of accuracy are desired. It is possible, using this simulation method, to select optimum combinations of band-pass, sampling interval, and signal-to-noise ratio values for a particular application that maximize identification accuracy and minimize the volume of imaging data.
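
The simulation loop described above (convolve to a Gaussian band pass, resample, add noise, attempt identification) can be mimicked with a toy correlation-based identifier; this is not Tetracorder itself, which uses a more sophisticated feature-fitting comparison.

```python
import numpy as np

def simulate_and_identify(wl, spectrum, library, fwhm, interval, snr, rng):
    """Convolve a reflectance spectrum to a Gaussian band pass (fwhm),
    resample at the given interval, add noise for a target SNR, and
    return the best-matching (highest-correlation) library entry.
    A toy identifier, not the Tetracorder algorithm."""
    sigma = fwhm / 2.3548                      # FWHM -> Gaussian sigma
    centers = np.arange(wl[0], wl[-1] + 1e-9, interval)
    resp = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / sigma) ** 2)
    resp /= resp.sum(axis=1, keepdims=True)    # normalized band responses
    sampled = resp @ spectrum
    noisy = sampled + rng.standard_normal(sampled.size) * sampled.mean() / snr
    return max(library, key=lambda name: np.corrcoef(noisy, resp @ library[name])[0, 1])
```

Sweeping `fwhm`, `interval` and `snr` over many noise draws and tabulating the fraction of correct identifications reproduces, in miniature, the kind of trade study reported above.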

  16. Anti-Hermitian photodetector facilitating efficient subwavelength photon sorting.

    PubMed

    Kim, Soo Jin; Kang, Ju-Hyung; Mutlu, Mehmet; Park, Joonsuk; Park, Woosung; Goodson, Kenneth E; Sinclair, Robert; Fan, Shanhui; Kik, Pieter G; Brongersma, Mark L

    2018-01-22

    The ability to split an incident light beam into separate wavelength bands is central to a diverse set of optical applications, including imaging, biosensing, communication, photocatalysis, and photovoltaics. Entirely new opportunities are currently emerging with the recently demonstrated possibility to spectrally split light at a subwavelength scale with optical antennas. Unfortunately, such small structures offer limited spectral control and are hard to exploit in optoelectronic devices. Here, we overcome both challenges and demonstrate how within a single-layer metafilm one can laterally sort photons of different wavelengths below the free-space diffraction limit and extract a useful photocurrent. This chipscale demonstration of anti-Hermitian coupling between resonant photodetector elements also facilitates near-unity photon-sorting efficiencies, near-unity absorption, and a narrow spectral response (∼ 30 nm) for the different wavelength channels. This work opens up entirely new design paradigms for image sensors and energy harvesting systems in which the active elements both sort and detect photons.

  17. Using a trichromatic CCD camera for spectral skylight estimation.

    PubMed

    López-Alvarez, Miguel A; Hernández-Andrés, Javier; Romero, Javier; Olmo, F J; Cazorla, A; Alados-Arboledas, L

    2008-12-01

    In a previous work [J. Opt. Soc. Am. A 24, 942-956 (2007)] we showed how to design an optimum multispectral system aimed at spectral recovery of skylight. Since high-resolution multispectral images of skylight could be interesting for many scientific disciplines, here we also propose a nonoptimum but much cheaper and faster approach to achieve this goal by using a trichromatic RGB charge-coupled device (CCD) digital camera. The camera is attached to a fish-eye lens, hence permitting us to obtain a spectrum of every point of the skydome corresponding to each pixel of the image. In this work we show how to apply multispectral techniques to the sensors' responses of a common trichromatic camera in order to obtain skylight spectra from them. This spectral information is accurate enough to estimate experimental values of some climate parameters or to be used in algorithms for automatic cloud detection, among many other possible scientific applications.
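
The core of such trichromatic spectral estimation is a linear recovery operator learned from training spectra. A minimal sketch follows, assuming the camera's spectral sensitivities are known (in practice they must be measured); the function name is hypothetical.

```python
import numpy as np

def train_recovery_matrix(responses, train_spectra):
    """Least-squares spectral-recovery operator W such that
    spectrum ≈ W @ camera_response. responses: 3 x B camera spectral
    sensitivities (assumed known); train_spectra: B x N training set."""
    C = responses @ train_spectra            # 3 x N simulated camera responses
    W_T, *_ = np.linalg.lstsq(C.T, train_spectra.T, rcond=None)
    return W_T.T                             # B x 3 recovery matrix
```

Applying `W` to each pixel's RGB triple yields an estimated skylight spectrum; the quality depends on how well the training set spans the spectra actually encountered.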

  18. Multispectral interference filter arrays with compensation of angular dependence or extended spectral range.

    PubMed

    Frey, Laurent; Masarotto, Lilian; Armand, Marilyn; Charles, Marie-Lyne; Lartigue, Olivier

    2015-05-04

    Thin film Fabry-Perot filter arrays with high selectivity can be realized with a single patterning step, generating a spatial modulation of the effective refractive index in the optical cavity. In this paper, we investigate the ability of this technology to address two applications in the field of image sensors. First, the spectral tuning may be used to compensate the blue-shift of the filters in oblique incidence, provided the filter array is located in an image plane of an optical system with higher field of view than aperture angle. The technique is analyzed for various types of filters and experimental evidence is shown with copper-dielectric infrared filters. Then, we propose a design of a multispectral filter array with an extended spectral range spanning the visible and near-infrared range, using a single set of materials and realizable on a single substrate.
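The blue-shift being compensated follows the standard oblique-incidence behavior of a Fabry-Perot cavity, lambda(theta) = lambda0 * sqrt(1 - (sin(theta)/n_eff)^2). A quick numeric illustration; the 700 nm normal-incidence peak and n_eff = 1.8 are assumed values, not taken from the paper:

```python
import math

# First-order Fabry-Perot peak wavelength versus angle of incidence:
#   lambda(theta) = lambda0 * sqrt(1 - (sin(theta) / n_eff)**2)
# The 700 nm peak and n_eff = 1.8 are illustrative assumptions.
lam0 = 700.0   # normal-incidence peak (nm)
n_eff = 1.8    # effective cavity refractive index

for theta_deg in (0, 15, 30):
    theta = math.radians(theta_deg)
    lam = lam0 * math.sqrt(1.0 - (math.sin(theta) / n_eff) ** 2)
    print(f"{theta_deg:2d} deg -> peak near {lam:6.1f} nm")
```

The shift toward shorter wavelengths with angle is what the spectral tuning in the paper is arranged to cancel when the filter array sits in an image plane.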

  19. Interference data correction methods for lunar observation with a large-aperture static imaging spectrometer.

    PubMed

    Zhang, Geng; Wang, Shuang; Li, Libo; Hu, Xiuqing; Hu, Bingliang

    2016-11-01

    The lunar spectrum has been used in radiometric calibration and sensor stability monitoring for spaceborne optical sensors. A ground-based large-aperture static imaging spectrometer (LASIS) can be used to acquire lunar spectral images for lunar radiance model improvement as the moon drifts across its field of view. The lunar orbiting behavior is not consistent with the desired scanning speed and direction of LASIS. To correctly extract interferograms from the obtained data, a translation correction method based on image correlation is proposed. This method registers the frames to a reference frame to reduce accumulated errors. Furthermore, we propose a circle-matching-based approach to achieve even higher accuracy during observation of the full moon. To demonstrate the effectiveness of our approaches, experiments were run on real lunar observation data. The results show that the proposed approaches outperform state-of-the-art methods.
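The record does not give the exact correlation method used for frame registration; a minimal sketch of translation estimation by phase correlation, one common image-correlation approach, with a synthetic frame pair:

```python
import numpy as np

def phase_correlation_shift(ref, frame):
    """Return the integer (row, col) shift that, applied with np.roll,
    aligns `frame` with `ref` (peak of the normalized cross-power spectrum)."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    cross /= np.maximum(np.abs(cross), 1e-12)       # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak positions to signed shifts (wrap-around convention)
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))

# Synthetic check: a frame circularly shifted by (3, 5) should need (-3, -5)
# to roll back onto the reference.
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
frame = np.roll(ref, shift=(3, 5), axis=(0, 1))
print(phase_correlation_shift(ref, frame))  # (-3, -5)
```

Registering every frame against a single reference, as the abstract describes, avoids the error accumulation of chaining frame-to-frame estimates.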

  20. Improved classification accuracy of powdery mildew infection levels of wine grapes by spatial-spectral analysis of hyperspectral images.

    PubMed

    Knauer, Uwe; Matros, Andrea; Petrovic, Tijana; Zanker, Timothy; Scott, Eileen S; Seiffert, Udo

    2017-01-01

    Hyperspectral imaging is an emerging means of assessing plant vitality, stress parameters, nutrition status, and diseases. Extraction of target values from the high-dimensional datasets either relies on pixel-wise processing of the full spectral information, appropriate selection of individual bands, or calculation of spectral indices. Limitations of such approaches are reduced classification accuracy, reduced robustness due to spatial variation of the spectral information across the surface of the objects measured as well as a loss of information intrinsic to band selection and use of spectral indices. In this paper we present an improved spatial-spectral segmentation approach for the analysis of hyperspectral imaging data and its application for the prediction of powdery mildew infection levels (disease severity) of intact Chardonnay grape bunches shortly before veraison. Instead of calculating texture features (spatial features) for the huge number of spectral bands independently, dimensionality reduction by means of Linear Discriminant Analysis (LDA) was applied first to derive a few descriptive image bands. Subsequent classification was based on modified Random Forest classifiers and selective extraction of texture parameters from the integral image representation of the image bands generated. Dimensionality reduction, integral images, and the selective feature extraction led to improved classification accuracies of up to [Formula: see text] for detached berries used as a reference sample (training dataset). Our approach was validated by predicting infection levels for a sample of 30 intact bunches. Classification accuracy improved with the number of decision trees of the Random Forest classifier. These results corresponded with qPCR results. An accuracy of 0.87 was achieved in classification of healthy, infected, and severely diseased bunches. 
However, discrimination between visually healthy and infected bunches proved to be challenging for a few samples, perhaps due to colonized berries or sparse mycelia hidden within the bunch or airborne conidia on the berries that were detected by qPCR. An advanced approach to hyperspectral image classification based on combined spatial and spectral image features, potentially applicable to many available hyperspectral sensor technologies, has been developed and validated to improve the detection of powdery mildew infection levels of Chardonnay grape bunches. The spatial-spectral approach improved especially the detection of light infection levels compared with pixel-wise spectral data analysis. This approach is expected to improve the speed and accuracy of disease detection once the thresholds for fungal biomass detected by hyperspectral imaging are established; it can also facilitate monitoring in plant phenotyping of grapevine and additional crops.
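The selective texture extraction above relies on integral images (summed-area tables), which turn any rectangular window statistic into a four-term lookup regardless of window size. A minimal sketch with a hypothetical image band (values are illustrative only):

```python
import numpy as np

def integral_image(band):
    """Summed-area table with a zero row/column prepended, so any
    rectangular sum becomes a four-term lookup."""
    ii = np.zeros((band.shape[0] + 1, band.shape[1] + 1))
    ii[1:, 1:] = band.cumsum(axis=0).cumsum(axis=1)
    return ii

def window_mean(ii, r0, c0, r1, c1):
    """Mean of band[r0:r1, c0:c1] in O(1) using the integral image."""
    s = ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
    return s / ((r1 - r0) * (c1 - c0))

# Hypothetical LDA-derived image band (stand-in values)
band = np.arange(16, dtype=float).reshape(4, 4)
ii = integral_image(band)
print(window_mean(ii, 0, 0, 2, 2))  # mean of the top-left 2x2 block: 2.5
```

Because the table is built once per band, texture features for many windows can be read off cheaply, which is what makes computing them on a few LDA-reduced bands tractable.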

  1. Remote spectral measurements of the blood volume pulse with applications for imaging photoplethysmography

    NASA Astrophysics Data System (ADS)

    Blackford, Ethan B.; Estepp, Justin R.; McDuff, Daniel J.

    2018-02-01

    Imaging photoplethysmography uses camera image sensors to measure variations in light absorption related to the delivery of the blood volume pulse to peripheral tissues. The characteristics of the measured BVP waveform depend on the spectral absorption of various tissue components including melanin, hemoglobin, water, and yellow pigments. Signal quality and artifact rejection can be enhanced by taking into account the spectral properties of the BVP waveform and surrounding tissue. The current literature regarding the spectral relationships of remote PPG is limited. To supplement this fundamental data, we present an analysis of remotely measured, visible and near-infrared spectroscopy to better understand the spectral signature of remotely measured BVP signals. To do so, spectra were measured from the right cheek of 25 stationary participants whose heads were stabilized by a chinrest. A collimating lens was used to collect reflected light from a region 3 cm in diameter. The spectrometer provided 3 nm resolution measurements from 500 to 1000 nm. Measurements were acquired at a rate of 50 complete spectra per second for a period of five minutes. Reference physiology, including electrocardiography, was simultaneously and synchronously acquired. The spectral data were analyzed to determine the relationship between light wavelength and the resulting remote-BVP signal-to-noise ratio and to identify those bands best suited for pulse rate measurement. To our knowledge, this is the most comprehensive dataset of remotely measured spectral iPPG data. In due course, we plan to release this dataset for research purposes.
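A band-wise SNR of the kind analyzed here is commonly computed as pulse-band spectral power over out-of-band power within a physiological frequency range. A synthetic sketch; the band centers, amplitudes, noise level, and SNR definition below are illustrative assumptions, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 50.0                        # spectra per second, as in the abstract
t = np.arange(0, 60.0, 1 / fs)   # one minute of samples
pulse_hz = 1.2                   # assumed pulse rate (72 bpm)

# Synthetic per-band time series: a weak pulsatile component plus white noise.
bands = np.array([550.0, 660.0, 810.0, 940.0])   # nm (illustrative centers)
amps = np.array([0.05, 0.01, 0.02, 0.015])       # illustrative pulse amplitudes
signals = (amps[:, None] * np.sin(2 * np.pi * pulse_hz * t)
           + 0.01 * rng.standard_normal((bands.size, t.size)))

# SNR per band: spectral power near the pulse frequency divided by the
# power elsewhere in a 0.5-4 Hz physiological band.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(signals, axis=1)) ** 2
in_band = np.abs(freqs - pulse_hz) < 0.1
out_band = (freqs > 0.5) & (freqs < 4.0) & ~in_band
snr_db = 10 * np.log10(power[:, in_band].sum(axis=1)
                       / power[:, out_band].sum(axis=1))
print(bands[np.argmax(snr_db)])  # 550.0: the band given the largest amplitude
```

Ranking `snr_db` across bands is one way to identify wavelengths best suited for pulse rate measurement, which is the question the abstract's spectral analysis addresses.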

  2. Determining fast orientation changes of multi-spectral line cameras from the primary images

    NASA Astrophysics Data System (ADS)

    Wohlfeil, Jürgen

    2012-01-01

    Fast orientation changes of airborne and spaceborne line cameras cannot always be avoided. In such cases it is essential to measure them with high accuracy to ensure a good quality of the resulting imagery products. Several approaches exist to support the orientation measurement by using optical information received through the main objective/telescope. In this article an approach is proposed that allows the determination of non-systematic orientation changes between every captured line. It requires no additional camera hardware or onboard processing capabilities, only the payload images and a rough estimate of the camera's trajectory. The approach takes advantage of the typical geometry of multi-spectral line cameras, which have a set of linear sensor arrays for different spectral bands on the focal plane. First, homologous points are detected within the heavily distorted images of the different spectral bands. With their help, a connected network of geometrical correspondences can be built up. This network is used to calculate the orientation changes of the camera with the temporal and angular resolution of the camera. The approach was tested with an extensive set of aerial surveys covering a wide range of conditions and achieved precise and reliable results.

  3. Mapping canopy defoliation by herbivorous insects at the individual tree level using bi-temporal airborne imaging spectroscopy and LiDAR measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Ran; Dennison, Philip E.; Zhao, Feng

    Defoliation by herbivorous insects is a widespread forest disturbance driver, affecting global forest health and ecosystem dynamics. Additionally, compared with time- and labor-intensive field surveys, remote sensing provides the only realistic approach to mapping canopy defoliation by herbivorous insects over large spatial and temporal scales. However, the spectral and structural signatures of defoliation by insects at the individual tree level have not been well studied. Additionally, the predictive power of spectral and structural metrics for mapping canopy defoliation has seldom been compared. These critical knowledge gaps prevent us from consistently detecting and mapping canopy defoliation by herbivorous insects across multiple scales. During the peak of a gypsy moth outbreak in Long Island, New York in summer 2016, we leveraged bi-temporal airborne imaging spectroscopy (IS, i.e., hyperspectral imaging) and LiDAR measurements at 1 m spatial resolution to explore the spectral and structural signatures of canopy defoliation in a mixed oak-pine forest. We determined that red edge and near-infrared spectral regions within the IS data were most sensitive to crown-scale defoliation severity. LiDAR measurements including B70 (i.e., 70th bincentile height), intensity skewness, and kurtosis were effectively able to detect structural changes caused by herbivorous insects. In addition to canopy leaf loss, increased exposure of understory and non-photosynthetic materials contributed to the detected spectral and structural signatures. Comparing the ability of individual sensors to map canopy defoliation, the LiDAR-only Ordinary Least-Squares (OLS) model performed better than the IS-only model (Adj. R-squared = 0.77, RMSE = 15.37% vs. Adj. R-squared = 0.63, RMSE = 19.11%). The IS+LiDAR model improved on the performance of the individual sensors (Adj. R-squared = 0.81, RMSE = 14.46%). 
Our study improves our understanding of spectral and structural signatures of defoliation by herbivorous insects and presents a novel approach for mapping insect defoliation at the individual tree level. Furthermore, with the current and next generation of spaceborne sensors (e.g., WorldView-3, Landsat, Sentinel-2, HyspIRI, and GEDI), higher-accuracy and more frequent monitoring of insect defoliation may become feasible across a range of spatial scales, which is critical for ecological research and management of forest resources, including the economic consequences of forest insect infestations (e.g., reduced growth and increased mortality), as well as for informing and testing carbon cycle models.
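The reported Adj. R-squared and RMSE metrics come from ordinary least-squares regression. A self-contained sketch with synthetic stand-in predictors (not the study's data) showing how both metrics are computed:

```python
import numpy as np

def ols_fit(X, y):
    """Ordinary least squares with intercept; returns coefficients,
    adjusted R-squared, and RMSE (n samples, p predictors)."""
    n, p = X.shape
    A = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
    rmse = np.sqrt(ss_res / n)
    return beta, adj_r2, rmse

# Synthetic stand-in: 50 "trees" with two predictors (say, a spectral metric
# and a LiDAR height metric) explaining defoliation percentage with noise.
rng = np.random.default_rng(3)
X = rng.random((50, 2))
y = 40 * X[:, 0] + 30 * X[:, 1] + 5 * rng.standard_normal(50)
_, adj_r2, rmse = ols_fit(X, y)
print(round(adj_r2, 2), round(rmse, 2))
```

Stacking spectral and LiDAR metrics as extra columns of `X` mirrors the IS+LiDAR combination that improved on either sensor alone.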

  5. Understanding the spatial distribution of eroded areas in the former rural homelands of South Africa: Comparative evidence from two new non-commercial multispectral sensors

    NASA Astrophysics Data System (ADS)

    Sepuru, Terrence Koena; Dube, Timothy

    2018-07-01

    In this study, we determine the most suitable multispectral sensor for accurately detecting and mapping eroded areas among other land cover types in Sekhukhune rural district, Limpopo Province, South Africa. Specifically, the study tested the ability of multi-date (wet and dry season) Landsat 8 OLI and Sentinel-2 MSI images to detect and map eroded areas. The implementation was done using a robust non-parametric classifier: Discriminant Analysis (DA). Three sets of analysis were applied (Analysis 1: spectral bands as an independent dataset; Analysis 2: spectral vegetation indices as an independent dataset; and Analysis 3: combined spectral bands and spectral vegetation indices). Overall classification accuracies ranging from 80% to 81.90% for MSI and from 75.71% to 80.95% for OLI were derived for the wet and dry season, respectively. With the integration of spectral bands and spectral vegetation indices, Sentinel-2 (OA = 83.81%) performed slightly better than Landsat 8 (OA = 82.86%). The use of bands or vegetation indices as independent datasets produced slightly weaker results for both sensors. Sentinel-2 MSI bands located in the NIR (0.785-0.900 μm), red edge (0.698-0.785 μm) and SWIR (1.565-2.280 μm) regions were selected as the most optimal for discriminating degraded soils from other land cover types. However, for Landsat 8 OLI, only the SWIR (1.560-2.300 μm) and NIR (0.845-0.885 μm) regions were selected as the best. Of the eighteen spectral vegetation indices computed, NDVI, SAVI and the Global Environmental Monitoring Index (GEMI) were ranked as the most suitable for detecting and mapping soil erosion. Additionally, SRTM DEM-derived information shows that for both sensors eroded areas occur at sites between 600 m and 900 m in altitude, with similar trends observed in both dry- and wet-season maps. 
Findings of this work emphasize the importance of free and readily available new generation sensors in continuous landscape-scale soil erosion monitoring. Besides, such information can help to identify hotspots and potentially vulnerable areas, as well as aid in developing possible control and mitigation measures.
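NDVI and SAVI, two of the top-ranked indices above, are simple functions of red and NIR reflectance. A minimal sketch with illustrative reflectance values (not from the study):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index with soil-brightness factor L."""
    return (1 + L) * (nir - red) / (nir + red + L)

# Illustrative reflectances (0-1): a vegetated pixel vs. a bare/eroded pixel
nir = np.array([0.45, 0.30])
red = np.array([0.08, 0.25])
print(ndvi(nir, red))   # high for vegetation, near zero for bare soil
print(savi(nir, red))
```

The contrast between vegetated and bare-soil index values is what lets such indices separate eroded areas from other land cover in the classification.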

  6. Direct Detection Electron Energy-Loss Spectroscopy: A Method to Push the Limits of Resolution and Sensitivity.

    PubMed

    Hart, James L; Lang, Andrew C; Leff, Asher C; Longo, Paolo; Trevor, Colin; Twesten, Ray D; Taheri, Mitra L

    2017-08-15

    In many cases, electron counting with direct detection sensors offers improved resolution, lower noise, and higher pixel density compared to conventional, indirect detection sensors for electron microscopy applications. Direct detection technology has previously been utilized, with great success, for imaging and diffraction, but potential advantages for spectroscopy remain unexplored. Here we compare the performance of a direct detection sensor operated in counting mode and an indirect detection sensor (scintillator/fiber-optic/CCD) for electron energy-loss spectroscopy. Clear improvements in measured detective quantum efficiency and combined energy resolution/energy field-of-view are offered by counting mode direct detection, showing promise for efficient spectrum imaging, low-dose mapping of beam-sensitive specimens, trace element analysis, and time-resolved spectroscopy. Despite the limited counting rate imposed by the readout electronics, we show that both core-loss and low-loss spectral acquisition are practical. These developments will benefit biologists, chemists, physicists, and materials scientists alike.

  7. JPSS-1 VIIRS Pre-Launch Radiometric Performance

    NASA Technical Reports Server (NTRS)

    Oudrari, Hassan; McIntire, Jeff; Xiong, Xiaoxiong; Butler, James; Efremova, Boryana; Ji, Jack; Lee, Shihyan; Schwarting, Tom

    2015-01-01

    The Visible Infrared Imaging Radiometer Suite (VIIRS) on board the first Joint Polar Satellite System (JPSS) completed its sensor-level testing in December 2014. The JPSS-1 (J1) mission is scheduled to launch in December 2016 and will be very similar to the Suomi National Polar-orbiting Partnership (SNPP) mission. The VIIRS instrument was designed to provide measurements of the globe twice daily. It is a wide-swath (3,040 kilometers) cross-track scanning radiometer with spatial resolutions of 370 and 740 meters at nadir for imaging and moderate bands, respectively. It covers the wavelength spectrum from reflective to long-wave infrared through 22 spectral bands (0.412 to 12.01 microns). VIIRS observations are used to generate 22 environmental data records (EDRs). This paper briefly describes the J1 VIIRS characterization and calibration performance and the methodologies executed during the pre-launch testing phases by the independent government team to generate the at-launch baseline radiometric performance and the metrics needed to populate the sensor data record (SDR) Look-Up Tables (LUTs). This paper also provides an assessment of the sensor's pre-launch radiometric performance, including signal-to-noise ratios (SNRs), dynamic range, reflective and emissive band calibration performance, polarization sensitivity, band spectral performance, response-vs-scan (RVS), and near-field and stray light responses. A set of performance metrics generated during the pre-launch testing program is compared to the SNPP VIIRS pre-launch performance.

  8. Autonomous collection of dynamically-cued multi-sensor imagery

    NASA Astrophysics Data System (ADS)

    Daniel, Brian; Wilson, Michael L.; Edelberg, Jason; Jensen, Mark; Johnson, Troy; Anderson, Scott

    2011-05-01

    The availability of imagery simultaneously collected from sensors of disparate modalities enhances an image analyst's situational awareness and expands the overall detection capability to a larger array of target classes. Dynamic cooperation between sensors is increasingly important for the collection of coincident data from multiple sensors either on the same or on different platforms suitable for UAV deployment. Of particular interest is autonomous collaboration between wide area survey detection, high-resolution inspection, and RF sensors that span large segments of the electromagnetic spectrum. The Naval Research Laboratory (NRL) in conjunction with the Space Dynamics Laboratory (SDL) is building sensors with such networked communications capability and is conducting field tests to demonstrate the feasibility of collaborative sensor data collection and exploitation. Example survey / detection sensors include: NuSAR (NRL Unmanned SAR), a UAV compatible synthetic aperture radar system; microHSI, an NRL developed lightweight hyper-spectral imager; RASAR (Real-time Autonomous SAR), a lightweight podded synthetic aperture radar; and N-WAPSS-16 (Nighttime Wide-Area Persistent Surveillance Sensor-16Mpix), a MWIR large array gimbaled system. From these sensors, detected target cues are automatically sent to the NRL/SDL developed EyePod, a high-resolution, narrow FOV EO/IR sensor, for target inspection. In addition to this cooperative data collection, EyePod's real-time, autonomous target tracking capabilities will be demonstrated. Preliminary results and target analysis will be presented.

  9. Canopy reflectance related to marsh dieback onset and progression in Coastal Louisiana

    USGS Publications Warehouse

    Ramsey, Elijah W.; Rangoonwala, A.

    2006-01-01

    In this study, we extended previous work linking leaf spectral changes, dieback onset, and progression of Spartina alterniflora marshes to changes in site-specific canopy reflectance spectra. First, we obtained canopy reflectance spectra (approximately 20 m ground resolution) from the marsh sites occupied during the leaf spectral analyses and from additional sites exhibiting visual signs of dieback. Subsequently, the canopy spectra were analyzed at two spectral scales: the first scale corresponded to whole-spectra sensors, such as the NASA Earth Observing-1 (EO-1) Hyperion, and the second scale corresponded to broadband spectral sensors, such as the EO-1 Advanced Land Imager and the Landsat Enhanced Thematic Mapper. In the whole-spectra analysis, spectral indicators were generated from the whole canopy spectra (about 400 nm to 1,000 nm) by extracting typical dead and healthy marsh spectra, and subsequently using them to determine the percent composition of all canopy reflectance spectra. Percent compositions were then used to classify canopy spectra at each field site into groups exhibiting similar levels of dieback progression ranging from relatively healthy to completely dead. In the broadband reflectance analysis, blue, green, red, red-edge, and near infrared (NIR) spectral bands and NIR/green and NIR/red transforms were extracted from the canopy spectra. Spectral band and band transform indicators of marsh dieback and progression were generated by relating them to marsh status indicators derived from classifications of the 35 mm slides collected at the same time as the canopy reflectance recordings. The whole spectra and broadband spectral indicators were both able to distinguish (a) healthy marsh, (b) live marsh impacted by dieback, and (c) dead marsh, and they both provided some discrimination of dieback progression. Whole-spectra resolution sensors like the EO-1 Hyperion, however, offered an enhanced ability to categorize dieback progression. 
© 2006 American Society for Photogrammetry and Remote Sensing.
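The percent-composition step resembles linear spectral unmixing against dead and healthy endmember spectra. A minimal unconstrained least-squares sketch with synthetic spectra (the study's actual extraction procedure may differ):

```python
import numpy as np

rng = np.random.default_rng(4)
n_bands = 61   # e.g. ~400-1000 nm; all spectra here are synthetic stand-ins

# Hypothetical endmember spectra for healthy and dead marsh
healthy = rng.random(n_bands)
dead = rng.random(n_bands)
E = np.column_stack([healthy, dead])

# A mixed canopy spectrum: 70% healthy, 30% dead, plus a little noise
mixed = 0.7 * healthy + 0.3 * dead + 0.005 * rng.standard_normal(n_bands)

# Least-squares abundance estimate (unconstrained for brevity; a real
# analysis would typically enforce non-negativity and sum-to-one)
abund, *_ = np.linalg.lstsq(E, mixed, rcond=None)
print(np.round(abund, 2))  # close to [0.7, 0.3]
```

The recovered abundances are the "percent composition" used to bin each site's canopy spectra along the healthy-to-dead progression.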

  10. Evaluation of georeferencing methods with respect to their suitability to address dissimilarity between the image to be referenced and the reference image

    NASA Astrophysics Data System (ADS)

    Brüstle, Stefan; Erdnüß, Bastian

    2016-10-01

    In recent years, operational costs of unmanned aircraft systems (UAS) have been massively decreasing. New sensors satisfying weight and size restrictions of even small UAS cover many different spectral ranges and spatial resolutions. This has made airborne imagery more and more available. Such imagery is used to address many different tasks in various fields of application. For many of those tasks, not only the content of the imagery itself is of interest, but also its spatial location. This requires the imagery to be properly georeferenced. Many UAS have an integrated GPS receiver together with some kind of INS device acquiring the sensor orientation to provide the georeference. However, both GPS and INS data can easily become unavailable for a period of time during a flight, e.g. due to sensor malfunction, transmission problems or jamming. Imagery gathered during such times lacks georeference. Moreover, even in datasets not affected by such problems, GPS and INS inaccuracies together with a potentially poor knowledge of ground elevation can render location information accuracy less than sufficient for a given task. To provide or improve the georeference of an image affected by this, an image-to-reference registration can be performed if a suitable reference is available, e.g. a georeferenced orthophoto covering the area of the image to be georeferenced. Registration, and thus georeferencing, is achieved by determining a transformation between the image to be referenced and the reference which maximizes the coincidence of relevant structures present in both. Many methods have been developed to accomplish this task. Regardless of their differences, they usually tend to perform better the more similar the image and the reference are in appearance. 
This contribution evaluates a selection of such methods, all differing in the type of structure they use to assess coincidence, with respect to their ability to tolerate dissimilarity in appearance. Similarity in appearance depends mainly on the following aspects: the similarity of abstraction levels (Is the reference e.g. an orthophoto or a topographical map?), the similarity of sensor types and spectral bands (Is the image e.g. a SAR image and the reference a passively sensed one? Was e.g. a NIR sensor used to capture the image while a VIS sensor was used for the reference?), the similarity of resolutions (Is the ground sampling distance of the reference comparable to that of the image?), the similarity of capture parameters (Are e.g. the viewing angles comparable in the image and in the reference?) and the similarity of image content (Was there e.g. snow coverage present when the image was captured while this was not the case when the reference was captured?). The evaluation is done by determining the performance of each method on a set of image-and-reference pairs representing various degrees of dissimilarity with respect to each of the above-mentioned aspects of similarity.

  11. High spatial resolution LWIR hyperspectral sensor

    NASA Astrophysics Data System (ADS)

    Roberts, Carson B.; Bodkin, Andrew; Daly, James T.; Meola, Joseph

    2015-06-01

    Presented is a new hyperspectral imager design based on multiple slit scanning. This represents an innovation in the classic trade-off between speed and resolution. This LWIR design has been able to produce data-cubes at 3 times the rate of conventional single slit scan devices. The instrument has a built-in radiometric and spectral calibrator.

  12. Multispectral, Fluorescent and Photoplethysmographic Imaging for Remote Skin Assessment

    PubMed Central

    Spigulis, Janis

    2017-01-01

    Optical tissue imaging has several advantages over the routine clinical imaging methods, including non-invasiveness (it does not change the structure of tissues), remote operation (it avoids infections) and the ability to quantify the tissue condition by means of specific image parameters. Dermatologists and other skin experts need compact (preferably pocket-size), self-sustaining and easy-to-use imaging devices. The operational principles and designs of ten portable in-vivo skin imaging prototypes developed at the Biophotonics Laboratory of Institute of Atomic Physics and Spectroscopy, University of Latvia during the recent five years are presented in this paper. Four groups of imaging devices are considered. Multi-spectral imagers offer possibilities for distant mapping of specific skin parameters, thus facilitating better diagnostics of skin malformations. Autofluorescence intensity and photobleaching rate imagers show a promising potential for skin tumor identification and margin delineation. Photoplethysmography video-imagers ensure remote detection of cutaneous blood pulsations and can provide real-time information on cardiovascular parameters and anesthesia efficiency. Multimodal skin imagers perform several of the abovementioned functions by taking a number of spectral and video images with the same image sensor. Design details of the developed prototypes and results of clinical tests illustrating their functionality are presented and discussed. PMID:28534815

  13. Noise Power Spectrum Measurements in Digital Imaging With Gain Nonuniformity Correction.

    PubMed

    Kim, Dong Sik

    2016-08-01

    The noise power spectrum (NPS) of an image sensor provides the spectral noise properties needed to evaluate sensor performance. Hence, measuring an accurate NPS is important. However, the fixed pattern noise from the sensor's nonuniform gain inflates the NPS, which is measured from images acquired by the sensor. Detrending the low-frequency fixed pattern is traditionally used to accurately measure NPS. However, detrending methods cannot remove high-frequency fixed patterns. In order to efficiently correct the fixed pattern noise, a gain-correction technique based on the gain map can be used. The gain map is generated using the average of uniformly illuminated images without any objects. Increasing the number of images n for averaging can reduce the remaining photon noise in the gain map and yield accurate NPS values. However, for practical finite n, the photon noise also significantly inflates the NPS. In this paper, a nonuniform-gain image formation model is proposed and the performance of the gain correction is theoretically analyzed in terms of the signal-to-noise ratio (SNR). It is shown that the SNR is O(√n). An NPS measurement algorithm based on the gain map is then proposed for any given n. Under a weak nonuniform-gain assumption, another measurement algorithm based on the image difference is also proposed. For real radiography image detectors, the proposed algorithms are compared with traditional detrending and subtraction methods, and it is shown that as few as two images (n = 1) can provide an accurate NPS because of the compensation constant (1 + 1/n).
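The image-difference idea can be sketched as follows: subtracting two flat-field images cancels the fixed gain pattern, and the doubled noise variance of the difference is compensated before estimating the NPS. All numbers below are synthetic and the normalization is simplified; this is not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(5)
shape = (256, 256)

# Two synthetic flat-field exposures sharing one fixed nonuniform gain map
gain = 1.0 + 0.05 * rng.standard_normal(shape)
mean_signal = 1000.0
img1 = gain * (mean_signal + np.sqrt(mean_signal) * rng.standard_normal(shape))
img2 = gain * (mean_signal + np.sqrt(mean_signal) * rng.standard_normal(shape))

# Subtracting the two images cancels the fixed pattern; dividing by sqrt(2)
# compensates the doubled noise variance of a difference (the n = 1 case of
# the (1 + 1/n) compensation discussed in the abstract).
diff = (img1 - img2) / np.sqrt(2.0)

# 2-D NPS estimate from the single difference image (pixel pitch taken as 1)
centered = diff - diff.mean()
nps = np.abs(np.fft.fft2(centered)) ** 2 / diff.size

# By Parseval's theorem the mean NPS equals the noise variance; for white
# photon noise the spectrum is approximately flat.
print(round(nps.mean(), 1), round(diff.var(), 1))
```

With a gain-map approach instead, the fixed pattern would be divided out using an average of n flat fields, at the cost of the residual photon noise the paper's (1 + 1/n) analysis accounts for.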

  14. Imaging Grating Spectrometer (I-GRASP) for Solar Soft X-Ray Spectral Measurements in Critically Under-Observed 0.5 - 7 nm Spectral Range

    NASA Astrophysics Data System (ADS)

    Didkovsky, L. V.; Wieman, S. R.; Chao, W.; Woods, T. N.; Jones, A. R.; Thiemann, E.; Mason, J. P.

    2016-12-01

    We discuss the science and technology advantages of the Imaging Grating Spectrometer (I-GRASP), based on a novel transmission diffraction grating (TDG) made possible by technology for fabricating Fresnel zone plates (ZPs) developed at the Lawrence Berkeley National Laboratory (LBNL). Older-version TDGs with a 200 nm period, available in the 1990s, became a proven technology, providing 21 years of regular measurements of solar EUV irradiance. I-GRASP incorporates an advanced TDG with a grating period of 50 nm, providing four times better diffraction dispersion than the 200 nm period gratings used in the SOHO/CELIAS/SEM and SDO/EVE/ESP flight spectrophotometers and the EVE/SAM sounding rocket channel. This new TDG technology, combined with a back-illuminated 2000 x 1504 CMOS image sensor with 7 micron pixels, will provide spatially and spectrally resolved images and spectra from individual Active Regions (ARs) and solar flares with high (0.15 nm) spectral resolution. Such measurements are not available in the spectral band from about 2 to 6 nm from existing or planned spectrographs and will be important for studying AR and solar flare temperatures and dynamics, improving existing spectral models, e.g., CHIANTI, and better understanding processes in the Earth's atmosphere. To test this novel technology, we have proposed to the NASA LCAS program an I-GRASP version for a sounding rocket flight to raise the TDG TRL to a level appropriate for future CubeSat projects.
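The factor-of-four dispersion gain from shrinking the grating period from 200 nm to 50 nm follows from the grating equation; a quick numeric check at an assumed 5 nm soft X-ray wavelength:

```python
import math

# Grating equation, first order (m = 1): sin(theta) = m * lam / d.
# Compare the 200 nm period TDG with the proposed 50 nm period.
lam = 5.0   # assumed wavelength (nm)
for d in (200.0, 50.0):
    theta = math.asin(lam / d)            # diffraction angle
    disp = 1.0 / (d * math.cos(theta))    # angular dispersion d(theta)/d(lam), rad/nm
    print(f"d = {d:5.1f} nm: theta = {math.degrees(theta):5.2f} deg, "
          f"dispersion = {disp:.5f} rad/nm")
```

At small angles cos(theta) is close to 1, so the dispersion scales almost exactly as 1/d: about 0.005 rad/nm for the 200 nm grating versus about 0.020 rad/nm for the 50 nm grating.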

  15. Intra-cavity upconversion to 631 nm of images illuminated by an eye-safe ASE source at 1550 nm.

    PubMed

    Torregrosa, A J; Maestre, H; Capmany, J

    2015-11-15

    We report an image wavelength upconversion system. The system mixes an incoming image at around 1550 nm (the eye-safe region), illuminated by an amplified spontaneous emission (ASE) fiber source, with a Gaussian beam at 1064 nm generated in a continuous-wave diode-pumped Nd(3+):GdVO(4) laser. Mixing takes place in a periodically poled lithium niobate (PPLN) crystal placed intra-cavity. The upconverted image obtained by sum-frequency mixing falls around the 631 nm red spectral region, well within the spectral response of standard two-dimensional silicon focal-plane-array sensors, commonly used in charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) video cameras, and of most image intensifiers. Using ASE illumination yields a noticeable increase in the field of view (FOV) that can be upconverted compared with coherent laser illumination. The upconverted power allows us to capture real-time video with a standard non-intensified CCD camera.
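The 631 nm output wavelength follows from energy conservation in sum-frequency mixing; a quick check (illustrative helper name):

```python
def sum_frequency_wavelength(lam1_nm, lam2_nm):
    """Wavelength of the sum-frequency output: photon energy
    conservation gives 1/lam3 = 1/lam1 + 1/lam2 (same units)."""
    return 1.0 / (1.0 / lam1_nm + 1.0 / lam2_nm)

# 1550 nm signal mixed with a 1064 nm pump -> ~631 nm red output
lam_up = sum_frequency_wavelength(1550.0, 1064.0)
```

The result is about 630.9 nm, consistent with the 631 nm red spectral region quoted in the abstract.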

  16. Development of a fusion approach selection tool

    NASA Astrophysics Data System (ADS)

    Pohl, C.; Zeng, Y.

    2015-06-01

    During the last decades, the number and quality of remote sensing satellite sensors available for Earth observation have grown significantly. The amount of available multi-sensor imagery, along with its increased spatial and spectral resolution, presents new challenges to Earth scientists. With a Fusion Approach Selection Tool (FAST) the remote sensing community would obtain access to an optimized and improved image processing technology. Remote sensing image fusion is a means of producing images containing information that is not inherent in any single image alone. In the meantime, the user has access to sophisticated commercial image fusion techniques, plus the option to tune the parameters of each individual technique to match the anticipated application. This leaves the operator with a vast number of options for combining remote sensing images, not to mention the selection of appropriate images, resolutions and bands. Image fusion can be a machine- and time-intensive endeavour. In addition it requires knowledge about remote sensing, image fusion, digital image processing and the application. FAST shall provide the user with a quick overview of processing flows to choose from to reach the target. FAST will ask for available images, application parameters and desired information, and process this input to produce a workflow that quickly obtains the best results. It will optimize data and image fusion techniques. It provides an overview of the possible results from which the user can choose the best. FAST will enable even inexperienced users to use advanced processing methods to maximize the benefit of multi-sensor image exploitation.

  17. MWIR imaging spectrometer with digital time delay integration for remote sensing and characterization of solar system objects

    NASA Astrophysics Data System (ADS)

    Kendrick, Stephen E.; Harwit, Alex; Kaplan, Michael; Smythe, William D.

    2007-09-01

    An MWIR TDI (Time Delay and Integration) Imager and Spectrometer (MTIS) instrument is proposed for characterizing the moons of Jupiter and Saturn from orbit. Novel to this instrument are the planned implementation of a digital TDI detector array and an innovative imaging/spectroscopic architecture. Digital TDI enables a higher SNR for high-spatial-resolution surface mapping of Titan and Enceladus and for improved spectral discrimination and resolution at Europa. The MTIS imaging/spectroscopic architecture combines a high-spatial-resolution, coarse-wavelength-resolution imaging spectrometer with a hyperspectral sensor that spectrally decomposes a portion of the data adjacent to the data sampled in the imaging spectrometer. The MTIS instrument thus maps a planetary object with high spatial resolution while spectrally decomposing enough of the data that identification of the constituent materials is highly likely. Additionally, digital TDI systems can reject radiation-induced spikes in high-radiation environments (Europa) and can image at low light levels (Titan and Enceladus). The ability to image moving objects that might be missed using a conventional TDI system is an added advantage and is particularly important for characterizing atmospheric effects and separating atmospheric and surface components. This can be accomplished with on-orbit processing or by collecting and returning individual non-co-added frames.
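The spike-rejection advantage of digital TDI over analog charge-shifting TDI can be sketched as follows. This is a simplified illustration, not the MTIS design: frames are digitally re-aligned along the scan axis and combined with a median, which discards a radiation hit that a plain co-add would keep:

```python
import numpy as np

def digital_tdi(frames, shift_per_frame=1, robust=True):
    """Digitally co-add push-broom frames: each successive frame is
    shifted back along the scan axis so a ground point stays aligned,
    then the stack is combined.  A median combine (robust=True)
    rejects radiation-induced spikes that a plain average would keep."""
    aligned = [np.roll(f, -i * shift_per_frame, axis=0)
               for i, f in enumerate(frames)]
    stack = np.stack(aligned)
    return np.median(stack, axis=0) if robust else stack.mean(axis=0)

# A flat 10-count scene with one cosmic-ray hit in the third frame:
frames = [np.full((8, 4), 10.0) for _ in range(5)]
frames[2][3, 1] = 1000.0
clean = digital_tdi(frames)                  # median rejects the spike
naive = digital_tdi(frames, robust=False)    # mean keeps the spike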

  18. The Short Wave Aerostat-Mounted Imager (SWAMI): A novel platform for acquiring remotely sensed data from a tethered balloon

    USGS Publications Warehouse

    Vierling, L.A.; Fersdahl, M.; Chen, X.; Li, Z.; Zimmerman, P.

    2006-01-01

    We describe a new remote sensing system called the Short Wave Aerostat-Mounted Imager (SWAMI). The SWAMI is designed to acquire co-located video imagery and hyperspectral data to study basic remote sensing questions and to link landscape-level trace gas fluxes with spatially and temporally appropriate spectral observations. The SWAMI can fly at altitudes up to 2 km above ground level to bridge the spatial gap between radiometric measurements collected near the surface and those acquired by aircraft or satellites. The SWAMI platform consists of a dual channel hyperspectral spectroradiometer, video camera, GPS, thermal infrared sensor, and several meteorological and control sensors. All SWAMI functions (e.g. data acquisition and sensor pointing) can be controlled from the ground via wireless transmission. Sample data from the platform are presented, along with several potential scientific applications of SWAMI data.

  19. Limb Viewing Hyper Spectral Imager (LiVHySI) for airglow measurements onboard YOUTHSAT-1

    NASA Astrophysics Data System (ADS)

    Bisht, R. S.; Hait, A. K.; Babu, P. N.; Sarkar, S. S.; Benerji, A.; Biswas, A.; Saji, A. K.; Samudraiah, D. R. M.; Kirankumar, A. S.; Pant, T. K.; Parimalarangan, T.

    2014-08-01

    The Limb Viewing Hyper Spectral Imager (LiVHySI) is one of the Indian payloads onboard YOUTHSAT (inclination 98.73°, apogee 817 km), launched in April 2011. The hyperspectral imager has been operated in Earth's limb-viewing mode to measure airglow emissions in the spectral range 550-900 nm from the terrestrial upper atmosphere (i.e. 80 km altitude and above), with a line-of-sight range of about 3200 km. The altitude coverage is about 500 km, with a command-selectable lowest altitude. This imaging spectrometer employs a Linearly Variable Filter (LVF) to generate the spectrum and, as the detector, an Active Pixel Sensor (APS) area array of 256 × 512 pixels placed in close proximity to the LVF. The spectral sampling interval is 1.06 nm. The optics is an eight-element f/2 telecentric lens system with an 80 mm effective focal length. The detector is aligned with respect to the LVF such that its 512-pixel dimension covers the spectral range. The radiometric sensitivity of the imager is about 20 Rayleigh at the noise floor for a 10 s signal integration at 630 nm. The imager is operated during the eclipsed portion of the satellite orbits. The integration in the time/spatial domain can be chosen depending upon the season, solar and geomagnetic activity and/or the specific target area. This paper primarily describes LiVHySI, its in-orbit operations, the quality and potential of its data, and its first observations. The images reveal the thermospheric airglow at 630 nm to be the most prominent. These first LiVHySI observations, carried out on the night of 21 April 2011, are presented here, and the variability exhibited by the thermospheric nightglow at O(1D) 630 nm is described in detail.
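An LVF spectrometer of this kind maps detector position to wavelength through the filter gradient. The sketch below assumes an idealized, purely linear gradient spanning the stated 550-900 nm range over 512 pixels; the flight instrument's actual calibration (1.06 nm sampling) differs from this naive endpoint interpolation, so the function is illustrative only:

```python
def lvf_wavelength(pixel, lam_min=550.0, lam_max=900.0, n_pixels=512):
    """Centre wavelength (nm) sampled by a pixel along the LVF axis,
    for an ideal linear filter gradient spanning [lam_min, lam_max]
    over pixel indices 0 .. n_pixels-1."""
    step = (lam_max - lam_min) / (n_pixels - 1)
    return lam_min + pixel * step
```

With this mapping each detector column sees one narrow spectral band, so a single APS frame simultaneously records the limb scene at all 512 wavelengths.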

  20. Informed Source Separation of Atmospheric and Surface Signal Contributions in Shortwave Hyperspectral Imagery using Non-negative Matrix Factorization

    NASA Astrophysics Data System (ADS)

    Wright, L.; Coddington, O.; Pilewskie, P.

    2015-12-01

    Current challenges in Earth remote sensing require improved instrument spectral resolution, spectral coverage, and radiometric accuracy. Hyperspectral instruments, deployed on both aircraft and spacecraft, are a growing class of Earth observing sensors designed to meet these challenges. They collect large amounts of spectral data, allowing thorough characterization of both atmospheric and surface properties. The higher accuracy and increased spectral and spatial resolutions of new imagers require new numerical approaches for processing imagery and separating surface and atmospheric signals. One potential approach is source separation, which allows us to determine the underlying physical causes of observed changes. Improved signal separation will allow hyperspectral instruments to better address key science questions relevant to climate change, including land-use changes, trends in clouds and atmospheric water vapor, and aerosol characteristics. In this work, we investigate a Non-negative Matrix Factorization (NMF) method for the separation of atmospheric and land surface signal sources. NMF offers marked benefits over other commonly employed techniques, including non-negativity, which avoids physically impossible results, and adaptability, which allows the method to be tailored to hyperspectral source separation. We adapt our NMF algorithm to distinguish between contributions from different physically distinct sources by introducing constraints on spectral and spatial variability and by using library spectra to inform separation. We evaluate our NMF algorithm with simulated hyperspectral images as well as hyperspectral imagery from several instruments, including the NASA Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), the NASA Hyperspectral Imager for the Coastal Ocean (HICO), and the National Ecological Observatory Network (NEON) Imaging Spectrometer.
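The core NMF idea can be demonstrated with the standard Lee-Seung multiplicative updates on a toy pixels-by-bands matrix. This is a generic baseline, not the constrained, library-informed algorithm of the abstract; the non-negativity that makes NMF attractive for physical source separation is built into the updates:

```python
import numpy as np

rng = np.random.default_rng(1)

def nmf(V, k, n_iter=200, eps=1e-9):
    """Basic non-negative matrix factorization V ~ W @ H using the
    Lee-Seung multiplicative updates.  Non-negativity of W (abundances)
    and H (source spectra) keeps the recovered factors physically
    plausible, unlike e.g. PCA."""
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Two synthetic source spectra mixed across 100 pixels, 4 bands.
S = np.array([[1.0, 0.2, 0.0, 0.5],
              [0.0, 0.8, 1.0, 0.1]])      # 2 sources x 4 bands
A = rng.random((100, 2))                  # per-pixel abundances
V = A @ S                                 # observed pixels x bands
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

On exact rank-2 data the factorization reconstructs V closely while every entry of W and H stays non-negative.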

  1. Hyperspectral imaging with deformable gratings fabricated with metal-elastomer nanocomposites

    NASA Astrophysics Data System (ADS)

    Potenza, Marco A. C.; Nazzari, Daniele; Cremonesi, Llorenç; Denti, Ilaria; Milani, Paolo

    2017-11-01

    We report the fabrication and characterization of a simple and compact hyperspectral imaging setup based on a stretchable diffraction grating made with a metal-polymer nanocomposite. The nanocomposite is produced by implanting Ag clusters into a poly(dimethylsiloxane) film by supersonic cluster beam implantation. The deformable grating has curved grooves and is formed on a concave cylindrical surface, thus providing optical power in two orthogonal directions. Both diffractive and optical powers are obtained in reflection, realizing a diffractive-catoptric optical device, which makes it easier to minimize aberrations. We show that, despite the extended spectral range and the simplified optical scheme, it is possible to work with a traditional CCD sensor and achieve good spectral and spatial resolution.

  2. Nanotunneling Junction-based Hyperspectral Polarimetric Photodetector and Detection Method

    NASA Technical Reports Server (NTRS)

    Son, Kyung-ah (Inventor); Moon, Jeongsun J. (Inventor); Chattopadhyay, Goutam (Inventor); Liao, Anna (Inventor); Ting, David (Inventor)

    2009-01-01

    A photodetector, detector array, and method of operation thereof in which nanojunctions are formed by crossing layers of nanowires. The crossing nanowires are separated by an electrical barrier layer a few nm thick that allows tunneling. Each nanojunction is coupled to a slot antenna for efficient and frequency-selective coupling to photo signals. The nanojunction formed at the intersection of the crossing wires defines a vertical tunneling diode that rectifies the AC signal from a coupled antenna and generates a DC signal suitable for forming a video image. The nanojunction sensor allows multi/hyperspectral imaging of radiation within a spectral band ranging from terahertz to visible light, including infrared (IR) radiation. This new detection approach also offers unprecedented speed, sensitivity and fidelity at room temperature.

  3. Automated feature extraction and classification from image sources

    USGS Publications Warehouse

    ,

    1995-01-01

    The U.S. Department of the Interior, U.S. Geological Survey (USGS), and Unisys Corporation have completed a cooperative research and development agreement (CRADA) to explore automated feature extraction and classification from image sources. The CRADA helped the USGS define the spectral and spatial resolution characteristics of airborne and satellite imaging sensors necessary to meet base cartographic and land use and land cover feature classification requirements and help develop future automated geographic and cartographic data production capabilities. The USGS is seeking a new commercial partner to continue automated feature extraction and classification research and development.

  4. Statistical relative gain calculation for Landsat 8

    NASA Astrophysics Data System (ADS)

    Anderson, Cody; Helder, Dennis L.; Jeno, Drake

    2017-09-01

    The Landsat 8 Operational Land Imager (OLI) is an optical multispectral push-broom sensor with a focal plane consisting of over 7000 detectors per spectral band. Each of the individual imaging detectors contributes one column of pixels to an image. Any difference in the response between neighboring detectors may result in a visible stripe or band in the imagery. An accurate estimate of each detector's relative gain is needed to account for any differences between detector responses. This paper describes a procedure for estimating relative gains which uses normally acquired Earth viewing statistics.
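The statistics-based estimation described above can be sketched simply: because each detector always images the same column, averaging that column over many ordinary Earth scenes averages out scene content, and what remains is the detector's gain relative to the focal-plane mean. This is an illustrative simplification of the idea, not the OLI production algorithm (helper names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

def relative_gains(scenes):
    """Estimate per-detector relative gains for one push-broom band.
    `scenes` has shape (n_scenes, n_lines, n_detectors); each detector
    fills one image column.  Column means over many normally acquired
    Earth scenes, normalized by the focal-plane mean, approximate each
    detector's gain relative to the array average."""
    col_means = scenes.mean(axis=(0, 1))
    return col_means / col_means.mean()

# Simulate 50 scenes seen through 16 detectors with ~3% gain spread.
true_gain = 1.0 + 0.03 * rng.standard_normal(16)
scenes = rng.random((50, 500, 16)) * 100.0 * true_gain

g = relative_gains(scenes)
destriped = scenes[0] / g    # dividing each column by its gain removes striping
```

With enough scenes the column statistics converge, so the recovered gains match the simulated ones and the visible striping disappears after division.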

  5. An Airborne Conical Scanning Millimeter-Wave Imaging Radiometer (CoSMIR)

    NASA Technical Reports Server (NTRS)

    Piepmeier, J.; Racette, P.; Wang, J.; Crites, A.; Doiron, T.; Engler, C.; Lecha, J.; Powers, M.; Simon, E.; Triesky, M.; hide

    2001-01-01

    An airborne Conical Scanning Millimeter-wave Imaging Radiometer (CoSMIR) for high-altitude observations from the NASA Research Aircraft (ER-2) is discussed. The primary application of the CoSMIR is water vapor profile remote sensing. Four radiometers operating at 50 (three channels), 92, 150, and 183 (three channels) GHz provide spectral coverage identical to nine of the Special Sensor Microwave Imager/Sounder (SSMIS) high-frequency channels. Constant polarization-basis conical and cross-track scanning capabilities are achieved using an elevation-under-azimuth two-axis gimbals.

  6. Statistical relative gain calculation for Landsat 8

    USGS Publications Warehouse

    Anderson (CTR), Cody; Helder, Dennis; Jeno (CTR), Drake

    2017-01-01

    The Landsat 8 Operational Land Imager (OLI) is an optical multispectral push-broom sensor with a focal plane consisting of over 7000 detectors per spectral band. Each of the individual imaging detectors contributes one column of pixels to an image. Any difference in the response between neighboring detectors may result in a visible stripe or band in the imagery. An accurate estimate of each detector’s relative gain is needed to account for any differences between detector responses. This paper describes a procedure for estimating relative gains which uses normally acquired Earth viewing statistics.

  7. Ultrahigh sensitivity endoscopic camera using a new CMOS image sensor: providing with clear images under low illumination in addition to fluorescent images.

    PubMed

    Aoki, Hisae; Yamashita, Hiromasa; Mori, Toshiyuki; Fukuyo, Tsuneo; Chiba, Toshio

    2014-11-01

    We developed a new ultrahigh-sensitivity CMOS camera using a specific sensor that has a wide range of spectral sensitivity characteristics. The objective of this study is to present our updated endoscopic technology, which successfully integrates two innovative functions: ultrasensitive imaging and advanced fluorescent viewing. Two different experiments were conducted. One evaluated the function of the ultrahigh-sensitivity camera. The other tested the availability of the newly developed sensor and its performance as a fluorescence endoscope. In both studies, the distance from the endoscopic tip to the target was varied, and endoscopic images were taken at each setting for comparison. In the first experiment, the 3-CCD camera failed to display clear images under low illumination, and the target was hardly visible. In contrast, the CMOS camera was able to display the targets regardless of the camera-target distance under low illumination. Under high illumination, the image quality of the two cameras was similar. In the second experiment, as a fluorescence endoscope, the CMOS camera clearly showed the fluorescence-activated organs. The ultrahigh-sensitivity CMOS HD endoscopic camera is expected to provide clear images under low illumination, in addition to fluorescent images under high illumination, in the field of laparoscopic surgery.

  8. Assessment of the Short-Term Radiometric Stability between Terra MODIS and Landsat 7 ETM+ Sensors

    NASA Technical Reports Server (NTRS)

    Choi, Taeyoung; Xiong, Xiaxiong; Chander, G.; Angal, Amit

    2009-01-01

    The Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) sensor was launched on April 15, 1999 and has been in operation for over nine years. It has six reflective solar spectral bands located in the visible and shortwave infrared part of the electromagnetic spectrum (0.5 - 2.5 micron) at a spatial resolution of 30 m. The on-board calibrators are used to monitor on-orbit sensor system changes. The ETM+ performs solar calibrations using the on-board Full Aperture Solar Calibrator (FASC) and the Partial Aperture Solar Calibrator (PASC). The internal calibrator (IC) lamps, a blackbody, and shutter optics constitute the on-orbit calibration mechanism for ETM+. On 31 May 2003, a malfunction of the scan-line corrector (SLC) mirror assembly resulted in the loss of approximately 22% of the normal scene area. The missing data affect most of the image, with scan gaps varying in width from one pixel or less near the centre of the image to 14 pixels along the east and west edges, creating a wedge-shaped pattern. However, the SLC failure has no impact on the radiometric performance of the valid pixels. On December 18, 1999, the Moderate Resolution Imaging Spectroradiometer (MODIS) Proto-Flight Model (PFM) was launched on board NASA's EOS Terra spacecraft. Terra MODIS has 36 spectral bands with wavelengths ranging from 0.41 to 14.5 micron and collects data over a wide field-of-view angle (+/-55 deg) at three nadir spatial resolutions of 250 m, 500 m, and 1 km for bands 1 to 2, 3 to 7, and 8 to 36, respectively. It has 20 reflective solar bands (RSB) with spectral wavelengths from 0.41 to 2.1 micron. The RSB radiometric calibration is performed using the on-board solar diffuser (SD), solar diffuser stability monitor (SDSM), space-view (SV) port, and spectro-radiometric calibration assembly (SRCA). Through the SV port, periodic lunar observations are used to track radiometric response changes at different angles of incidence (AOI) of the scan mirror.
As a part of the AM Constellation satellites, Terra MODIS flies approximately 30 minutes behind L7 ETM+ in the same orbit. The orbit of L7 is repetitive, circular, sun-synchronous, and near-polar, at a nominal altitude of 705 km (438 miles) at the Equator. The spacecraft crosses the Equator from north to south on a descending node between 10:00 AM and 10:15 AM. Circling the Earth at 7.5 km/sec, each orbit takes nearly 99 minutes. The spacecraft completes just over 14 orbits per day, covering the entire Earth between 81 degrees north and south latitude every 16 days. The longest continuous imaging swath that the L7 sensor can collect is a 14-minute subinterval contact period, equivalent to 35 full WRS-2 scenes. Terra, on the other hand, can provide the entire corresponding orbit with a wider swath for any given ETM+ collection, without the contact-time limitation. There are six spectrally matching band pairs between the MODIS (bands 3, 4, 1, 2, 6, 7) and ETM+ (bands 1, 2, 3, 4, 5, 7) sensors. MODIS has narrower spectral responses than ETM+ in all the bands. Short-term radiometric stability was evaluated using continuous ETM+ scenes within the contact period and the corresponding half-orbit MODIS scenes. The near-simultaneous Earth observations (SNO) were limited by the smaller swath of ETM+ (187 km) compared to MODIS (2330 km). Two sets of continuous granules for MODIS and ETM+ were selected and mosaicked based on pixel geolocation information for non-cloudy pixels over the North American continent. The Top-of-Atmosphere (TOA) reflectances were computed for the spectrally matching bands between ETM+ and MODIS over the regions of interest (ROI). The matching pixel pairs were aggregated from a finer to a coarser pixel resolution, and the TOA reflectance values, covering a wide dynamic range of the sensors, were compared and analyzed. Considering the uncertainties of the absolute calibration of both sensors, radiometric stability was verified for the band pairs.
The Railroad Valley Playa, Nevada (RVPN) was included in the path of this continuous orbit, and served as a verification point between the short-term results and the long-term trending results from previous studies. This work focuses on monitoring the short-term on-orbit stability of the MODIS and ETM+ RSB. It also provides an assessment of the absolute calibration differences between the two sensors over their wide dynamic ranges.
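The TOA reflectance conversion used in comparisons of this kind follows the standard radiance-to-reflectance relation; a minimal sketch (the function name and the example numbers are illustrative, not values from the study):

```python
import math

def toa_reflectance(radiance, esun, sun_elev_deg, d_au=1.0):
    """Top-of-atmosphere (TOA) reflectance from at-sensor spectral
    radiance L (W m^-2 sr^-1 um^-1):
        rho = pi * L * d^2 / (ESUN * cos(theta_s))
    where ESUN is the band's mean exoatmospheric solar irradiance,
    d the Earth-Sun distance in AU, and theta_s the solar zenith
    angle (90 deg minus the solar elevation)."""
    theta_s = math.radians(90.0 - sun_elev_deg)
    return math.pi * radiance * d_au ** 2 / (esun * math.cos(theta_s))
```

Converting both sensors to TOA reflectance removes the solar-geometry and irradiance differences, so that after aggregating the 30 m ETM+ pixels to the coarser MODIS footprint the band pairs can be differenced directly.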

  9. Linear variable narrow bandpass optical filters in the far infrared (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Rahmlow, Thomas D.

    2017-06-01

    We are currently developing linear variable filters (LVF) with very high wavelength gradients. In the visible, these filters have a wavelength gradient of 50 to 100 nm/mm. In the infrared, the wavelength gradient covers the range of 500 to 900 microns/mm. Filter designs include band-pass, long-pass and ultra-high-performance anti-reflection coatings. The active area of the filters is on the order of 5 to 30 mm along the wavelength gradient and up to 30 mm in the orthogonal, constant-wavelength direction. Variation in performance along the constant direction is less than 1%. Repeatable performance from filter to filter, absolute placement of the filter relative to a substrate fiducial, and high in-band transmission across the full spectral band are demonstrated. Applications include order-sorting filters, direct replacement of the spectrometer, and hyperspectral imaging. Off-band rejection with an optical density greater than 3 allows use of the filter as an order-sorting filter. The linear variable order-sorting filter replaces other filter types such as block filters. The disadvantage of block filters is the loss of pixels in the transition between filter blocks; the LVF is a continuous gradient without a discrete transition between filter wavelength regions. If the LVF is designed as a narrow band-pass filter, it can be used in place of a spectrometer, reducing overall sensor weight and cost while improving the robustness of the sensor. By controlling the orthogonal performance (smile), the LVF can be sized to the dimensions of the detector. When imaging onto a two-dimensional array and operating the sensor in a push-broom configuration, the LVF spectrometer performs as a hyperspectral imager. This paper presents the performance of LVFs fabricated in the far infrared on substrates sized to available detectors. The impact of spot size, F-number and filter characterization are presented. Results are also compared to extended-visible LVF filters.

  10. Human perception testing methodology for evaluating EO/IR imaging systems

    NASA Astrophysics Data System (ADS)

    Graybeal, John J.; Monfort, Samuel S.; Du Bosq, Todd W.; Familoni, Babajide O.

    2018-04-01

    The U.S. Army's RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) Perception Lab is tasked with supporting the development of sensor systems for the U.S. Army by evaluating human performance of emerging technologies. Typical research questions involve detection, recognition and identification as a function of range, blur, noise, spectral band, image processing techniques, image characteristics, and human factors. NVESD's Perception Lab provides an essential bridge between the physics of the imaging systems and the performance of the human operator. In addition to quantifying sensor performance, perception test results can also be used to generate models of human performance and to drive future sensor requirements. The Perception Lab seeks to develop and employ scientifically valid and efficient perception testing procedures within the practical constraints of Army research, including rapid development timelines for critical technologies, unique guidelines for ethical testing of Army personnel, and limited resources. The purpose of this paper is to describe NVESD Perception Lab capabilities, recent methodological improvements designed to align our methodology more closely with scientific best practice, and to discuss goals for future improvements and expanded capabilities. Specifically, we discuss modifying our methodology to improve training, to account for human fatigue, to improve assessments of human performance, and to increase experimental design consultation provided by research psychologists. Ultimately, this paper outlines a template for assessing human perception and overall system performance related to EO/IR imaging systems.

  11. Imaging Sensor Development for Scattering Atmospheres.

    DTIC Science & Technology

    1983-03-01

    ...the subtracted output from a CCD imaging detector for a single frame can be written as [equation (2-22), a sum of shot noise, thermal noise, and dark-current shot noise terms]... In addition, the spectral responses of current devices are limited to the visible region and their sensitivities are not very high. Solid state detectors are generally much more sensitive than spatial light modulators, and some (e.g., HgCdTe detectors) can respond up to the 10 um region. Several

  12. The Physics of Imaging with Remote Sensors : Photon State Space & Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Davis, Anthony B.

    2012-01-01

    Standard (mono-pixel/steady-source) retrieval methodology is reaching its fundamental limit with access to multi-angle/multi-spectral photo-polarimetry. Next: two emerging new classes of retrieval algorithm worth nurturing, multi-pixel and time-domain wave-radiometry transition regimes, and more... Cross-fertilization with bio-medical imaging. Physics-based remote sensing: What is "photon state space"? What is "radiative transfer"? Is "the end" in sight? Two wide-open frontiers! Examples (with variations).

  13. Accommodating multiple illumination sources in an imaging colorimetry environment

    NASA Astrophysics Data System (ADS)

    Tobin, Kenneth W., Jr.; Goddard, James S., Jr.; Hunt, Martin A.; Hylton, Kathy W.; Karnowski, Thomas P.; Simpson, Marc L.; Richards, Roger K.; Treece, Dale A.

    2000-03-01

    Researchers at the Oak Ridge National Laboratory have been developing a method for measuring color quality in textile products using a tri-stimulus color camera system. Initial results of the Imaging Tristimulus Colorimeter (ITC) were reported during 1999. These results showed that the projection onto convex sets (POCS) approach to color estimation could be applied to complex printed patterns on textile products with high accuracy and repeatability. Image-based color sensors used for on-line measurement are not colorimetric by nature and require a non-linear transformation of the component colors based on the spectral properties of the incident illumination, imaging sensor, and the actual textile color. Our earlier work reports these results for a broad-band, smoothly varying D65 standard illuminant. To move the measurement to the on-line environment with continuously manufactured textile webs, the illumination source becomes problematic. The spectral content of these light sources varies substantially from the D65 standard illuminant and can greatly impact the measurement performance of the POCS system. Although absolute color measurements are difficult to make under different illumination, referential measurements to monitor color drift provide a useful indication of product quality. Modifications to the ITC system have been implemented to enable the study of different light sources. These results and the subsequent analysis of relative color measurements will be reported for textile products.
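The POCS technique underpinning the ITC color estimation can be illustrated generically: each physical constraint defines a convex set, and cyclically projecting onto each set converges to a point satisfying all of them. This toy sketch (hypothetical helper names, not the ITC implementation) intersects the non-negative orthant with an affine normalization constraint:

```python
import numpy as np

def pocs(x0, projections, n_iter=100):
    """Projection onto convex sets (POCS): cyclically apply each
    set's projection operator; when the intersection is non-empty the
    iterates converge to a point satisfying every constraint."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        for proj in projections:
            x = proj(x)
    return x

# Toy constraints: non-negative components, and components summing to 1.
proj_nonneg = lambda x: np.clip(x, 0.0, None)
proj_plane = lambda x: x + (1.0 - x.sum()) / x.size
x = pocs(np.array([2.0, -1.0, 0.5]), [proj_nonneg, proj_plane])
```

In the colorimetry setting, the constraint sets would instead encode the measured camera responses and the known spectral properties of the illuminant and sensor.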

  14. Inflight calibration of the modular airborne imaging spectrometer (MAIS) and its application to reflectance retrieval

    NASA Astrophysics Data System (ADS)

    Min, Xiangjun; Zhu, Yonghao

    1998-08-01

    An inflight experiment with the Modular Airborne Imaging Spectrometer (MAIS) and ground-based measurements using a GER MARK-V spectroradiometer, taken simultaneously with the MAIS overpass, were performed during autumn 1995 in the semiarid area of Inner Mongolia, China. Based on these measurements and MAIS image data, we designed a method for the radiometric calibration of the MAIS sensor using the 6S and LOWTRAN 7 codes. The results show that the uncertainty of the MAIS calibration is about 8% in the visible and near-infrared wavelengths (0.4 - 1.2 micrometer). To verify our calibration algorithm, the calibrated results of the MAIS sensor were used to derive ground reflectances. The accuracy of the reflectance retrieval is about 8.5% in the spectral range of 0.4 to 1.2 micrometer, i.e., the uncertainty of the derived near-nadir reflectances is within 0.01 - 0.05 in reflectance units for ground reflectances between 3% and 50%. The distinguishing feature of the ground-based measurements, to which special attention is paid in this paper, is that the reflectance factors of the calibration target, the atmospheric optical depth, and the water vapor abundance are all obtained simultaneously from the same set of measurement data with a single suite of instruments. The analysis indicates that the method presented here is suitable for the quantitative analysis of imaging spectral data in China.

  15. Commodity cluster and hardware-based massively parallel implementations of hyperspectral imaging algorithms

    NASA Astrophysics Data System (ADS)

    Plaza, Antonio; Chang, Chein-I.; Plaza, Javier; Valencia, David

    2006-05-01

    The incorporation of hyperspectral sensors aboard airborne/satellite platforms is currently producing a nearly continual stream of multidimensional image data, and this high data volume has introduced new processing challenges. The price paid for the wealth of spatial and spectral information available from hyperspectral sensors is the enormous amount of data that they generate. Several applications exist, however, in which having the desired information calculated quickly enough for practical use is highly desirable. High computing performance of algorithm analysis is particularly important in homeland defense and security applications, in which swift decisions often involve detection of (sub-pixel) military targets (including hostile weaponry, camouflage, concealment, and decoys) or chemical/biological agents. In order to speed up the computational performance of hyperspectral imaging algorithms, this paper develops several fast parallel data processing techniques covering four classes of algorithms: (1) unsupervised classification, (2) spectral unmixing, (3) automatic target recognition, and (4) onboard data compression. A massively parallel Beowulf cluster (Thunderhead) at NASA's Goddard Space Flight Center in Maryland is used to measure the parallel performance of the proposed algorithms. In order to explore the viability of developing onboard, real-time hyperspectral data compression algorithms, a Xilinx Virtex-II field programmable gate array (FPGA) is also used in experiments. Our quantitative and comparative assessment of parallel techniques and strategies may help image analysts select parallel hyperspectral algorithms for specific applications.
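The per-pixel kernels that dominate such workloads are embarrassingly parallel, which is why they map well to clusters and FPGAs. As one illustrative example (the standard spectral angle mapper, given here as a representative per-pixel target-detection kernel, not necessarily one of the paper's algorithms):

```python
import numpy as np

def spectral_angle(pixel, target):
    """Spectral angle (radians) between a pixel spectrum and a target
    spectrum.  Small angles flag likely target pixels, and the measure
    is insensitive to overall illumination scale."""
    c = np.dot(pixel, target) / (np.linalg.norm(pixel) * np.linalg.norm(target))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))
```

Because each pixel's angle is computed independently, the image cube can be split by rows across cluster nodes (or pipelined per pixel in hardware) with no inter-pixel communication.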

  16. Spectral and spatial resolution analysis of multi sensor satellite data for coral reef mapping: Tioman Island, Malaysia

    NASA Astrophysics Data System (ADS)

    Pradhan, Biswajeet; Kabiri, Keivan

    2012-07-01

    This paper describes an assessment of coral reef mapping using multi-sensor satellite images, namely Landsat ETM, SPOT, and IKONOS images, for Tioman Island, Malaysia. The study area is known as one of the best islands in South East Asia for its unique collection of diversified coral reefs and plays host to thousands of tourists every year. For coral reef identification, classification, and analysis, Landsat ETM, SPOT, and IKONOS images were collected, processed, and classified using hierarchical classification schemes. First, a decision tree classification method was implemented to separate three main land cover classes, i.e., water, rural, and vegetation, and then the maximum likelihood supervised classification method was used to classify these main classes. The accuracy of the classification result was evaluated with a separate test sample set, selected on the basis of the fieldwork survey and visual interpretation of the IKONOS image. Several types of ancillary data were used: (a) DGPS ground control points; (b) water quality parameters measured by a Hydrolab DS4a; (c) sea-bed substrate spectra measured by a Unispec; and (d) land cover observation photos along the Tioman Island coastal area. The overall accuracy of the final classification result was 92.25%, with a kappa coefficient of 0.8940. Key words: Coral reef, Multi-spectral Segmentation, Pixel-Based Classification, Decision Tree, Tioman Island
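    The reported overall accuracy and kappa coefficient are standard confusion-matrix statistics; a short sketch shows the computation (the 3-class matrix below is hypothetical, not the paper's data):

```python
def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    n = sum(sum(row) for row in confusion)
    p_o = sum(confusion[i][i] for i in range(len(confusion))) / n
    # Expected chance agreement from the row/column marginals.
    p_e = sum(sum(confusion[i]) * sum(r[i] for r in confusion)
              for i in range(len(confusion))) / (n * n)
    return p_o, (p_o - p_e) / (1 - p_e)

# Hypothetical 3-class matrix (water / rural / vegetation):
cm = [[50, 2, 1],
      [3, 40, 4],
      [1, 3, 46]]
oa, kappa = accuracy_and_kappa(cm)
```

    Kappa discounts the agreement expected by chance, which is why it is reported alongside overall accuracy for test sample sets with unbalanced class sizes.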

  17. The Focal Plane Assembly for the Athena X-Ray Integral Field Unit Instrument

    NASA Technical Reports Server (NTRS)

    Jackson, B. D.; Van Weers, H.; van der Kuur, J.; den Hartog, R.; Akamatsu, H.; Argan, A.; Bandler, S. R.; Barbera, M.; Barret, D.; Bruijn, M. P.; hide

    2016-01-01

    This paper summarizes a preliminary design concept for the focal plane assembly of the X-ray Integral Field Unit on the Athena spacecraft, an imaging microcalorimeter that will enable high spectral resolution imaging and point-source spectroscopy. The instrument's sensor array will be a 3840-pixel transition edge sensor (TES) microcalorimeter array, with a frequency domain multiplexed SQUID readout system allowing this large-format sensor array to be operated within the thermal constraints of the instrument's cryogenic system. A second TES detector will be operated in close proximity to the sensor array to detect cosmic rays and secondary particles passing through the sensor array, enabling off-line coincidence detection to identify and reject events caused by the in-orbit high-energy particle background. The detectors, operating at 55 mK or less, will be thermally isolated from the instrument cryostat's 2 K stage, while shielding and filtering within the FPA will protect the sensitive sensor array from stray light from the cryostat environment, low-energy photons entering through the X-ray aperture, low-frequency magnetic fields, and high-frequency electric fields during both on-ground testing and in-flight operation.

  18. Open architecture of smart sensor suites

    NASA Astrophysics Data System (ADS)

    Müller, Wilmuth; Kuwertz, Achim; Grönwall, Christina; Petersson, Henrik; Dekker, Rob; Reinert, Frank; Ditzel, Maarten

    2017-10-01

    Experiences from recent conflicts show the strong need for smart sensor suites comprising different multi-spectral imaging sensors as core elements as well as additional non-imaging sensors. Smart sensor suites should be part of a smart sensor network - a network of sensors, databases, evaluation stations and user terminals. Its goal is to optimize the use of various information sources for military operations such as situation assessment, intelligence, surveillance, reconnaissance, target recognition and tracking. Such a smart sensor network will enable commanders to achieve higher levels of situational awareness. Within the study at hand, an open system architecture was developed in order to increase the efficiency of sensor suites. The open system architecture for smart sensor suites, based on a system-of-systems approach, enables combining different sensors in multiple physical configurations, such as distributed sensors, co-located sensors combined in a single package, tower-mounted sensors, sensors integrated in a mobile platform, and trigger sensors. The architecture was derived from a set of system requirements and relevant scenarios. Its mode of operation is adaptable to a series of scenarios with respect to relevant objects of interest, activities to be observed, available transmission bandwidth, etc. The presented open architecture is designed in accordance with the NATO Architecture Framework (NAF). The architecture allows smart sensor suites to be part of a surveillance network, linked e.g. to a sensor planning system and a C4ISR center, and to be used in combination with future RPAS (Remotely Piloted Aircraft Systems) for supporting a more flexible dynamic configuration of RPAS payloads.

  19. BTDI detector technology for reconnaissance application

    NASA Astrophysics Data System (ADS)

    Hilbert, Stefan; Eckardt, Andreas; Krutz, David

    2017-11-01

    The Institute of Optical Sensor Systems (OS) at the Robotics and Mechatronics Center of the German Aerospace Center (DLR) has more than 30 years of experience with high-resolution imaging technology. This paper presents the institute's scientific results on leading-edge detector design in a BTDI (Bidirectional Time Delay and Integration) architecture. The project demonstrates a proven technological design for high- or multi-spectral resolution spaceborne instruments. DLR OS and BAE Systems have been driving the technology of new detectors and FPA designs for future projects, with new levels of manufacturing accuracy, in order to keep pace with ambitious scientific and user requirements. Driven by customer requirements and available technologies, the current generation of spaceborne sensor systems focuses on VIS/NIR high spectral resolution to meet the requirements of Earth and planetary observation systems. The combination of large swath and high spectral resolution with intelligent control applications and new focal plane concepts opens the door to new remote sensing and smart deep space instruments. The paper gives an overview of the detector development and verification program at DLR at the detector module level, covering key parameters such as SNR, linearity, spectral response, quantum efficiency, PRNU, DSNU, and MTF.

  20. Active spectral sensor evaluation under varying conditions

    USDA-ARS?s Scientific Manuscript database

    Plant stress has been estimated by spectral signature using both passive and active sensors. As optical sensors measure reflected light from a target, changes in illumination characteristics critically affect sensor response. Active sensors are of benefit in minimizing uncontrolled illumination effe...

  1. Quantum efficiency and dark current evaluation of a backside illuminated CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Vereecke, Bart; Cavaco, Celso; De Munck, Koen; Haspeslagh, Luc; Minoglou, Kyriaki; Moore, George; Sabuncuoglu, Deniz; Tack, Klaas; Wu, Bob; Osman, Haris

    2015-04-01

    We report on the development and characterization of monolithic backside illuminated (BSI) imagers at imec. Different surface passivations, anti-reflective coatings (ARCs), and anneal conditions were implemented, and their effects on dark current (DC) and quantum efficiency (QE) are analyzed. Two different single-layer ARC materials were developed, for visible light and near-UV applications, respectively. QE above 75% is measured over the entire visible spectral range from 400 to 700 nm. In the spectral range from 260 to 400 nm, QE values above 50% over the entire range are achieved. A new technique, high-pressure hydrogen anneal at 20 atm, was applied to the photodiodes, and a 30% improvement in DC was observed for the BSI imager with HfO2 as ARC as well as for the front-side imager. The entire BSI process was developed on 200 mm wafers and evaluated on test diode structures. The know-how was then transferred to real imager sensor arrays.
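    QE figures such as these relate measured photocurrent to incident optical power through the standard external quantum efficiency relation; a sketch with illustrative values (not imec's measurement procedure):

```python
def quantum_efficiency(photocurrent_A, optical_power_W, wavelength_m):
    """External QE: electrons collected per incident photon,
    QE = (I / q) / (P / (h * c / lambda))."""
    q = 1.602176634e-19      # elementary charge, C
    h = 6.62607015e-34       # Planck constant, J*s
    c = 2.99792458e8         # speed of light, m/s
    electrons_per_s = photocurrent_A / q
    photons_per_s = optical_power_W * wavelength_m / (h * c)
    return electrons_per_s / photons_per_s

# Illustrative: ~0.33 uA of photocurrent under 1 uW of 550 nm illumination.
qe = quantum_efficiency(photocurrent_A=3.3e-7,
                        optical_power_W=1e-6,
                        wavelength_m=550e-9)
```

    In practice the photocurrent is dark-current-corrected and the optical power is measured with a calibrated reference photodiode at each wavelength.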

  2. Developing handheld real time multispectral imager to clinically detect erythema in darkly pigmented skin

    NASA Astrophysics Data System (ADS)

    Kong, Linghua; Sprigle, Stephen; Yi, Dingrong; Wang, Fengtao; Wang, Chao; Liu, Fuhan

    2010-02-01

    Pressure ulcers have been identified as a public health concern by the US government through the Healthy People 2010 initiative and the National Quality Forum (NQF). Currently, no tools are available to assist clinicians in detecting erythema, i.e., early-stage pressure ulcers. The results from our previous research (supported by an NIH grant) indicate that erythema in different skin tones can be identified using a set of wavelengths at 540, 577, 650, and 970 nm. This paper reports our recent work on developing a handheld, point-of-care, clinically viable and affordable, real-time multispectral imager to detect erythema in persons with darkly pigmented skin. Instead of using traditional filters, e.g., filter wheels, generalized Lyot filters, or electrically tunable filters, or light-dispersing methods, e.g., acousto-optic crystals, a novel custom filter mosaic was designed and fabricated using lithography and vacuum multilayer film technologies. The filter has been integrated with CMOS and CCD sensors. The filter incorporates four or more different wavelengths within the visible to near-infrared range, each having a narrow bandwidth of 30 nm or less. Each single-wavelength area is 20.8 μm × 20.8 μm. The filter can be deposited on regular optical glass as a substrate or directly on a CMOS or CCD imaging sensor. This design permits a multispectral image to be acquired in a single exposure, thereby providing considerable convenience in multispectral image acquisition.

  3. Direct determination of surface albedos from satellite imagery

    NASA Technical Reports Server (NTRS)

    Mekler, Y.; Joseph, J. H.

    1983-01-01

    An empirical method to measure the spectral surface albedo of surfaces from Landsat imagery is presented and analyzed. The empiricism in the method is due only to the fact that three parameters of the solution must be determined for each spectral band of an image on the basis of independently known albedos at three points. The approach is otherwise based on exact solutions of the radiative transfer equation for upwelling intensity. Application of the method allows the routine construction of spectral albedo maps from satellite imagery, without requiring detailed knowledge of the atmospheric aerosol content (as long as the optical depth is less than 0.75) or of the calibration of the satellite sensor.
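    The three-parameter idea can be sketched under one plausible radiative-transfer-style form, L = a + b*A/(1 - c*A); the paper's exact parameterization may differ, so this is an assumed illustration. Three known (albedo, radiance) points fix (a, b, c) for a band, after which the relation is inverted pixel by pixel:

```python
def solve3(M, v):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    A = [row[:] + [rhs] for row, rhs in zip(M, v)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (A[i][3] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return x

def fit_band(albedos, radiances):
    """Fit L = a + b*A/(1 - c*A) through three (albedo, radiance) pairs.
    Rewriting gives L = a + d*A + c*A*L with d = b - a*c, linear in (a, d, c)."""
    M = [[1.0, A, A * L] for A, L in zip(albedos, radiances)]
    a, d, c = solve3(M, list(radiances))
    return a, d + a * c, c          # (a, b, c)

def retrieve_albedo(L, a, b, c):
    """Invert the fitted relation to map observed radiance back to albedo."""
    return (L - a) / (b + c * (L - a))
```

    The rational form is chosen here because multiple scattering between surface and atmosphere makes radiance a nonlinear, saturating function of albedo; the substitution d = b - a*c turns the fit into an exactly solvable linear system.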

  4. Adaptive optics two-photon excited fluorescence lifetime imaging ophthalmoscopy of exogenous fluorophores in mice.

    PubMed

    Feeks, James A; Hunter, Jennifer J

    2017-05-01

    In vivo cellular scale fluorescence lifetime imaging of the mouse retina has the potential to be a sensitive marker of retinal cell health. In this study, we demonstrate fluorescence lifetime imaging of extrinsic fluorophores using adaptive optics fluorescence lifetime imaging ophthalmoscopy (AOFLIO). We recorded AOFLIO images of inner retinal cells labeled with enhanced green fluorescent protein (EGFP) and capillaries labeled with fluorescein. We demonstrate that AOFLIO can be used to differentiate spectrally overlapping fluorophores in the retina. With further refinements, AOFLIO could be used to assess retinal health in early stages of degeneration by utilizing lifetime-based sensors or even fluorophores native to the retina.

  5. Satellite Ocean Color Sensor Design Concepts and Performance Requirements

    NASA Technical Reports Server (NTRS)

    McClain, Charles R.; Meister, Gerhard; Monosmith, Bryan

    2014-01-01

    In late 1978, the National Aeronautics and Space Administration (NASA) launched the Nimbus-7 satellite with the Coastal Zone Color Scanner (CZCS) and several other sensors, all of which provided major advances in Earth remote sensing. The inspiration for the CZCS is usually attributed to an article in Science by Clarke et al. who demonstrated that large changes in open ocean spectral reflectance are correlated to chlorophyll-a concentrations. Chlorophyll-a is the primary photosynthetic pigment in green plants (marine and terrestrial) and is used in estimating primary production, i.e., the amount of carbon fixed into organic matter during photosynthesis. Thus, accurate estimates of global and regional primary production are key to studies of the earth's carbon cycle. Because the investigators used an airborne radiometer, they were able to demonstrate the increased radiance contribution of the atmosphere with altitude that would be a major issue for spaceborne measurements. Since 1978, there has been much progress in satellite ocean color remote sensing such that the technique is well established and is used for climate change science and routine operational environmental monitoring. Also, the science objectives and accompanying methodologies have expanded and evolved through a succession of global missions, e.g., the Ocean Color and Temperature Sensor (OCTS), the Seaviewing Wide Field-of-view Sensor (SeaWiFS), the Moderate Resolution Imaging Spectroradiometer (MODIS), the Medium Resolution Imaging Spectrometer (MERIS), and the Global Imager (GLI). With each advance in science objectives, new and more stringent requirements for sensor capabilities (e.g., spectral coverage) and performance (e.g., signal-to-noise ratio, SNR) are established. The CZCS had four bands for chlorophyll and aerosol corrections. 
The Ocean Color Imager (OCI) recommended for the NASA Pre-Aerosol, Cloud, and Ocean Ecosystems (PACE) mission includes 5 nanometer hyperspectral coverage from 350 to 800 nanometers with three additional discrete near-infrared (NIR) and shortwave infrared (SWIR) ocean aerosol correction bands. Also, to avoid drift in sensor sensitivity being interpreted as environmental change, climate change research requires rigorous monitoring of sensor stability. For SeaWiFS, monthly lunar imaging tracked sensor stability to an accuracy of approximately 0.1%, which allowed the data to be used for climate studies [2]. It is now acknowledged by the international community that future missions and sensor designs need to accommodate lunar calibrations. An overview of ocean color remote sensing, a review of the progress made in the field, and the variety of research applications derived from global satellite ocean color data are provided. The purpose of this chapter is to discuss the design options for ocean color satellite radiometers, performance and testing criteria, and the sensor components (optics, detectors, electronics, etc.) that must be integrated into an instrument concept. These ultimately dictate the quality and quantity of data that can be delivered as a trade against mission cost. Historically, science and sensor technology have advanced in a "leap-frog" manner, in that sensor design requirements for a mission are defined many years before a sensor is launched, and by the end of the mission, perhaps 15-20 years later, science applications and requirements are well beyond the capabilities of the sensor. Section 3 provides a summary of historical mission science objectives and sensor requirements. This progression is expected to continue in the future as long as sensor costs can be constrained to affordable levels while still allowing the incorporation of new technologies without incurring unacceptable risk to mission success. 
The IOCCG Report Number 13 discusses future ocean biology mission Level-1 requirements in depth.

  6. Cross calibration of GF-1 satellite wide field of view sensor with Landsat 8 OLI and HJ-1A HSI

    NASA Astrophysics Data System (ADS)

    Liu, Li; Gao, Hailiang; Pan, Zhiqiang; Gu, Xingfa; Han, Qijin; Zhang, Xuewen

    2018-01-01

    This paper focuses on cross calibrating the GaoFen (GF-1) satellite wide field of view (WFV) sensor using the Landsat 8 Operational Land Imager (OLI) and HuanJing-1A (HJ-1A) hyperspectral imager (HSI) as reference sensors. Two methods are proposed to calculate the spectral band adjustment factor (SBAF). One is based on the HJ-1A HSI image and the other is based on ground-measured reflectance. However, the HSI image and ground-measured reflectance were measured at different dates, as the WFV and OLI imagers passed overhead. Three groups of regions of interest (ROIs) were chosen for cross calibration, based on different selection criteria. Cross-calibration gains with nonzero and zero offsets were both calculated. The results confirmed that the gains with zero offset were better, as they were more consistent over different groups of ROIs and SBAF calculation methods. The uncertainty of this cross calibration was analyzed, and the influence of SBAF was calculated based on different HSI images and ground reflectance spectra. The results showed that the uncertainty of SBAF was <3% for bands 1 to 3. Two other large uncertainties in this cross calibration were variation of atmosphere and low ground reflectance.
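    The two core computations named above, the spectral band adjustment factor and the cross-calibration gain with zero offset, can be sketched as follows (assuming band averaging on a uniform wavelength grid; a simplified illustration, not the authors' processing chain):

```python
def sbaf(hyper_refl, rsr_target, rsr_ref):
    """Spectral band adjustment factor from a hyperspectral reflectance
    spectrum: the ratio of the spectrum band-averaged over the target sensor's
    relative spectral response (RSR) to the same average over the reference
    sensor's RSR. All inputs sampled on one uniform wavelength grid."""
    def band_avg(rsr):
        return sum(r * s for r, s in zip(hyper_refl, rsr)) / sum(rsr)
    return band_avg(rsr_target) / band_avg(rsr_ref)

def zero_offset_gain(dn, toa_reflectance):
    """Least-squares calibration gain with the offset forced to zero:
    minimizes sum((rho - g * DN)^2) over matched ROI pairs."""
    return sum(d * r for d, r in zip(dn, toa_reflectance)) / sum(d * d for d in dn)
```

    Forcing the offset to zero removes one free parameter, which is consistent with the paper's finding that zero-offset gains were more stable across ROI groups and SBAF methods.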

  7. Backthinned TDI CCD image sensor design and performance for the Pleiades high resolution Earth observation satellites

    NASA Astrophysics Data System (ADS)

    Materne, A.; Bardoux, A.; Geoffray, H.; Tournier, T.; Kubik, P.; Morris, D.; Wallace, I.; Renard, C.

    2017-11-01

    The PLEIADES-HR Earth observing satellites, under CNES development, combine a 0.7m resolution panchromatic channel, and a multispectral channel allowing a 2.8 m resolution, in 4 spectral bands. The 2 satellites will be placed on a sun-synchronous orbit at an altitude of 695 km. The camera operates in push broom mode, providing images across a 20 km swath. This paper focuses on the specifications, design and performance of the TDI detectors developed by e2v technologies under CNES contract for the panchromatic channel. Design drivers, derived from the mission and satellite requirements, architecture of the sensor and measurement results for key performances of the first prototypes are presented.

  8. Optically based technique for producing merged spectra of water-leaving radiances from ocean color remote sensing.

    PubMed

    Mélin, Frédéric; Zibordi, Giuseppe

    2007-06-20

    An optically based technique is presented that produces merged spectra of normalized water-leaving radiances L(WN) by combining spectral data provided by independent satellite ocean color missions. The assessment of the merging technique is based on a four-year field data series collected by an autonomous above-water radiometer located on the Acqua Alta Oceanographic Tower in the Adriatic Sea. The uncertainties associated with the merged L(WN) obtained from the Sea-viewing Wide Field-of-view Sensor and the Moderate Resolution Imaging Spectroradiometer are consistent with the validation statistics of the individual sensor products. The merging including the third mission Medium Resolution Imaging Spectrometer is also addressed for a reduced ensemble of matchups.

  9. Thermal infrared panoramic imaging sensor

    NASA Astrophysics Data System (ADS)

    Gutin, Mikhail; Tsui, Eddy K.; Gutin, Olga; Wang, Xu-Ming; Gutin, Alexey

    2006-05-01

    Panoramic cameras offer true real-time, 360-degree coverage of the surrounding area, valuable for a variety of defense and security applications, including force protection, asset protection, asset control, security including port security, perimeter security, video surveillance, border control, airport security, coastguard operations, search and rescue, intrusion detection, and many others. Automatic detection, location, and tracking of targets outside protected area ensures maximum protection and at the same time reduces the workload on personnel, increases reliability and confidence of target detection, and enables both man-in-the-loop and fully automated system operation. Thermal imaging provides the benefits of all-weather, 24-hour day/night operation with no downtime. In addition, thermal signatures of different target types facilitate better classification, beyond the limits set by camera's spatial resolution. The useful range of catadioptric panoramic cameras is affected by their limited resolution. In many existing systems the resolution is optics-limited. Reflectors customarily used in catadioptric imagers introduce aberrations that may become significant at large camera apertures, such as required in low-light and thermal imaging. Advantages of panoramic imagers with high image resolution include increased area coverage with fewer cameras, instantaneous full horizon detection, location and tracking of multiple targets simultaneously, extended range, and others. The Automatic Panoramic Thermal Integrated Sensor (APTIS), being jointly developed by Applied Science Innovative, Inc. (ASI) and the Armament Research, Development and Engineering Center (ARDEC) combines the strengths of improved, high-resolution panoramic optics with thermal imaging in the 8 - 14 micron spectral range, leveraged by intelligent video processing for automated detection, location, and tracking of moving targets. 
The work in progress supports the Future Combat Systems (FCS) and the Intelligent Munitions Systems (IMS). The APTIS is anticipated to operate as an intelligent node in a wireless network of multifunctional nodes that work together to serve a wide range of homeland security applications, as well as to serve the Army in tasks of improved situational awareness (SA) in defensive and offensive operations, and as a sensor node in tactical Intelligence, Surveillance, and Reconnaissance (ISR). The novel ViperView™ high-resolution panoramic thermal imager is the heart of the APTIS system. It features an aberration-corrected omnidirectional imager with small optics designed to match the resolution of a 640×480-pixel IR camera, with improved image quality for longer-range target detection, classification, and tracking. The same approach is applicable to panoramic cameras working in the visible spectral range. Other components of the APTIS system include network communications, advanced power management, and wakeup capability. Recent developments include image processing, an optical design extended into the visible spectral range, and wireless communications design. This paper describes the development status of the APTIS system.

  10. Optical sensors and multiplexing for aircraft engine control

    NASA Astrophysics Data System (ADS)

    Berkcan, Ertugrul

    1993-02-01

    Time division multiplexing of spectral modulation fiber optic sensors for aircraft engine control is presented. The paper addresses the architectural properties, the accuracy, the benefits and problems of different type of sources, the spectral stability and update times using these sources, the size, weight, and power issues, and finally the technology needs regarding FADEC mountability. The fiber optic sensors include temperature, pressure, and position spectral modulation sensors.

  11. Comparison of multi- and hyperspectral imaging data of leaf rust infected wheat plants

    NASA Astrophysics Data System (ADS)

    Franke, Jonas; Menz, Gunter; Oerke, Erich-Christian; Rascher, Uwe

    2005-10-01

    In the context of precision agriculture, several recent studies have focused on detecting crop stress caused by pathogenic fungi. For this purpose, several sensor systems have been used to develop in-field detection systems or to test possible applications of remote sensing. The objective of this research was to evaluate the potential of different sensor systems for multitemporal monitoring of leaf rust (Puccinia recondita) infected wheat crops, with the aim of early detection of infected stands. A comparison between a hyperspectral (120 spectral bands) and a multispectral (3 spectral bands) imaging system shows the benefits and limitations of each approach. Reflectance data of leaf rust infected and fungicide-treated control wheat stand boxes (1 sq m each) were collected before and until 17 days after inoculation. Plants were grown under controlled conditions in the greenhouse, and measurements were taken under consistent illumination conditions. The results of mixture tuned matched filtering analysis showed the suitability of hyperspectral data for early discrimination of leaf rust infected wheat crops due to their higher spectral sensitivity. Five days after inoculation, leaf rust infected leaves were detected, although only slight visual symptoms had appeared. A clear discrimination between infected and control stands was possible. Multispectral data showed a higher sensitivity to external factors such as illumination conditions, causing poor classification accuracy. Nevertheless, if these factors could be controlled, even multispectral data may serve as a good indicator of infection severity.

  12. A spectral-structural bag-of-features scene classifier for very high spatial resolution remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhao, Bei; Zhong, Yanfei; Zhang, Liangpei

    2016-06-01

    Land-use classification of very high spatial resolution remote sensing (VHSR) imagery is one of the most challenging tasks in the field of remote sensing image processing. However, land-use classification is hard to address with land-cover classification techniques, due to the complexity of land-use scenes. Scene classification is considered to be one of the most promising ways to address the land-use classification issue. The commonly used scene classification methods for VHSR imagery are all derived from the computer vision community, which mainly deals with terrestrial image recognition. Differing from terrestrial images, VHSR images are taken looking down with airborne and spaceborne sensors, which leads to distinct light conditions and spatial configurations of land cover in VHSR imagery. Considering these distinct characteristics, two questions should be answered: (1) Which type or combination of information is suitable for VHSR imagery scene classification? (2) Which scene classification algorithm is best for VHSR imagery? In this paper, an efficient spectral-structural bag-of-features scene classifier (SSBFC) is proposed to combine the spectral and structural information of VHSR imagery. SSBFC utilizes the first- and second-order statistics (the mean and standard deviation values, MeanStd) as the statistical spectral descriptor for the spectral information of the VHSR imagery, and uses dense scale-invariant feature transform (SIFT) as the structural feature descriptor. The experimental results show that the spectral information works better than the structural information, while the combination of the spectral and structural information is better than either single type of information. Taking the characteristics of the spatial configuration into consideration, SSBFC uses the whole image scene as the scope of the pooling operator, instead of the scope generated by a spatial pyramid (SP) commonly used in terrestrial image classification. 
The experimental results show that the whole image as the scope of the pooling operator performs better than the scope generated by SP. In addition, SSBFC codes and pools the spectral and structural features separately to avoid mutual interruption between the spectral and structural features. The coding vectors of spectral and structural features are then concatenated into a final coding vector. Finally, SSBFC classifies the final coding vector by support vector machine (SVM) with a histogram intersection kernel (HIK). Compared with the latest scene classification methods, the experimental results with three VHSR datasets demonstrate that the proposed SSBFC performs better than the other classification methods for VHSR image scenes.

  13. ChromAIX2: A large area, high count-rate energy-resolving photon counting ASIC for a Spectral CT Prototype

    NASA Astrophysics Data System (ADS)

    Steadman, Roger; Herrmann, Christoph; Livne, Amir

    2017-08-01

    Spectral CT based on energy-resolving photon counting detectors is expected to deliver additional diagnostic value at a lower dose than current state-of-the-art CT [1]. The capability of simultaneously providing a number of spectrally distinct measurements not only allows distinguishing between photo-electric and Compton interactions but also discriminating contrast agents that exhibit a K-edge discontinuity in the absorption spectrum, referred to as K-edge imaging [2]. Such detectors are based on direct-converting sensors (e.g. CdTe or CdZnTe) and high-rate photon counting electronics. To support the development of Spectral CT and show the feasibility of obtaining rates exceeding 10 Mcps/pixel (Poissonian observed count-rate), the ChromAIX ASIC has previously been reported, showing 13.5 Mcps/pixel (150 Mcps/mm2 incident) [3]. The ChromAIX has been improved to allow large-area detector coverage and increased overall performance. The new ASIC, called ChromAIX2, delivers count-rates exceeding 15 Mcps/pixel with an rms noise performance of approximately 260 e-. It has an isotropic pixel pitch of 500 μm in an array of 22×32 pixels and is tileable on three of its sides. The pixel topology consists of a two-stage amplifier (CSA and shaper) and a number of test features that allow the ASIC to be thoroughly characterized without a sensor. A total of 5 independent thresholds are available within each pixel, allowing 5 spectrally distinct measurements to be acquired simultaneously. The ASIC also incorporates a baseline restorer to eliminate excess currents induced by the sensor (e.g. dark current and low-frequency drifts) which would otherwise cause an energy estimation error. In this paper we report on the inherent electrical performance of the ChromAIX2 as well as measurements obtained with CZT (CdZnTe)/CdTe sensors and X-ray and radioactive sources.
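    The gap between incident and observed (Poissonian) count rates in photon counting detectors is commonly described with a paralyzable dead-time model; the sketch below uses that generic model with an assumed dead time, not ChromAIX2's actual measured response:

```python
import math

def observed_rate(incident_rate, dead_time):
    """Paralyzable dead-time model for pulse pile-up:
        observed = incident * exp(-incident * dead_time)
    A generic photon-counting model with an assumed dead time; the observed
    rate peaks at 1 / (dead_time * e) and then falls as flux keeps rising."""
    return incident_rate * math.exp(-incident_rate * dead_time)

# Illustrative: 30 Mcps incident on a pixel with an assumed 20 ns dead time.
m = observed_rate(incident_rate=30e6, dead_time=20e-9)
```

    The non-monotonic roll-over is why high-rate ASICs are characterized in terms of the maximum observed count rate rather than incident flux alone.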

  14. Resolution-enhanced Mapping Spectrometer

    NASA Technical Reports Server (NTRS)

    Kumer, J. B.; Aubrun, J. N.; Rosenberg, W. J.; Roche, A. E.

    1993-01-01

    A familiar mapping spectrometer implementation utilizes two-dimensional detector arrays with spectral dispersion along one direction and spatial sampling along the other. Spectral images are formed by spatially scanning across the scene (i.e., push-broom scanning). For imaging grating and prism spectrometers, the slit is perpendicular to the spatial scan direction. For spectrometers utilizing linearly variable focal-plane-mounted filters, the spatial scan direction is perpendicular to the direction of spectral variation. These spectrometers share the common limitation that the number of spectral resolution elements is given by the number of pixels along the spectral (or dispersive) direction. Resolution enhancement by first passing the light input to the spectrometer through a scanned etalon or Michelson interferometer is discussed: while a detector element is scanned through a spatial resolution element of the scene, it is also temporally sampled. The analysis for all the pixels in the dispersive direction is addressed, several specific examples are discussed, and the alternate use of a Michelson for the same enhancement purpose is also considered. Hardware systems, including actuators, sensors, and electronics, were developed such that low-resolution etalons with the performance required for implementation would weigh less than one pound, making them suitable for weight-constrained deep space missions.

  15. NIR small arms muzzle flash

    NASA Astrophysics Data System (ADS)

    Montoya, Joseph; Kennerly, Stephen; Rede, Edward

    2010-04-01

    Utilization of Near-Infrared (NIR) spectral features in a muzzle flash will allow for small arms detection using low cost silicon (Si)-based imagers. Detection of a small arms muzzle flash in a particular wavelength region is dependent on the intensity of that emission, the efficiency of source emission transmission through the atmosphere, and the relative intensity of the background scene. The NIR muzzle flash signature exists in the relatively large Si spectral response wavelength region of 300 nm-1100 nm, which allows for use of commercial-off-the-shelf (COTS) Si-based detectors. The alkali metal origin of the NIR spectral features in the 7.62 × 39-mm round muzzle flash is discussed, and the basis for the spectral bandwidth is examined, using a calculated Voigt profile. This report will introduce a model of the 7.62 × 39-mm NIR muzzle flash signature based on predicted source characteristics. Atmospheric limitations based on NIR spectral regions are investigated in relation to the NIR muzzle flash signature. A simple signal-to-clutter ratio (SCR) metric is used to predict sensor performance based on a model of radiance for the source and solar background and pixel registered image subtraction.
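    The signal-to-clutter ratio metric mentioned above can be sketched as the target's excess radiance over the background mean, normalized by background variability. This is a common SCR definition; the abstract does not give the paper's exact formulation:

```python
import numpy as np

def signal_to_clutter_ratio(target_signal: float, background) -> float:
    """SCR = (target signal - mean background) / background standard deviation."""
    bg = np.asarray(background, dtype=float)
    return (target_signal - bg.mean()) / bg.std()

# Example: a flash pixel 5 clutter standard deviations above the scene mean.
scr = signal_to_clutter_ratio(6.0, [0.0, 2.0, 0.0, 2.0])  # background: mean 1, std 1
```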

  16. Spectral Ratio Imaging with Hyperion Satellite Data for Geological Mapping

    NASA Technical Reports Server (NTRS)

    Vincent, Robert K.; Beck, Richard A.

    2005-01-01

    Since the advent of LANDSAT I in 1972, many different multispectral satellites have been orbited by the U.S. and other countries. These satellites have varied from 4 spectral bands in LANDSAT I to 14 spectral bands in the ASTER sensor aboard the TERRA space platform. Hyperion is a relatively new hyperspectral sensor with over 220 spectral bands. The huge increase in the number of spectral bands offers a substantial challenge to computers and analysts alike when it comes to the task of mapping features on the basis of chemical composition, especially if little or no ground truth is available beforehand from the area being mapped. One approach is the theoretical approach of the modeler, where all extraneous information (atmospheric attenuation, sensor electronic gain and offset, etc.) is subtracted off and divided out, and laboratory (or field) spectra of materials are used as training sets to map features in the scene of similar composition. This approach is very difficult to keep accurate because of variations in the atmosphere, solar illumination, and sensor electronic gain and offset that are not always perfectly recorded or accounted for. For instance, to apply laboratory or field spectra of materials as training sets in the theoretical approach, the header information of the files must reflect the correct, up-to-date sensor electronic gain and offset, and the analyst must pick the exact atmospheric model that is appropriate for the day of data collection, in order for classification procedures to accurately match pixels in the scene with the laboratory or field spectrum of a desired target on the basis of the hyperspectral data. The modeling process is so complex that it is difficult to tell when it is operating well or determine how to fix it when it is incorrect. Recently, RSI announced that the latest version of their ENVI software package does not perform atmospheric corrections correctly with the FLAASH atmospheric model. It took a long time to determine that the correction was wrong, and it may take an equally long time (or longer) to fix.
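    One reason ratio imaging is robust to the calibration issues discussed above is that a per-pixel band ratio cancels multiplicative factors, such as topographic shading or broadband illumination changes, that affect both bands equally. A minimal numpy sketch of the idea (not the paper's specific band selections):

```python
import numpy as np

def band_ratio(band_num, band_den, eps=1e-6):
    """Per-pixel spectral ratio of two co-registered bands.

    eps guards against division by zero in dark or masked pixels.
    A multiplicative illumination factor g cancels: (g*a) / (g*b) == a / b.
    """
    num = np.asarray(band_num, dtype=float)
    den = np.asarray(band_den, dtype=float)
    return num / np.maximum(den, eps)
```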

  17. Hyperspectral data mining to identify relevant canopy spectral features for estimating durum wheat growth, nitrogen status, and yield

    USDA-ARS?s Scientific Manuscript database

    Modern hyperspectral sensors permit reflectance measurements of crop canopies in hundreds of narrow spectral wavebands. While these sensors describe plant canopy reflectance in greater detail than multispectral sensors, they also suffer from issues with data redundancy and spectral autocorrelation. ...

  18. Geometric processing workflow for vertical and oblique hyperspectral frame images collected using UAV

    NASA Astrophysics Data System (ADS)

    Markelin, L.; Honkavaara, E.; Näsi, R.; Nurminen, K.; Hakala, T.

    2014-08-01

    Remote sensing based on unmanned airborne vehicles (UAVs) is a rapidly developing field of technology. UAVs enable accurate, flexible, low-cost and multiangular measurements of 3D geometric, radiometric, and temporal properties of land and vegetation using various sensors. In this paper we present a geometric processing chain for a multiangular measurement system that is designed for measuring object directional reflectance characteristics in a wavelength range of 400-900 nm. The technique is based on a novel, lightweight spectral camera designed for UAV use. The multiangular measurement is conducted by collecting vertical and oblique area-format spectral images. End products of the geometric processing are image exterior orientations, 3D point clouds and digital surface models (DSM). These data are needed for the radiometric processing chain that produces reflectance image mosaics and multiangular bidirectional reflectance factor (BRF) observations. The geometric processing workflow consists of the following three steps: (1) determining approximate image orientations using Visual Structure from Motion (VisualSFM) software, (2) calculating improved orientations and sensor calibration using a method based on self-calibrating bundle block adjustment (standard photogrammetric software) (this step is optional), and finally (3) creating dense 3D point clouds and DSMs using Photogrammetric Surface Reconstruction from Imagery (SURE) software, which is based on a semi-global matching algorithm and is capable of providing a point density corresponding to the pixel size of the image. We have tested the geometric processing workflow over various targets, including test fields, agricultural fields, lakes and complex 3D structures like forests.

  19. Radiometric Calibration of the Earth Observing System's Imaging Sensors

    NASA Technical Reports Server (NTRS)

    Slater, Philip N. (Principal Investigator)

    1997-01-01

    The work on the grant was mainly directed towards developing new, accurate, redundant methods for the in-flight, absolute radiometric calibration of satellite multispectral imaging systems and refining the accuracy of methods already in use. Initially the work was in preparation for the calibration of MODIS and HIRIS (before the development of that sensor was canceled), with the realization that it would be applicable to most imaging multi- or hyper-spectral sensors provided their spatial or spectral resolutions were not too coarse. The work on the grant involved three different ground-based, in-flight calibration methods: reflectance-based, radiance-based, and the diffuse-to-global irradiance ratio used with the reflectance-based method. This continuing research had the dual advantage of: (1) developing several independent methods to create the redundancy that is essential for the identification and, hopefully, the elimination of systematic errors; and (2) refining the measurement techniques and algorithms that can be used not only for improving calibration accuracy but also for the reverse process of retrieving ground reflectances from calibrated remote-sensing data. The grant also provided the support necessary for us to embark on other projects such as the ratioing radiometer approach to on-board calibration (this has been further developed by SBRS as the 'solar diffuser stability monitor' and is incorporated into the most important on-board calibration system for MODIS). Another example of work that was a spin-off from the grant funding was a study of solar diffuser materials. Journal citations, titles, and abstracts of publications authored by faculty, staff, and students are also attached.

  20. Long-wave infrared profile feature extractor (PFx) sensor

    NASA Astrophysics Data System (ADS)

    Sartain, Ronald B.; Aliberti, Keith; Alexander, Troy; Chiu, David

    2009-05-01

    The Long Wave Infrared (LWIR) Profile Feature Extractor (PFx) sensor has evolved from the initial profiling sensor that was developed by the University of Memphis (Near IR) and the Army Research Laboratory (visible). This paper presents the initial signatures of the LWIR PFx for humans with and without backpacks, a human with an animal (dog), and a number of other animals. The current version of the LWIR PFx sensor is a diverging optical tripwire sensor. The LWIR PFx signatures are compared to the signatures of the profile sensor in the visible and Near IR spectral regions. The LWIR PFx signatures were collected with two different uncooled microbolometer focal plane array cameras, where the individual pixels were used as stand-alone detectors (a non-imaging sensor). This approach results in a completely passive, much lower bandwidth, much longer battery life, low-weight, small-volume sensor that provides sufficient information to classify objects into human vs. non-human categories with 98.5% accuracy.

  1. Hyperspectral image reconstruction using RGB color for foodborne pathogen detection on agar plates

    NASA Astrophysics Data System (ADS)

    Yoon, Seung-Chul; Shin, Tae-Sung; Park, Bosoon; Lawrence, Kurt C.; Heitschmidt, Gerald W.

    2014-03-01

    This paper reports the latest development of a color vision technique for detecting colonies of foodborne pathogens grown on agar plates with a hyperspectral image classification model that was developed using full hyperspectral data. The hyperspectral classification model depended on reflectance spectra measured in the visible and near-infrared spectral range from 400 to 1,000 nm (473 narrow spectral bands). Multivariate regression methods were used to estimate and predict hyperspectral data from RGB color values. The six representative non-O157 Shiga-toxin producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) were grown on Rainbow agar plates. A line-scan pushbroom hyperspectral image sensor was used to scan 36 agar plates, each grown with pure STEC colonies. The 36 hyperspectral images of the agar plates were divided in half to create training and test sets. The mean R-squared value for hyperspectral image estimation was about 0.98 in the spectral range between 400 and 700 nm for linear, quadratic and cubic polynomial regression models, and the detection accuracy of the hyperspectral image classification model with principal component analysis and k-nearest neighbors for the test set was up to 92% (99% with the original hyperspectral images). Thus, the results of the study suggested that color-based detection may be viable as a multispectral imaging solution without much loss of prediction accuracy compared to hyperspectral imaging.
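    The RGB-to-hyperspectral estimation step can be sketched as a per-band polynomial least-squares fit. The quadratic feature expansion below is an illustrative stand-in in plain numpy; the paper's exact regression setup may differ:

```python
import numpy as np

def quadratic_features(rgb):
    """Expand N x 3 RGB values into quadratic features:
    constant, linear, squared, and cross terms."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    ones = np.ones_like(r)
    return np.column_stack([ones, r, g, b, r * r, g * g, b * b, r * g, r * b, g * b])

def fit_rgb_to_spectra(rgb_train, spectra_train):
    """Least-squares coefficients mapping RGB features to per-band reflectance.

    spectra_train is N x B (one column per spectral band), so one fit
    solves all bands at once.
    """
    X = quadratic_features(rgb_train)
    coef, *_ = np.linalg.lstsq(X, spectra_train, rcond=None)
    return coef

def predict_spectra(rgb, coef):
    """Estimate per-band reflectance for new RGB measurements."""
    return quadratic_features(rgb) @ coef
```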

  2. Detecting tents to estimate the displaced populations for post-disaster relief using high resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Wang, Shifeng; So, Emily; Smith, Pete

    2015-04-01

    Estimating the number of refugees and internally displaced persons is important for planning and managing an efficient relief operation following disasters and conflicts. Accurate estimates of refugee numbers can be inferred from the number of tents. Extracting tents from high-resolution satellite imagery has recently been suggested. However, it is still a significant challenge to extract tents automatically and reliably from remote sensing imagery. This paper describes a novel automated method, based on mathematical morphology, that generates a camp map and estimates refugee numbers by counting tents on the camp map. The method is especially useful in detecting objects with a clear shape, size, and significant spectral contrast with their surroundings. Results for two study sites with different satellite sensors and different spatial resolutions demonstrate that the method achieves good performance in detecting tents. The overall accuracy can be up to 81% in this study. Further improvements should be possible if over-identified isolated single-pixel objects can be filtered. The performance of the method is impacted by spectral characteristics of satellite sensors and image scenes, such as the extent of the area of interest and the spatial arrangement of tents. It is expected that the image scene would have a much higher influence on the performance of the method than the sensor characteristics.
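    The morphological idea, isolating compact bright objects of roughly known size against their surroundings, can be sketched with a white top-hat transform followed by connected-component labeling. This is an illustrative stand-in, not the paper's actual pipeline, and the structuring-element size and threshold are hypothetical:

```python
import numpy as np
from scipy import ndimage

def count_tent_like_objects(image, structure_size=5, threshold=0.2):
    """Detect compact bright objects smaller than the structuring element.

    The white top-hat (image minus its morphological opening) suppresses
    the background and large structures; labeling the thresholded result
    counts the remaining small bright objects.
    """
    tophat = ndimage.white_tophat(image, size=structure_size)
    labels, count = ndimage.label(tophat > threshold)
    return labels, count
```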

  3. Wavelength interrogation of fiber Bragg grating sensors using tapered hollow Bragg waveguides.

    PubMed

    Potts, C; Allen, T W; Azar, A; Melnyk, A; Dennison, C R; DeCorby, R G

    2014-10-15

    We describe an integrated system for wavelength interrogation, which uses tapered hollow Bragg waveguides coupled to an image sensor. Spectral shifts are extracted from the wavelength dependence of the light radiated at mode cutoff. Wavelength shifts as small as ~10 pm were resolved by employing a simple peak detection algorithm. Si/SiO₂-based cladding mirrors enable a potential operational range of several hundred nanometers in the 1550 nm wavelength region for a taper length of ~1 mm. Interrogation of a strain-tuned grating was accomplished using a broadband amplified spontaneous emission (ASE) source, and potential for single-chip interrogation of multiplexed sensor arrays is demonstrated.
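    The "simple peak detection algorithm" is not specified in the abstract; a common choice that resolves shifts well below the pixel pitch is three-point parabolic interpolation around the intensity maximum, sketched here as an assumed illustration:

```python
import numpy as np

def subpixel_peak(intensity):
    """Locate a peak with sub-pixel precision.

    Fits a parabola through the maximum sample and its two neighbors;
    exact for a sampled parabola, and a good local approximation for
    smooth peaks such as a radiated mode-cutoff profile.
    """
    y = np.asarray(intensity, dtype=float)
    i = int(np.argmax(y))
    if i == 0 or i == len(y) - 1:
        return float(i)  # peak at the array edge; no interpolation possible
    y0, y1, y2 = y[i - 1], y[i], y[i + 1]
    return i + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
```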

  4. Processing and analysis of commercial satellite image data of the nuclear accident near Chernobyl, U. S. S. R

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadowski, F.G.; Covington, S.J.

    1987-01-01

    Advanced digital processing techniques were applied to Landsat-5 Thematic Mapper (TM) data and SPOT high-resolution visible (HRV) panchromatic data to maximize the utility of images of a nuclear power plant emergency at Chernobyl in the Soviet Ukraine. The results of the data processing and analysis illustrate the spectral and spatial capabilities of the two sensor systems and provide information about the severity and duration of the events occurring at the power plant site.

  5. AVIRIS data calibration information: Wasatch Mountains and Park City region, Utah

    USGS Publications Warehouse

    Rockwell, Barnaby W.; Clark, Roger N.; Livo, K. Eric; McDougal, Robert R.; Kokaly, Raymond F.

    2002-01-01

    This report contains information regarding the reflectance calibration of spectroscopic imagery acquired over the Wasatch Mountains and Park City region, Utah, by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor on August 5, 1998. This information was used by the USGS Spectroscopy Laboratory to calibrate the Park City AVIRIS imagery to unitless reflectance prior to spectral analysis.  The Utah AVIRIS data were analyzed as a part of the USEPA-USGS Utah Abandoned Mine Lands Imaging Spectroscopy Project.

  6. BIOME: An Ecosystem Remote Sensor Based on Imaging Interferometry

    NASA Technical Reports Server (NTRS)

    Peterson, David L.; Hammer, Philip; Smith, William H.; Lawless, James G. (Technical Monitor)

    1994-01-01

    Until recent times, optical remote sensing of ecosystem properties from space has been limited to broadband multispectral scanners such as Landsat and AVHRR. While these sensor data can be used to derive important information about ecosystem parameters, they are very limited for measuring key biogeochemical cycling parameters such as the chemical content of plant canopies. Such parameters, for example the lignin and nitrogen contents, are potentially amenable to measurement by very high spectral resolution instruments using a spectroscopic approach. Airborne sensors based on grating imaging spectrometers gave the first promise of such potential, but the recent decision not to deploy the space version has left the community without many alternatives. In the past few years, advancements in high-performance deep-well digital sensor arrays coupled with a patented design for a two-beam interferometer have produced an entirely new design for acquiring imaging spectroscopic data at the signal-to-noise levels necessary for quantitatively estimating chemical composition (1000:1 at 2 microns). This design has been assembled as a laboratory instrument and the principles demonstrated for acquiring remote scenes. An airborne instrument is in production and spaceborne sensors are being proposed. The instrument is extremely promising because of its low cost, low power requirements, very low weight, simplicity (no moving parts), and high performance. For these reasons, we have called it the first instrument optimized for ecosystem studies as part of a Biological Imaging and Observation Mission to Earth (BIOME).

  7. Trends in Lightning Electrical Energy Derived from the Lightning Imaging Sensor

    NASA Astrophysics Data System (ADS)

    Bitzer, P. M.; Koshak, W. J.

    2016-12-01

    We present results detailing an emerging application of space-based measurement of lightning: the electrical energy. This is a little-used attribute of lightning data which can have applications for severe weather, lightning physics, and wildfires. In particular, we use data from the Tropical Rainfall Measuring Mission Lightning Imaging Sensor (TRMM/LIS) to find the temporal and spatial variations in the detected spectral energy density. This is used to estimate the total lightning electrical energy, following established methodologies. Results showing the trend in time of the electrical energy, as well as the distribution around the globe, will be highlighted. While flashes have been typically used in most studies, the basic scientifically-relevant measured unit by LIS is the optical group data product. This generally corresponds to a return stroke or IC pulse. We explore how the electrical energy varies per LIS group, providing an extension and comparison with previous investigations. The result is an initial climatology of this new and important application of space-based optical measurements of lightning, which can provide a baseline for future applications using the Geostationary Lightning Mapper (GLM), the European Lightning Imager (LI), and the International Space Station Lightning Imaging Sensor (ISS/LIS) instruments.

  8. In-flight spectral performance monitoring of the Airborne Prism Experiment.

    PubMed

    D'Odorico, Petra; Alberti, Edoardo; Schaepman, Michael E

    2010-06-01

    Spectral performance of an airborne dispersive pushbroom imaging spectrometer cannot be assumed to be stable over a whole flight season given the environmental stresses present during flight. Spectral performance monitoring during flight is commonly accomplished by looking at selected absorption features present in the Sun, atmosphere, or ground, and their stability. The assessment of instrument performance in two different environments, e.g., laboratory and airborne, using precisely the same calibration reference, has not been possible so far. The Airborne Prism Experiment (APEX), an airborne dispersive pushbroom imaging spectrometer, uses an onboard in-flight characterization (IFC) facility, which makes it possible to monitor the sensor's performance in terms of spectral, radiometric, and geometric stability in flight and in the laboratory. We discuss in detail a new method for the monitoring of spectral instrument performance. The method relies on the monitoring of spectral shifts by comparing instrument-induced movements of absorption features on the ground and in flight. Absorption lines originate from spectral filters, which intercept the full field of view (FOV) illuminated using an internal light source. A feature-fitting algorithm is used for the shift estimation based on Pearson's correlation coefficient. Environmental parameter monitoring, coregistered on board with the image and calibration data, revealed that differential pressure and temperature in the baffle compartment are the main driving parameters explaining the trend in spectral performance deviations in the time and space (across-track) domains, respectively. The results presented in this paper show that the system in its current setup needs further improvements to reach a stable performance. The findings provided useful guidelines for the instrument revision currently under way. The main aim of the revision is the stabilization of the instrument over the range of temperature and pressure conditions encountered during operation.
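    The feature-fitting step can be sketched as a search over candidate shifts, scoring each with Pearson's correlation coefficient between the in-flight spectrum and the shifted ground reference. This sketch uses integer-sample shifts only; the actual APEX algorithm resolves sub-sample shifts:

```python
import numpy as np

def estimate_spectral_shift(reference, measured, max_shift=10):
    """Return the integer-sample shift of `measured` that best aligns it
    with `reference`, by maximizing Pearson's correlation coefficient."""
    best_shift, best_r = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        r = np.corrcoef(reference, np.roll(measured, s))[0, 1]
        if r > best_r:
            best_shift, best_r = s, r
    return best_shift
```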

  9. Mapping alteration minerals at prospect, outcrop and drill core scales using imaging spectrometry

    PubMed Central

    Kruse, Fred A.; L. Bedell, Richard; Taranik, James V.; Peppin, William A.; Weatherbee, Oliver; Calvin, Wendy M.

    2011-01-01

    Imaging spectrometer data (also known as ‘hyperspectral imagery’ or HSI) are well established for detailed mineral mapping from airborne and satellite systems. Overhead data, however, have substantial additional potential when used together with ground-based measurements. An imaging spectrometer system was used to acquire airborne measurements and to image in-place outcrops (mine walls) and boxed drill core and rock chips using modified sensor-mounting configurations. Data were acquired at 5 nm nominal spectral resolution in 360 channels from 0.4 to 2.45 μm. Analysis results using standardized hyperspectral methodologies demonstrate rapid extraction of representative mineral spectra and mapping of mineral distributions and abundances in map-plan, with core depth, and on the mine walls. The examples shown highlight the capabilities of these data for mineral mapping. Integration of these approaches promotes improved understanding of relations between geology, alteration and spectral signatures in three dimensions and should lead to improved efficiency of mine development, operations and ultimately effective mine closure. PMID:25937681

  10. Mobile micro-colorimeter and micro-spectrometer sensor modules as enablers for the replacement of subjective inspections by objective measurements for optically clear colored liquids in-field

    NASA Astrophysics Data System (ADS)

    Dittrich, Paul-Gerald; Grunert, Fred; Ehehalt, Jörg; Hofmann, Dietrich

    2015-03-01

    The aim of the paper is to show that the colorimetric characterization of optically clear colored liquids can be performed with different measurement methods and their application-specific multichannel spectral sensors. The possible measurement methods are differentiated by the applied types of multichannel spectral sensors and therefore by their spectral resolution, measurement speed, measurement accuracy, and measurement cost. The paper describes how different types of multichannel spectral sensors are calibrated with different types of calibration methods and how the measurement values can be used for further colorimetric calculations. The different measurement methods and the different application-specific calibration methods are explained methodically and theoretically. The paper proves that, and how, different multichannel spectral sensor modules with different calibration methods can be applied with smartpads for the calculation of measurement results both in the laboratory and in the field. A practical example is the application of different multichannel spectral sensors for the colorimetric characterization of petroleum oils and fuels and their colorimetric characterization by the Saybolt color scale.

  11. Ocean wavenumber estimation from wave-resolving time series imagery

    USGS Publications Warehouse

    Plant, N.G.; Holland, K.T.; Haller, M.C.

    2008-01-01

    We review several approaches that have been used to estimate ocean surface gravity wavenumbers from wave-resolving remotely sensed image sequences. Two fundamentally different approaches that utilize these data exist. A power spectral density approach identifies wavenumbers where image intensity variance is maximized. Alternatively, a cross-spectral correlation approach identifies wavenumbers where intensity coherence is maximized. We develop a solution to the latter approach based on a tomographic analysis that utilizes a nonlinear inverse method. The solution is tolerant to noise and other forms of sampling deficiency and can be applied to arbitrary sampling patterns, as well as to full-frame imagery. The solution includes error predictions that can be used for data retrieval quality control and for evaluating sample designs. A quantitative analysis of the intrinsic resolution of the method indicates that the cross-spectral correlation fitting improves resolution by a factor of about ten as compared to the power spectral density fitting approach. The resolution analysis also provides a rule of thumb for nearshore bathymetry retrievals: short-scale cross-shore patterns may be resolved if they are about ten times longer than the average water depth over the pattern. This guidance can be applied to sample design to constrain both the sensor array (image resolution) and the analysis array (tomographic resolution). © 2008 IEEE.

  12. Thermal imaging of Al-CuO thermites

    NASA Astrophysics Data System (ADS)

    Densmore, John; Sullivan, Kyle; Kuntz, Joshua; Gash, Alex

    2013-06-01

    We have performed spatial in-situ temperature measurements of aluminum-copper oxide thermite reactions using high-speed color pyrometry. Electrophoretic deposition was used to create thermite microstructures. Tests were performed with micron- and nano-sized particles at different stoichiometries. The color pyrometry was performed using a high-speed color camera. The color filter array on the image sensor collects light within three spectral bands. Assuming a gray-body emission spectrum, a multi-wavelength ratio analysis allows a temperature to be calculated. An advantage of using a two-dimensional image sensor is that it allows heterogeneous flames to be measured with high spatial resolution. Light from the initial combustion of the Al-CuO can be differentiated from the light created by the late-time oxidation with the atmosphere. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
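    The multi-wavelength ratio analysis rests on the gray-body assumption: a wavelength-independent emissivity cancels in the ratio of two bands, and under the Wien approximation the temperature then follows in closed form. The band wavelengths below are illustrative, not the camera's actual color-filter bands:

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def log_wien_intensity(wavelength_m: float, temp_k: float) -> float:
    """Log of gray-body spectral intensity under the Wien approximation
    (arbitrary additive offset; only differences between bands matter)."""
    return -5.0 * math.log(wavelength_m) - C2 / (wavelength_m * temp_k)

def ratio_temperature(log_intensity_ratio: float, lam1_m: float, lam2_m: float) -> float:
    """Invert the two-band log-intensity ratio log(I1/I2) for temperature.

    From I1/I2 = (lam2/lam1)^5 * exp((C2/T) * (1/lam2 - 1/lam1)):
    T = C2 * (1/lam2 - 1/lam1) / (log(I1/I2) - 5 * log(lam2/lam1)).
    """
    return C2 * (1.0 / lam2_m - 1.0 / lam1_m) / (
        log_intensity_ratio - 5.0 * math.log(lam2_m / lam1_m)
    )
```

In practice log(I1/I2) would come from calibrated pixel values in two of the camera's color channels.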

  13. A LWIR hyperspectral imager using a Sagnac interferometer and cooled HgCdTe detector array

    NASA Astrophysics Data System (ADS)

    Lucey, Paul G.; Wood, Mark; Crites, Sarah T.; Akagi, Jason

    2012-06-01

    LWIR hyperspectral imaging has a wide range of civil and military applications with its ability to sense chemical compositions at standoff ranges. Most recent implementations of this technology use spectrographs employing varying degrees of cryogenic cooling to reduce sensor self-emission that can severely limit sensitivity. We have taken an interferometric approach that promises to reduce the need for cooling while preserving high resolution. Reduced cooling has multiple benefits including faster system readiness from a power-off state, lower mass, and potentially lower cost owing to lower system complexity. We coupled an uncooled Sagnac interferometer with a 256x320 mercury cadmium telluride array with an 11 micron cutoff to produce a spatial interferometric LWIR hyperspectral imaging system operating from 7.5 to 11 microns. The sensor was tested in ground-to-ground applications, and from a small aircraft, producing spectral imagery including detection of gas emission from high-vapor-pressure liquids.

  14. Hierarchical multi-scale approach to validation and uncertainty quantification of hyper-spectral image modeling

    NASA Astrophysics Data System (ADS)

    Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.

    2016-05-01

    Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.

  15. Modeling UV-B Effects on Primary Production Throughout the Southern Ocean Using Multi-Sensor Satellite Data

    NASA Technical Reports Server (NTRS)

    Lubin, Dan

    2001-01-01

    This study has used a combination of ocean color, backscattered ultraviolet, and passive microwave satellite data to investigate the impact of the springtime Antarctic ozone depletion on the base of the Antarctic marine food web - primary production by phytoplankton. Spectral ultraviolet (UV) radiation fields derived from the satellite data are propagated into the water column where they force physiologically-based numerical models of phytoplankton growth. This large-scale study has been divided into two components: (1) the use of Total Ozone Mapping Spectrometer (TOMS) and Special Sensor Microwave Imager (SSM/I) data in conjunction with radiative transfer theory to derive the surface spectral UV irradiance throughout the Southern Ocean; and (2) the merging of these UV irradiances with the climatology of chlorophyll derived from SeaWiFS data to specify the input data for the physiological models.

  16. Solutions Network Formulation Report. Visible/Infrared Imager/Radiometer Suite and Landsat Data Continuity Mission Simulated Data Products for the Great Lakes Basin Ecological Team

    NASA Technical Reports Server (NTRS)

    Estep, Leland

    2007-01-01

    The proposed solution would simulate VIIRS and LDCM sensor data for use in the USGS/USFWS GLBET DST. The VIIRS sensor possesses a spectral range that provides water-penetrating bands that could be used to assess water clarity on a regional spatial scale. The LDCM sensor possesses suitable spectral bands in a range of wavelengths that could be used to map water quality at finer spatial scales relative to VIIRS. Water quality, alongshore sediment transport and pollutant discharge tracking into the Great Lakes system are targeted as the primary products to be developed. A principal benefit of water quality monitoring via satellite imagery is its economy compared to field-data collection methods. Additionally, higher resolution satellite imagery provides a baseline dataset(s) against which later imagery can be overlaid in GIS-based DST programs. Further, information derived from higher resolution satellite imagery can be used to address public concerns and to confirm environmental compliance. The candidate solution supports the Public Health, Coastal Management, and Water Management National Applications.

  17. Virtual Sensors: Using Data Mining to Efficiently Estimate Spectra

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok; Oza, Nikunj; Stroeve, Julienne

    2004-01-01

    Detecting clouds within a satellite image is essential for retrieving surface geophysical parameters, such as albedo and temperature, from optical and thermal imagery because the retrieval methods tend to be valid for clear skies only. Thus, routine satellite data processing requires reliable automated cloud detection algorithms that are applicable to many surface types. Unfortunately, cloud detection over snow and ice is difficult due to the lack of spectral contrast between clouds and snow. Snow and clouds are both highly reflective in the visible wavelengths and often show little contrast in the thermal infrared. However, at 1.6 microns, the spectral signatures of snow and clouds differ enough to allow improved snow/ice/cloud discrimination. The recent Terra and Aqua Moderate Resolution Imaging Spectro-Radiometer (MODIS) sensors have a channel (channel 6) at 1.6 microns. Presently the most comprehensive, long-term information on surface albedo and temperature over snow- and ice-covered surfaces comes from the Advanced Very High Resolution Radiometer (AVHRR) sensor that has been providing imagery since July 1981. The earlier AVHRR sensors (e.g. AVHRR/2) did not, however, have a channel designed for discriminating clouds from snow, such as the 1.6 micron channel available on the more recent AVHRR/3 or the MODIS sensors. In the absence of the 1.6 micron channel, the AVHRR Polar Pathfinder (APP) product performs cloud detection using a combination of time-series analysis and multispectral threshold tests based on the satellite's measuring channels to produce a cloud mask. The method has been found to work reasonably well over sea ice, but not so well over the ice sheets. Thus, improving the cloud mask in the APP dataset would be extremely helpful toward increasing the accuracy of the albedo and temperature retrievals, as well as extending the time-series of albedo and temperature retrievals from the more recent sensors to the historical ones.
In this work, we use data mining methods to construct a model of MODIS channel 6 as a function of other channels that are common to both MODIS and AVHRR. The idea is to use the model to generate the equivalent of MODIS channel 6 for AVHRR as a function of the AVHRR equivalents to MODIS channels. We call this a Virtual Sensor because it predicts unmeasured spectra. The goal is to use this virtual channel 6 to yield a cloud mask superior to what is currently used in APP. Our results show that several data mining methods such as multilayer perceptrons (MLPs), ensemble methods (e.g., bagging), and kernel methods (e.g., support vector machines) generate channel 6 for unseen MODIS images with high accuracy. Because the true channel 6 is not available for AVHRR images, we qualitatively assess the virtual channel 6 for several AVHRR images.
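The "Virtual Sensor" idea described above is, at its core, a per-pixel regression problem: learn MODIS channel 6 from the channels shared with AVHRR, then apply the model where channel 6 was never measured. A minimal sketch follows, using an MLP regressor on synthetic data; the five-channel input, the synthetic target function, and the network size are illustrative assumptions, not the paper's configuration.

```python
# Sketch of a "virtual sensor": learn a target band (MODIS channel 6)
# as a function of channels common to both sensors, then predict that
# band for pixels from a sensor that never measured it (AVHRR).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic training set: 5 shared channels -> channel-6 value, with a
# nonlinear dependence standing in for the true radiative physics.
X = rng.uniform(0.0, 1.0, size=(2000, 5))
y = 0.4 * X[:, 0] + 0.3 * X[:, 1] ** 2 - 0.2 * X[:, 2] * X[:, 3] + 0.1 * X[:, 4]

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=0)
model.fit(X, y)

# "Virtual channel 6" for new pixels where the band was never measured.
X_new = rng.uniform(0.0, 1.0, size=(10, 5))
virtual_ch6 = model.predict(X_new)
print(virtual_ch6.shape)  # one predicted value per pixel
```

In practice the abstract's other candidates (bagged ensembles, support vector regression) drop in behind the same fit/predict interface, which is why the approach is easy to benchmark across methods.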

  18. Radiometric and spectral stray light correction for the portable remote imaging spectrometer (PRISM) coastal ocean sensor

    NASA Astrophysics Data System (ADS)

    Haag, Justin M.; Van Gorp, Byron E.; Mouroulis, Pantazis; Thompson, David R.

    2017-09-01

    The airborne Portable Remote Imaging Spectrometer (PRISM) instrument is based on a fast (F/1.8) Dyson spectrometer operating at 350-1050 nm and a two-mirror telescope combined with a Teledyne HyViSI 6604A detector array. Raw PRISM data contain electronic and optical artifacts that must be removed prior to radiometric calibration. We provide an overview of the process transforming raw digital numbers to calibrated radiance values. Electronic panel artifacts are first corrected using empirical relationships developed from laboratory data. The instrument spectral response functions (SRFs) are reconstructed using a measurement-based optimization technique. Removal of SRF effects from the data improves retrieval of true spectra, particularly in the typically low-signal near-ultraviolet and near-infrared regions. As a final step, radiometric calibration is performed using corrected measurements of an object of known radiance. Implementation of the complete calibration procedure maximizes data quality in preparation for subsequent processing steps, such as atmospheric removal and spectral signature classification.
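The final calibration step described above, converting corrected digital numbers (DN) to radiance via a source of known radiance, can be sketched as a flat-field gain solve. The single-gain linear model and all variable names below are illustrative assumptions, not the actual PRISM pipeline.

```python
# Minimal flat-field radiometric calibration sketch:
#   radiance = gain * (DN - dark)
# with the per-pixel gain fit from a measurement of a target whose
# radiance is known from laboratory characterization.
import numpy as np

def calibrate(dn, dark, dn_reference, l_reference):
    """Return calibrated radiance for raw scene DN, given a dark frame,
    the DN observed viewing the known source, and its radiance."""
    gain = l_reference / (dn_reference - dark)
    return gain * (dn - dark)

dark = np.full((4, 4), 100.0)      # dark-frame offset per detector element
dn_ref = np.full((4, 4), 1100.0)   # DN observed viewing the known source
l_ref = 50.0                       # known radiance of that source
dn_scene = np.full((4, 4), 600.0)  # raw scene DN after artifact correction

radiance = calibrate(dn_scene, dark, dn_ref, l_ref)
print(radiance[0, 0])  # 50 * (600 - 100) / (1100 - 100) = 25.0
```

Real pipelines such as PRISM's additionally remove the spectral response function effects before this step, precisely because a linear gain only holds once optical cross-talk between bands has been deconvolved.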

  19. Comparative performance between compressed and uncompressed airborne imagery

    NASA Astrophysics Data System (ADS)

    Phan, Chung; Rupp, Ronald; Agarwal, Sanjeev; Trang, Anh; Nair, Sumesh

    2008-04-01

    The US Army's RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD), Countermine Division, is evaluating the compressibility of airborne multi-spectral imagery for mine and minefield detection applications. Of particular interest is the highest image data compression rate that can be applied without loss of image quality for war fighters in the loop or degradation of near-real-time mine detection algorithm performance. The JPEG-2000 compression standard is used to perform data compression; both lossless and lossy compression are considered. A multi-spectral anomaly detector, RX (Reed-Xiaoli), which is widely used as a baseline algorithm in airborne mine and minefield detection across different mine types, minefields, and terrains to identify potential individual targets, is used to compare mine detection performance. This paper presents the compression scheme and compares detection performance between compressed and uncompressed imagery at various levels of compression. The compression efficiency is evaluated, and its dependence on different backgrounds and other factors is documented using multi-spectral data.
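The RX (Reed-Xiaoli) detector used as the baseline here scores each pixel by its Mahalanobis distance from the background, estimated from the scene's mean spectrum and band covariance. A minimal sketch on a synthetic cube (the global-background variant; image size, band count, and the implanted anomaly are assumptions for illustration):

```python
# Global RX anomaly detector: score(x) = (x - mu)^T C^{-1} (x - mu),
# with mu and C estimated from all pixels in the scene.
import numpy as np

def rx_scores(cube):
    """cube: (rows, cols, bands) multispectral image.
    Returns a (rows, cols) array of RX anomaly scores."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands)
    mu = pixels.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False))
    diff = pixels - mu
    # Mahalanobis distance for every pixel at once.
    scores = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    return scores.reshape(rows, cols)

rng = np.random.default_rng(1)
cube = rng.normal(size=(32, 32, 4))
cube[10, 10] += 8.0  # implant a bright anomaly in all bands
scores = rx_scores(cube)
print(np.unravel_index(scores.argmax(), scores.shape))  # → (10, 10)
```

Comparing the ranked RX scores from compressed versus uncompressed inputs is one straightforward way to quantify how much lossy compression degrades detection, which is the kind of comparison this paper performs.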

  20. Training site statistics from Landsat and Seasat satellite imagery registered to a common map base

    NASA Technical Reports Server (NTRS)

    Clark, J.

    1981-01-01

    Landsat and Seasat satellite imagery and training site boundary coordinates were registered to a common Universal Transverse Mercator map base in the Newport Beach area of Orange County, California. The purpose was to establish a spatially-registered, multi-sensor data base which would test the use of Seasat synthetic aperture radar imagery to improve spectral separability of channels used for land use classification of an urban area. Digital image processing techniques originally developed for the digital mosaics of the California Desert and the State of Arizona were adapted to spatially register multispectral and radar data. Techniques included control point selection from imagery and USGS topographic quadrangle maps, control point cataloguing with the Image Based Information System, and spatial and spectral rectifications of the imagery. The radar imagery was pre-processed to reduce its tendency toward uniform data distributions; training site statistics for selected Landsat and pre-processed Seasat imagery then indicated good spectral separation between channels.
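The control-point registration step described above amounts to fitting a coordinate transform from image space to the common UTM map base. A minimal sketch, assuming a simple affine model solved by least squares over ground control points (the coordinates below are made up for illustration):

```python
# Fit an affine mapping from image (row, col) coordinates to UTM
# (easting, northing) by least squares over ground control points.
import numpy as np

def fit_affine(img_pts, map_pts):
    """Solve [row col 1] @ A = [E N] for the 3x2 affine matrix A."""
    ones = np.ones((len(img_pts), 1))
    design = np.hstack([np.asarray(img_pts, float), ones])
    A, *_ = np.linalg.lstsq(design, np.asarray(map_pts, float), rcond=None)
    return A

# Four hypothetical control points: pixel coordinates and their UTM positions.
img_pts = [(0, 0), (0, 100), (100, 0), (100, 100)]
map_pts = [(412000.0, 3730000.0), (415000.0, 3730000.0),
           (412000.0, 3727000.0), (415000.0, 3727000.0)]

A = fit_affine(img_pts, map_pts)
# Map an arbitrary pixel into UTM coordinates.
e, n = np.array([50.0, 50.0, 1.0]) @ A
print(e, n)  # center pixel maps to the midpoint of the control grid
```

Operational registration (as in the mosaicking systems the paper adapted) typically uses many more control points and higher-order polynomial or rubber-sheet models, but the least-squares fit over cataloged control points is the same in principle.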
