Theory on data processing and instrumentation [remote sensing]
NASA Technical Reports Server (NTRS)
Billingsley, F. C.
1978-01-01
A selection of NASA Earth observation programs is reviewed, emphasizing hardware capabilities. Sampling theory, noise and detection considerations, and image evaluation are discussed for remote sensor imagery. Vision and perception are considered, leading to numerical image processing. The use of multispectral scanners and of multispectral data processing systems, including digital image processing, is described. Multispectral sensing and analysis in application to land use and geographical data systems are also covered.
[Detecting fire smoke based on the multispectral image].
Wei, Ying-Zhuo; Zhang, Shao-Wu; Liu, Yan-Wei
2010-04-01
Smoke detection in the early stage of a fire is very important for preventing forest fires. Because traditional technologies based on video and image processing are easily affected by dynamic background information, they suffer from three limitations: low anti-interference ability, a high false detection rate, and difficulty distinguishing fire smoke from water fog. A novel method for detecting smoke based on multispectral images was proposed in the present paper. Using a multispectral digital imaging technique, multispectral image series of fire smoke and water fog were obtained over the 400 to 720 nm band, and the images were divided into bins. The Euclidean distance among the bins was taken as a measure of the difference between spectrograms. After obtaining the spectral feature vectors of dynamic regions, the regions of fire smoke and water fog were extracted according to the spectrogram feature difference between target and background. Indoor and outdoor experiments show that the multispectral smoke detection method can effectively distinguish fire smoke from water fog. Combined with video image processing methods, the multispectral detection method can also be applied to forest fire surveillance, reducing the false alarm rate in forest fire detection.
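The core of the discrimination step is a spectral distance measure between image bins. The sketch below illustrates that idea in Python with numpy; the band count, bin size, and threshold are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch: compare the mean spectrum of each image bin against a background
# reference using Euclidean distance, and flag bins whose spectral difference is large.
import numpy as np

def bin_mean_spectra(cube, bin_size=16):
    """cube: (rows, cols, bands) multispectral image; returns (n_bins, bands) mean spectra."""
    r, c, b = cube.shape
    r_trim, c_trim = r - r % bin_size, c - c % bin_size
    blocks = cube[:r_trim, :c_trim].reshape(
        r_trim // bin_size, bin_size, c_trim // bin_size, bin_size, b)
    return blocks.mean(axis=(1, 3)).reshape(-1, b)

def spectral_distance_map(cube, background_spectrum, bin_size=16):
    spectra = bin_mean_spectra(cube, bin_size)
    return np.linalg.norm(spectra - background_spectrum, axis=1)

# Example: bins whose distance exceeds a threshold are candidate smoke regions.
cube = np.random.rand(128, 128, 33)            # e.g. 400-720 nm sampled every 10 nm (assumed)
background = cube[:16, :16].mean(axis=(0, 1))  # reference spectrum from a clear area
candidates = spectral_distance_map(cube, background) > 0.5   # threshold is illustrative
```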
Landsat 8 Multispectral and Pansharpened Imagery Processing on the Study of Civil Engineering Issues
NASA Astrophysics Data System (ADS)
Lazaridou, M. A.; Karagianni, A. Ch.
2016-06-01
Scientific and professional interests in civil engineering mainly include structures, hydraulics, geotechnical engineering, environment, and transportation. Topics in this context may concern urban environment issues, urban planning, hydrological modelling, the study of hazards, and road construction. Land cover information contributes significantly to the study of the above subjects. Land cover information can be acquired effectively by visual interpretation of satellite imagery, after applying enhancement routines, or by image classification. The Landsat Data Continuity Mission (LDCM - Landsat 8) is the latest satellite in the Landsat series, launched in February 2013. Landsat 8 medium spatial resolution multispectral imagery is of particular interest for extracting land cover because of its fine spectral resolution, 12-bit radiometric quantization, the capability of merging the 15-meter panchromatic band with the 30-meter multispectral imagery, and the free data policy. In this paper, Landsat 8 multispectral and panchromatic imagery covering the surroundings of a lake in north-western Greece is used. Land cover information is extracted using suitable digital image processing software. The rich spectral content of the multispectral image is combined with the high spatial resolution of the panchromatic image by applying image fusion (pansharpening), facilitating visual image interpretation to delineate land cover. Further processing concerns supervised image classification: classification of the pansharpened image preceded classification of the multispectral image. Corresponding comparative considerations are also presented.
Novel instrumentation of multispectral imaging technology for detecting tissue abnormity
NASA Astrophysics Data System (ADS)
Yi, Dingrong; Kong, Linghua
2012-10-01
Multispectral imaging is becoming a powerful tool in a wide range of biological and clinical studies by adding spectral, spatial, and temporal dimensions to visualize tissue abnormalities and the underlying biological processes. A conventional spectral imaging system includes two physically separated major components, a band-pass selection device (such as a liquid crystal tunable filter or diffraction grating) and a scientific-grade monochromatic camera, and is expensive and bulky. Recently, a micro-arrayed narrow-band optical mosaic filter was invented and successfully fabricated to reduce the size and cost of multispectral imaging devices in order to meet the clinical requirements of medical diagnostic imaging applications. However, the challenging issue of how to integrate and place the micro filter mosaic chip on the target focal plane, i.e., the imaging sensor, of an off-the-shelf CMOS/CCD camera has not been reported. This paper presents the methods and results of integrating such a miniaturized filter with off-the-shelf CMOS imaging sensors to produce handheld real-time multispectral imaging devices for early stage pressure ulcer (ESPU) detection. Unlike conventional multispectral imaging devices, which are bulky and expensive, the resulting handheld real-time multispectral ESPU detector can produce multiple images at different center wavelengths with a single shot, thereby eliminating the image registration procedure required by traditional multispectral imaging technologies.
Multispectral image fusion for target detection
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-09-01
Various methods to perform multi-spectral image fusion have been suggested, mostly at the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in an experiment using MSSF against two established methods, averaging and principal components analysis (PCA), and against its two source bands, visible and infrared. The task that we studied was target detection in a cluttered environment. MSSF proved superior to the other fusion methods. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.
[A spatial adaptive algorithm for endmember extraction on multispectral remote sensing image].
Zhu, Chang-Ming; Luo, Jian-Cheng; Shen, Zhan-Feng; Li, Jun-Li; Hu, Xiao-Dong
2011-10-01
Because the convex cone analysis (CCA) method can only extract a limited number of endmembers from multispectral imagery, this paper proposes a new endmember extraction method based on spatially adaptive spectral feature analysis of multispectral remote sensing images using spatial clustering and image slicing. First, in order to remove spatial and spectral redundancies, the principal component analysis (PCA) algorithm was used to reduce the dimensionality of the multispectral data. Second, the iterative self-organizing data analysis technique algorithm (ISODATA) was used to cluster the image according to the similarity of pixel spectra. Then, through clustering post-processing and the merging of small clusters, the whole image was divided into several blocks (tiles). Finally, according to the landscape complexity of each image block and analysis of the scatter diagrams, the number of endmembers can be determined, and the hourglass algorithm is used to extract the endmembers. An endmember extraction experiment on TM multispectral imagery showed that the method can effectively extract endmember spectra from multispectral imagery. Moreover, the method resolves the limitation on the number of endmembers and improves the accuracy of endmember extraction, providing a new way to extract endmembers from multispectral images.
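The first two stages (PCA dimensionality reduction followed by unsupervised clustering) can be sketched as follows; plain k-means stands in for ISODATA here, and the hourglass endmember step is omitted, so this is only an illustration of the pipeline, not the authors' implementation.

```python
# Sketch under stated assumptions: PCA via SVD, then a simple k-means clustering pass.
import numpy as np

def pca_reduce(pixels, n_components=3):
    """pixels: (n_pixels, n_bands). Returns PCA-projected data."""
    centered = pixels - pixels.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

def kmeans(data, k=8, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((data[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([data[labels == i].mean(axis=0) if np.any(labels == i)
                            else centers[i] for i in range(k)])
    return labels

pixels = np.random.rand(10000, 6)          # placeholder for TM multispectral pixels
labels = kmeans(pca_reduce(pixels), k=8)   # spatial blocks would then be built from these clusters
```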
Acquisition performance of LAPAN-A3/IPB multispectral imager in real-time mode of operation
NASA Astrophysics Data System (ADS)
Hakim, P. R.; Permala, R.; Jayani, A. P. S.
2018-05-01
The LAPAN-A3/IPB satellite was launched in June 2016, and its multispectral imager has been producing images covering Indonesia. In order to improve its support for remote sensing applications, the imager should produce images of high quality and in high quantity. To increase the quantity of LAPAN-A3/IPB multispectral images captured, image acquisition can be executed in real-time mode from the LAPAN ground station in Bogor when the satellite passes over western Indonesia. This research analyses the performance of LAPAN-A3/IPB multispectral imager acquisition in real-time mode, in terms of image quality and quantity, under the assumption of several on-board and ground segment limitations. Results show that in real-time operation mode the LAPAN-A3/IPB multispectral imager can produce twice as much image coverage as in recorded mode. However, the images produced in real-time mode have slightly degraded quality due to the image compression involved. Based on the analyses carried out in this research, it is recommended to use real-time acquisition mode whenever possible, except in circumstances that strictly do not allow any quality degradation of the images produced.
Initial clinical testing of a multi-spectral imaging system built on a smartphone platform
NASA Astrophysics Data System (ADS)
Mink, Jonah W.; Wexler, Shraga; Bolton, Frank J.; Hummel, Charles; Kahn, Bruce S.; Levitz, David
2016-03-01
Multi-spectral imaging systems are often expensive and bulky. An innovative multi-spectral imaging system was fitted onto a mobile colposcope, an imaging system built around a smartphone, in order to image the uterine cervix from outside the body. The multi-spectral mobile colposcope (MSMC) acquires images at different wavelengths. This paper presents the clinical testing of MSMC imaging (technical validation of the MSMC system is described elsewhere [1]). Patients who were referred to colposcopy following an abnormal screening test (Pap or HPV DNA test) according to the standard of care were enrolled. Multi-spectral image sets of the cervix were acquired, consisting of images at the various wavelengths. Image acquisition took 1-2 sec. Areas suspected for dysplasia under white light imaging were biopsied, according to the standard of care. Biopsied sites were recorded on a clock-face map of the cervix. Following the procedure, MSMC data from the biopsied sites were processed. To date, the initial histopathological results are still outstanding. Qualitatively, structures in the cervical images were sharper at lower wavelengths than at higher wavelengths. Patients tolerated imaging well. The results suggest MSMC holds promise for cervical imaging.
Fusion of multi-spectral and panchromatic images based on 2D-PWVD and SSIM
NASA Astrophysics Data System (ADS)
Tan, Dongjie; Liu, Yi; Hou, Ruonan; Xue, Bindang
2016-03-01
A combined method using the 2D pseudo Wigner-Ville distribution (2D-PWVD) and the structural similarity (SSIM) index is proposed for fusion of a low-resolution multi-spectral (MS) image and a high-resolution panchromatic (PAN) image. First, the intensity component of the multi-spectral image is extracted with a generalized IHS (GIHS) transform. Then, the spectrum diagrams of the intensity component of the multi-spectral image and of the panchromatic image are obtained with the 2D-PWVD. Different fusion rules are designed for different frequency content of the spectrum diagrams. The SSIM index is used to evaluate the high-frequency information of the spectrum diagrams, assigning the weights in the fusion process adaptively. After the new spectrum diagram is obtained according to the fusion rule, the final fused image is produced by the inverse 2D-PWVD and inverse GIHS transform. Experimental results show that the proposed method can obtain high-quality fused images.
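A rough illustration of the SSIM-weighted fusion idea is given below; it uses a global SSIM index and a generic high-pass residual in place of the 2D-PWVD spectrum diagrams, and the weighting rule is an assumption for demonstration only.

```python
# Illustrative sketch only: a global SSIM index between two high-frequency images,
# used to set an adaptive fusion weight. Not the paper's exact formulation.
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def fuse_highpass(hp_ms, hp_pan):
    """Weight the two high-frequency components by their structural similarity."""
    s = ssim_global(hp_ms, hp_pan)
    w = 0.5 * (1 + s)            # assumed weighting rule, chosen for illustration
    return w * hp_pan + (1 - w) * hp_ms
```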
NASA Technical Reports Server (NTRS)
Blonski, Slawomir; Gasser, Gerald; Russell, Jeffrey; Ryan, Robert; Terrie, Greg; Zanoni, Vicki
2001-01-01
Multispectral data requirements for Earth science applications are not always rigorously studied before a new remote sensing system is designed. A study of the spatial resolution, spectral bandpasses, and radiometric sensitivity requirements of real-world applications would focus the design on providing maximum benefits to the end-user community. To support systematic studies of multispectral data requirements, the Applications Research Toolbox (ART) has been developed at NASA's Stennis Space Center. The ART software allows users to create and assess simulated datasets while varying a wide range of system parameters. The simulations are based on data acquired by existing multispectral and hyperspectral instruments, and the resulting datasets can be further evaluated for specific end-user applications. Spectral synthesis of multispectral images from hyperspectral data is a key part of the ART software. In this process, hyperspectral image cubes are transformed into multispectral imagery without changes in spatial sampling and resolution. The transformation algorithm takes into account the spectral responses of both the synthesized broad multispectral bands and the utilized narrow hyperspectral bands. To validate the spectral synthesis algorithm, simulated multispectral images are compared with images collected near-coincidentally by the Landsat 7 ETM+ and EO-1 ALI instruments. Hyperspectral images acquired with the airborne AVIRIS instrument and with the Hyperion instrument onboard the EO-1 satellite were used as input data for the presented simulations.
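Spectral synthesis of a broad multispectral band from narrow hyperspectral bands can be sketched as a response-weighted average; the box-shaped response and band centers below are hypothetical, not the ART implementation.

```python
# Minimal sketch under stated assumptions: each broad band is a response-weighted
# average of the narrow hyperspectral bands, normalized by the integrated response.
import numpy as np

def synthesize_band(hyper_cube, hyper_centers, broad_response):
    """
    hyper_cube: (rows, cols, n_narrow) hyperspectral radiance cube
    hyper_centers: (n_narrow,) band-center wavelengths in nm
    broad_response: callable giving the broad band's relative response at a wavelength
    """
    weights = np.array([broad_response(w) for w in hyper_centers])
    weights = weights / weights.sum()
    return np.tensordot(hyper_cube, weights, axes=([2], [0]))

# Example with a hypothetical box-shaped response for a 520-600 nm green band:
green = synthesize_band(np.random.rand(64, 64, 224),
                        np.linspace(400, 2500, 224),
                        lambda w: 1.0 if 520 <= w <= 600 else 0.0)
```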
The fusion of satellite and UAV data: simulation of high spatial resolution band
NASA Astrophysics Data System (ADS)
Jenerowicz, Agnieszka; Siok, Katarzyna; Woroszkiewicz, Malgorzata; Orych, Agata
2017-10-01
Remote sensing techniques used in precision agriculture and farming that apply imagery obtained with sensors mounted on UAV platforms have become more popular in the last few years due to the availability of low-cost UAV platforms and low-cost sensors. Data obtained from low altitudes with low-cost sensors can be characterised by high spatial and radiometric resolution but rather low spectral resolution; therefore, the application of imagery obtained with such technology is quite limited and can be used only for basic land cover classification. To enrich the spectral resolution of imagery acquired with low-cost sensors from low altitudes, the authors propose the fusion of RGB data obtained with a UAV platform with multispectral satellite imagery. The fusion is based on the pansharpening process, which aims to integrate the spatial details of a high-resolution panchromatic image with the spectral information of lower resolution multispectral or hyperspectral imagery to obtain multispectral or hyperspectral images with high spatial resolution. The key to pansharpening is to properly estimate the missing spatial details of the multispectral images while preserving their spectral properties. In this research, the authors present the fusion of RGB images (with high spatial resolution) obtained with sensors mounted on low-cost UAV platforms and multispectral satellite imagery from satellite sensors, i.e., Landsat 8 OLI. To perform the fusion of UAV data with satellite imagery, a panchromatic band was simulated from the RGB data based on a linear combination of the spectral channels. Next, the Gram-Schmidt pansharpening method was applied to the simulated bands and the multispectral satellite images. As a result of the fusion, the authors obtained several multispectral images with very high spatial resolution and then analysed the spatial and spectral accuracies of the processed images.
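The panchromatic-band simulation step can be sketched as a linear combination of the RGB channels; the equal weights below are an assumption, since the actual coefficients would be tuned to the spectral response of the emulated panchromatic band.

```python
# Sketch of simulating a panchromatic band from an RGB image (weights are assumed).
import numpy as np

def simulate_pan(rgb, weights=(1/3, 1/3, 1/3)):
    """rgb: (rows, cols, 3) UAV image; returns a single simulated panchromatic band."""
    w = np.asarray(weights, dtype=float)
    return np.tensordot(rgb.astype(float), w / w.sum(), axes=([2], [0]))

pan_sim = simulate_pan(np.random.randint(0, 256, (512, 512, 3)))
# pan_sim would then be used, together with the Landsat 8 OLI multispectral bands,
# as input to a Gram-Schmidt pansharpening routine in the photogrammetric software.
```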
A new multi-spectral feature level image fusion method for human interpretation
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-03-01
Various methods to perform multi-spectral image fusion have been suggested, mostly at the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in a three-task experiment using MSSF against two established methods, averaging and principal components analysis (PCA), and against its two source bands, visible and infrared. The three tasks that we studied were: (1) simple target detection, (2) spatial orientation, and (3) camouflaged target detection. MSSF proved superior to the other fusion methods in all three tests; MSSF also outperformed the source images in the spatial orientation and camouflaged target detection tasks. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.
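For reference, the two pixel-level baselines (averaging and PCA fusion) can be sketched as follows; MSSF itself is a feature-level method and is not reproduced here.

```python
# Sketch of the two baseline pixel-level fusion methods used for comparison.
import numpy as np

def fuse_average(vis, ir):
    return 0.5 * (vis.astype(float) + ir.astype(float))

def fuse_pca(vis, ir):
    """Project the two bands onto their first principal component."""
    stack = np.stack([vis.ravel(), ir.ravel()], axis=1).astype(float)
    centered = stack - stack.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    fused = centered @ vt[0]
    return fused.reshape(vis.shape)
```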
Multispectral laser imaging for advanced food analysis
NASA Astrophysics Data System (ADS)
Senni, L.; Burrascano, P.; Ricci, M.
2016-07-01
A hardware-software apparatus for food inspection capable of multispectral NIR laser imaging at four different wavelengths is discussed. The system was designed to operate in a through-transmission configuration to detect the presence of unwanted foreign bodies inside samples, whether packed or unpacked. A modified lock-in technique was employed to counterbalance the significant signal attenuation due to transmission across the sample and to extract the multispectral information more efficiently. The NIR laser wavelengths used to acquire the multispectral images can be varied to deal with different materials and to focus on specific aspects. In the present work, the wavelengths were selected after a preliminary analysis to enhance the image contrast between foreign bodies and food in the sample, thus identifying the location and nature of the defects. Experimental results obtained from several specimens, with and without packaging, are presented, and the multispectral image processing as well as the achievable spatial resolution of the system are discussed.
Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images
Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki
2015-01-01
In various unmanned aerial vehicle (UAV) imaging applications, multisensor super-resolution (SR) has become a challenging problem that attracts increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to produce a higher resolution (HR) image and thereby improve the performance of the UAV imaging system. The primary objective of this paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm combining directionally-adaptive constraints and a multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in terms of objective measures. PMID:26007744
Hyperspectral imaging for food processing automation
NASA Astrophysics Data System (ADS)
Park, Bosoon; Lawrence, Kurt C.; Windham, William R.; Smith, Doug P.; Feldner, Peggy W.
2002-11-01
This paper presents research results demonstrating that hyperspectral imaging can be used effectively for detecting feces (from the duodenum, ceca, and colon) and ingesta on the surface of poultry carcasses, with potential application to real-time, on-line processing of poultry for automatic safety inspection. The hyperspectral imaging system included a line scan camera with a prism-grating-prism spectrograph, fiber optic line lighting, motorized lens control, and hyperspectral image processing software. Hyperspectral image processing algorithms, specifically a band ratio of dual-wavelength (565/517 nm) images followed by thresholding, were effective for identifying fecal and ingesta contamination of poultry carcasses. A multispectral imaging system, comprising a common aperture camera with three optical trim filters (515.4 nm with 8.6-nm FWHM, 566.4 nm with 8.8-nm FWHM, and 631 nm with 10.2-nm FWHM) that were selected and validated with the hyperspectral imaging system, was developed for real-time, on-line application. The total image processing time required for the multispectral images captured by the common aperture camera was approximately 251 msec, or 3.99 frames/sec. A preliminary test showed that the accuracy of the real-time multispectral imaging system in detecting feces and ingesta on corn/soybean-fed poultry carcasses was 96%. However, many false positive spots that cause system errors were also detected.
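The dual-wavelength band-ratio rule can be sketched as below; the 565/517 nm ratio comes from the text, while the threshold value is an illustrative assumption.

```python
# Sketch of a band-ratio contamination mask (threshold chosen for illustration only).
import numpy as np

def contamination_mask(band_565, band_517, threshold=1.05, eps=1e-6):
    """Flag pixels whose 565/517 nm ratio exceeds a threshold as candidate contaminants."""
    ratio = band_565.astype(float) / (band_517.astype(float) + eps)
    return ratio > threshold
```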
NASA Astrophysics Data System (ADS)
Mansoor, Awais; Robinson, J. Paul; Rajwa, Bartek
2009-02-01
Modern automated microscopic imaging techniques such as high-content screening (HCS), high-throughput screening, 4D imaging, and multispectral imaging are capable of producing hundreds to thousands of images per experiment. For quick retrieval, fast transmission, and storage economy, these images should be saved in a compressed format. A considerable number of techniques based on interband and intraband redundancies of multispectral images have been proposed in the literature for the compression of multispectral and 3D temporal data. However, these works have been carried out mostly in the fields of remote sensing and video processing. Compression for multispectral optical microscopy imaging, with its own set of specialized requirements, has remained under-investigated. Digital photography-oriented 2D compression techniques like JPEG (ISO/IEC IS 10918-1) and JPEG2000 (ISO/IEC 15444-1) are generally adopted for multispectral images; these optimize visual quality but do not necessarily preserve the integrity of scientific data, not to mention the suboptimal performance of 2D compression techniques in compressing 3D images. Herein we report our work on a new low bit-rate wavelet-based compression scheme for multispectral fluorescence biological imaging. The sparsity of significant coefficients in high-frequency subbands of multispectral microscopic images is found to be much greater than in natural images; therefore, a quad-tree concept such as Said et al.'s SPIHT [1], along with the correlation of insignificant wavelet coefficients, is proposed to further exploit redundancy in the high-frequency subbands. Our work proposes a 3D extension to SPIHT, incorporating a new hierarchical inter- and intra-spectral relationship among the coefficients of the 3D wavelet-decomposed image. The new relationship, apart from adopting the parent-child relationship of classical SPIHT, also brings forth a conditional "sibling" relationship by relating only the insignificant wavelet coefficients of subbands at the same level of decomposition. The insignificant quadtrees in different subbands of the high-frequency subband class are coded by a combined function to reduce redundancy. A number of experiments conducted on microscopic multispectral images have shown promising results for the proposed method over current state-of-the-art image-compression techniques.
NASA Astrophysics Data System (ADS)
Zabarylo, U.; Minet, O.
2010-01-01
Investigations into the application of optical procedures for the diagnosis of rheumatism using scattered light images are only at the beginning, both in terms of new image-processing methods and subsequent clinical application. For semi-automatic diagnosis using laser light, the multispectral scattered light images are registered and overlapped into pseudo-coloured images, which depict diagnostically essential content by visually highlighting pathological changes.
On-board multispectral classification study
NASA Technical Reports Server (NTRS)
Ewalt, D.
1979-01-01
The factors relating to onboard multispectral classification were investigated. The functions implemented in ground-based processing systems for current Earth observation sensors were reviewed. The Multispectral Scanner, Thematic Mapper, Return Beam Vidicon, and Heat Capacity Mapper were studied. The concept of classification was reviewed and extended from the ground-based image processing functions to an onboard system capable of multispectral classification. Eight different onboard configurations, each with varying amounts of ground-spacecraft interaction, were evaluated. Each configuration was evaluated in terms of turnaround time, onboard processing and storage requirements, geometric and classification accuracy, onboard complexity, and ancillary data required from the ground.
Multispectral Image Compression Based on DSC Combined with CCSDS-IDC
Li, Jin; Xing, Fei; Sun, Ting; You, Zheng
2014-01-01
A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually works on a satellite where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (such as the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by a DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with a Slepian-Wolf (SW) DSC strategy based on QC-LDPC codes in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test the algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS) algorithm has better compression performance than traditional compression approaches. PMID:25110741
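The bit-plane view of quantized wavelet coefficients on which a BPE operates can be sketched as follows; the DWT, Slepian-Wolf coding, and QC-LDPC stages are not shown.

```python
# Simplified sketch of extracting bit planes from quantized wavelet coefficients.
import numpy as np

def bit_planes(coeffs, n_planes=8):
    """Quantize coefficients to integers and return bit planes, most significant first."""
    q = np.abs(coeffs).astype(np.int64)
    return [(q >> p) & 1 for p in range(n_planes - 1, -1, -1)]

planes = bit_planes(np.random.randn(64, 64) * 50)
# Each plane would be entropy coded; inter-band redundancy between adjacent
# spectral bands is then removed by the distributed source coding stage.
```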
Visual enhancement of unmixed multispectral imagery using adaptive smoothing
Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.
2004-01-01
Adaptive smoothing (AS) has previously been proposed as a method to smooth uniform regions of an image, retain contrast edges, and enhance edge boundaries. The method is an implementation of the anisotropic diffusion process and results in a gray scale image. This paper discusses modifications to the AS method for application to multi-band data, which result in a color segmented image. The process was used to visually enhance the three most distinct abundance fraction images produced by Lagrange constraint neural network learning-based unmixing of Landsat 7 Enhanced Thematic Mapper Plus multispectral sensor data. A mutual information-based method was applied to select the three most distinct fraction images for subsequent visualization as a red, green, and blue composite. A reported image restoration technique (partial restoration) was applied to the multispectral data to reduce unmixing error, although evaluation of the performance of this technique was beyond the scope of this paper. The modified smoothing process resulted in a color segmented image with homogeneous regions separated by sharpened, coregistered multiband edges. The segmented image showed improved class separation, which is important for subsequent operations involving data classification.
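Adaptive smoothing is an implementation of anisotropic diffusion; a minimal Perona-Malik style diffusion step is sketched below as an illustration of that process, without the paper's multi-band modifications or edge-boundary enhancement.

```python
# Sketch of explicit Perona-Malik diffusion on a single-band image (wrap-around borders).
import numpy as np

def diffuse(img, n_iter=20, kappa=0.1, lam=0.2):
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # forward differences to the four neighbours
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # conduction coefficients fall off at strong edges, preserving them
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```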
Implementation and evaluation of ILLIAC 4 algorithms for multispectral image processing
NASA Technical Reports Server (NTRS)
Swain, P. H.
1974-01-01
Data concerning a multidisciplinary and multi-organizational effort to implement multispectral data analysis algorithms on a revolutionary computer, the ILLIAC 4, are reported. The effectiveness and efficiency of implementing digital multispectral data analysis techniques for producing useful land use classifications from satellite-collected data were demonstrated.
USDA-ARS?s Scientific Manuscript database
Using unmanned aircraft systems (UAS) as remote sensing platforms offers the unique ability for repeated deployment for acquisition of high temporal resolution data at very high spatial resolution. Most image acquisitions from UAS have been in the visible bands, while multispectral remote sensing ap...
Applying reconfigurable hardware to the analysis of multispectral and hyperspectral imagery
NASA Astrophysics Data System (ADS)
Leeser, Miriam E.; Belanovic, Pavle; Estlick, Michael; Gokhale, Maya; Szymanski, John J.; Theiler, James P.
2002-01-01
Unsupervised clustering is a powerful technique for processing multispectral and hyperspectral images. Last year, we reported on an implementation of k-means clustering for multispectral images. Our implementation in reconfigurable hardware processed 10-channel multispectral images two orders of magnitude faster than a software implementation of the same algorithm. The advantage of using reconfigurable hardware to accelerate k-means clustering is clear; the disadvantage is that the hardware implementation worked for one specific dataset. It is a non-trivial task to change this implementation to handle a dataset with a different number of spectral channels, bits per spectral channel, or number of pixels, or to change the number of clusters. These changes require knowledge of the hardware design process and could take several days of a designer's time. Since multispectral datasets come in many shapes and sizes, being able to easily change the k-means implementation for these different datasets is important. For this reason, we have developed a parameterized implementation of the k-means algorithm. Our design is parameterized by the number of pixels in an image, the number of channels per pixel, and the number of bits per channel, as well as the number of clusters. These parameters can easily be changed in a few minutes by someone not familiar with the design process. The resulting implementation is very close in performance to the original hardware implementation and has the added advantage that the parameterized design compiles approximately three times faster than the original.
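In software terms, the parameterization amounts to treating image size, channel count, bits per channel, and cluster count as plain arguments; the sketch below illustrates this, using a Manhattan (L1) distance because it is cheap in hardware, although the design's actual distance metric is an assumption here.

```python
# Sketch of a parameterized cluster-assignment step (distance metric assumed, not the design's).
import numpy as np
from dataclasses import dataclass

@dataclass
class KMeansConfig:
    n_pixels: int
    n_channels: int
    bits_per_channel: int
    n_clusters: int

def assign_clusters(pixels, centers):
    """One assignment step: label each pixel with its nearest center under L1 distance."""
    d = np.abs(pixels[:, None, :] - centers[None, :, :]).sum(axis=2)
    return d.argmin(axis=1)

cfg = KMeansConfig(n_pixels=128 * 128, n_channels=10, bits_per_channel=12, n_clusters=8)
pixels = np.random.randint(0, 2 ** cfg.bits_per_channel, (cfg.n_pixels, cfg.n_channels))
centers = pixels[np.random.choice(cfg.n_pixels, cfg.n_clusters, replace=False)]
labels = assign_clusters(pixels, centers)
```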
Photographic techniques for enhancing ERTS MSS data for geologic information
NASA Technical Reports Server (NTRS)
Yost, E.; Geluso, W.; Anderson, R.
1974-01-01
Satellite multispectral black-and-white photographic negatives of Luna County, New Mexico, obtained by ERTS on 15 August and 2 September 1973, were precisely reprocessed into positive images and analyzed in an additive color viewer. In addition, an isoluminous (uniform brightness) color rendition of the image was constructed. The isoluminous technique emphasizes subtle differences between multispectral bands by greatly enhancing the color of the superimposed composite of all bands and eliminating the effects of brightness caused by sloping terrain. Basaltic lava flows were more accurately displayed in the precision processed multispectral additive color ERTS renditions than on existing state geological maps. Malpais lava flows and small basaltic occurrences not appearing on existing geological maps were identified in ERTS multispectral color images.
NASA Astrophysics Data System (ADS)
Minet, Olaf; Scheibe, Patrick; Beuthan, Jürgen; Zabarylo, Urszula
2009-10-01
State-of-the-art image processing methods offer new possibilities for diagnosing diseases using scattered light. The optical diagnosis of rheumatism is taken as an example to show that the diagnostic sensitivity can be improved using overlapped pseudo-coloured images of different wavelengths, provided that multispectral images are recorded to compensate for any motion related artefacts which occur during examination.
Multispectral image analysis for object recognition and classification
NASA Astrophysics Data System (ADS)
Viau, C. R.; Payeur, P.; Cretu, A.-M.
2016-05-01
Computer and machine vision applications are used in numerous fields to analyze static and dynamic imagery in order to assist or automate decision-making processes. Advancements in sensor technologies now make it possible to capture and visualize imagery at various wavelengths (or bands) of the electromagnetic spectrum. Multispectral imaging has countless applications in various fields including (but not limited to) security, defense, space, medical, manufacturing and archeology. The development of advanced algorithms to process and extract salient information from the imagery is a critical component of the overall system performance. The fundamental objective of this research project was to investigate the benefits of combining imagery from the visual and thermal bands of the electromagnetic spectrum to improve the recognition rates and accuracy of commonly found objects in an office setting. A multispectral dataset (visual and thermal) was captured and features from the visual and thermal images were extracted and used to train support vector machine (SVM) classifiers. The SVM's class prediction ability was evaluated separately on the visual, thermal and multispectral testing datasets.
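The classification stage can be sketched with scikit-learn as follows; the feature vectors are random placeholders standing in for the features extracted from the captured dataset.

```python
# Hedged sketch: concatenate visual and thermal feature vectors and train an SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
vis_feats = rng.normal(size=(200, 32))      # placeholder visual-band features
thermal_feats = rng.normal(size=(200, 32))  # placeholder thermal-band features
labels = rng.integers(0, 5, size=200)       # five hypothetical object classes

X = np.hstack([vis_feats, thermal_feats])   # multispectral (combined) feature vectors
clf = SVC(kernel="rbf").fit(X, labels)
pred = clf.predict(X)
```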
Development of a Portable 3CCD Camera System for Multispectral Imaging of Biological Samples
Lee, Hoyoung; Park, Soo Hyun; Noh, Sang Ha; Lim, Jongguk; Kim, Moon S.
2014-01-01
Recent studies have suggested the need for imaging devices capable of multispectral imaging beyond the visible region, to allow for quality and safety evaluations of agricultural commodities. Conventional multispectral imaging devices lack flexibility in spectral waveband selectivity for such applications. In this paper, a recently developed portable 3CCD camera with significant improvements over existing imaging devices is presented. A beam-splitter prism assembly for 3CCD was designed to accommodate three interference filters that can be easily changed for application-specific multispectral waveband selection in the 400 to 1000 nm region. We also designed and integrated electronic components on printed circuit boards with firmware programming, enabling parallel processing, synchronization, and independent control of the three CCD sensors, to ensure the transfer of data without significant delay or data loss due to buffering. The system can stream 30 frames (3-waveband images in each frame) per second. The potential utility of the 3CCD camera system was demonstrated in the laboratory for detecting defect spots on apples. PMID:25350510
NASA Technical Reports Server (NTRS)
Blackwell, R. J.
1982-01-01
Remote sensing data analysis for water quality monitoring is evaluated. Data analysis and image processing techniques are applied to LANDSAT remote sensing data to produce an effective operational tool for lake water quality surveying and monitoring. Digital image processing and analysis techniques were designed, developed, tested, and applied to LANDSAT multispectral scanner (MSS) data and conventional surface-acquired data. Utilization of these techniques facilitates the surveying and monitoring of large numbers of lakes in an operational manner. Supervised multispectral classification, when used in conjunction with surface-acquired water quality indicators, is used to characterize water body trophic status. Unsupervised multispectral classification, when interpreted by lake scientists familiar with a specific water body, yields classifications of equal validity with supervised methods and in a more cost-effective manner. Image data base technology is used to great advantage in characterizing other effects contributing to water quality, including drainage basin configuration, terrain slope, soil, precipitation, and land cover characteristics.
Combined use of LiDAR data and multispectral earth observation imagery for wetland habitat mapping
NASA Astrophysics Data System (ADS)
Rapinel, Sébastien; Hubert-Moy, Laurence; Clément, Bernard
2015-05-01
Although wetlands play a key role in controlling flooding and nonpoint source pollution, sequestering carbon and providing an abundance of ecological services, the inventory and characterization of wetland habitats are most often limited to small areas. This explains why the understanding of their ecological functioning is still insufficient for a reliable functional assessment on areas larger than a few hectares. While LiDAR data and multispectral Earth Observation (EO) images are often used separately to map wetland habitats, their combined use is currently being assessed for different habitat types. The aim of this study is to evaluate the combination of multispectral and multiseasonal imagery and LiDAR data to precisely map the distribution of wetland habitats. The image classification was performed by combining an object-based approach and decision-tree modeling. Four multispectral images with high (SPOT-5) and very high spatial resolution (Quickbird, KOMPSAT-2, aerial photographs) were classified separately. Another classification was then applied integrating summer and winter multispectral image data and three layers derived from LiDAR data: vegetation height, microtopography and intensity return. The comparison of classification results shows that some habitats are better identified on the winter image and others on the summer image (overall accuracies = 58.5 and 57.6%). It also points out that classification accuracy is highly improved (overall accuracy = 86.5%) when combining LiDAR data and multispectral images. Moreover, this study highlights the advantage of integrating vegetation height, microtopography and intensity parameters in the classification process. This article demonstrates that information provided by the synergistic use of multispectral images and LiDAR data can help in wetland functional assessment.
3D tensor-based blind multispectral image decomposition for tumor demarcation
NASA Astrophysics Data System (ADS)
Kopriva, Ivica; Peršin, Antun
2010-03-01
Blind decomposition of a multi-spectral fluorescent image for tumor demarcation is formulated by exploiting the tensorial structure of the image. The first contribution of the paper is identification of the matrix of spectral responses and the 3D tensor of spatial distributions of the materials present in the image from Tucker3 or PARAFAC models of the 3D image tensor. The second contribution is a clustering-based estimate of the number of materials present in the image as well as the matrix of their spectral profiles. The 3D tensor of the spatial distributions of the materials is recovered through 3-mode multiplication of the multi-spectral image tensor and the inverse of the matrix of spectral profiles. The tensor representation of the multi-spectral image preserves its local spatial structure, which is lost, due to the vectorization process, when matrix factorization-based decomposition methods (such as non-negative matrix factorization and independent component analysis) are used. The superior performance of the tensor-based image decomposition over matrix factorization-based decompositions is demonstrated on an experimental red-green-blue (RGB) image with known ground truth as well as on RGB fluorescent images of skin tumors (basal cell carcinoma).
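The recovery step, a mode-3 (spectral-mode) product of the image tensor with the (pseudo-)inverse of the spectral-profile matrix, can be sketched as follows.

```python
# Sketch of recovering spatial distributions given an estimated spectral-profile matrix A.
import numpy as np

def unmix_tensor(image_tensor, spectral_profiles):
    """
    image_tensor: (rows, cols, n_bands) multi-spectral image
    spectral_profiles: (n_bands, n_materials) estimated spectral responses
    returns: (rows, cols, n_materials) spatial distributions of the materials
    """
    a_inv = np.linalg.pinv(spectral_profiles)          # (n_materials, n_bands)
    return np.einsum('ijk,mk->ijm', image_tensor, a_inv)

rgb = np.random.rand(64, 64, 3)
A = np.random.rand(3, 3)                               # hypothetical spectral profiles
abundances = unmix_tensor(rgb, A)
```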
NASA Astrophysics Data System (ADS)
Klaessens, John H. G. M.; Nelisse, Martin; Verdaasdonk, Rudolf M.; Noordmans, Herke Jan
2013-03-01
Clinical interventions can cause changes in tissue perfusion, oxygenation, or temperature. Real-time imaging of these phenomena could be useful for surgical strategy or understanding of physiological regulation mechanisms. Two non-contact imaging techniques were applied for imaging large tissue areas: LED-based multispectral imaging (MSI, 17 different wavelengths, 370 nm-880 nm) and thermal imaging (7.5 to 13.5 μm). Oxygenation concentration changes were calculated using different analysis methods, and the advantages of these methods are presented for stationary and dynamic applications. Concentration calculations of chromophores in tissue require the right choice of wavelengths. The effects of different wavelength choices on hemoglobin concentration calculations were studied under laboratory conditions and consequently applied in clinical studies. Corrections for interferences during the clinical registrations (ambient light fluctuations, tissue movements) were performed. The wavelength dependency of the algorithms was studied, and the wavelength sets with the best results are presented. The multispectral and thermal imaging systems were applied during clinical intervention studies: reperfusion after tissue flap transplantation (ENT), effectiveness of a local anesthetic block, and open brain surgery in patients with epileptic seizures. The LED multispectral imaging system successfully imaged the perfusion and oxygenation changes during clinical interventions. The thermal images show local heat distributions over tissue areas as a result of changes in tissue perfusion. Multispectral imaging and thermal imaging provide complementary information and are promising techniques for real-time diagnostics of physiological processes in medicine.
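One common way to turn multispectral attenuation changes into hemoglobin concentration changes is a modified Beer-Lambert least-squares fit, sketched below; the wavelengths and extinction coefficients are placeholders, and the paper's own algorithm may differ.

```python
# Hedged sketch: least-squares fit of hemoglobin concentration changes from
# attenuation changes at several wavelengths (all numbers are placeholders).
import numpy as np

wavelengths = np.array([660., 770., 810., 880.])       # nm, assumed LED set
# rows: wavelengths; columns: [HbO2, Hb] extinction coefficients (placeholder values)
extinction = np.array([[0.08, 0.30],
                       [0.14, 0.12],
                       [0.20, 0.18],
                       [0.27, 0.16]])

def concentration_changes(delta_od, pathlength=1.0):
    """delta_od: attenuation change at each wavelength; returns (dHbO2, dHb)."""
    coeffs, *_ = np.linalg.lstsq(extinction * pathlength, delta_od, rcond=None)
    return coeffs

d_hbo2, d_hb = concentration_changes(np.array([0.01, 0.02, 0.03, 0.02]))
```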
NASA Astrophysics Data System (ADS)
Piermattei, Livia; Bozzi, Carlo Alberto; Mancini, Adriano; Tassetti, Anna Nora; Karel, Wilfried; Pfeifer, Norbert
2017-04-01
Unmanned aerial vehicles (UAVs) in combination with consumer grade cameras have become standard tools for photogrammetric applications and surveying. The recent generation of multispectral, cost-efficient and lightweight cameras has fostered a breakthrough in the practical application of UAVs for precision agriculture. For this application, multispectral cameras typically use Green, Red, Red-Edge (RE) and Near Infrared (NIR) wavebands to capture both visible and invisible images of crops and vegetation. These bands are very effective for deriving characteristics like soil productivity, plant health and overall growth. However, the quality of results is affected by the sensor architecture, the spatial and spectral resolutions, the pattern of image collection, and the processing of the multispectral images. In particular, collecting data with multiple sensors requires an accurate spatial co-registration of the various UAV image datasets. Multispectral processed data in precision agriculture are mainly presented as orthorectified mosaics used to export information maps and vegetation indices. This work aims to investigate the acquisition parameters and processing approaches of this new type of image data in order to generate orthoimages using different sensors and UAV platforms. Within our experimental area we placed a grid of artificial targets, whose position was determined with differential global positioning system (dGPS) measurements. Targets were used as ground control points to georeference the images and as checkpoints to verify the accuracy of the georeferenced mosaics. The primary aim is to present a method for the spatial co-registration of visible, Red-Edge, and NIR image sets. To demonstrate the applicability and accuracy of our methodology, multi-sensor datasets were collected over the same area and approximately at the same time using the fixed-wing UAV senseFly "eBee". The images were acquired with the camera Canon S110 RGB, the multispectral cameras Canon S110 NIR and S110 RE and with the multi-camera system Parrot Sequoia, which is composed of single-band cameras (Green, Red, Red Edge, NIR and RGB). Imagery from each sensor was georeferenced and mosaicked with the commercial software Agisoft PhotoScan Pro and different approaches for image orientation were compared. To assess the overall spatial accuracy of each dataset the root mean square error was computed between check point coordinates measured with dGPS and coordinates retrieved from georeferenced image mosaics. Additionally, image datasets from different UAV platforms (i.e. DJI Phantom 4Pro, DJI Phantom 3 professional, and DJI Inspire 1 Pro) were acquired over the same area and the spatial accuracy of the orthoimages was evaluated.
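The accuracy measure itself is straightforward; a minimal sketch of the checkpoint RMSE computation is given below.

```python
# Sketch: RMSE between dGPS checkpoint coordinates and coordinates read from the mosaic.
import numpy as np

def rmse(gps_xy, mosaic_xy):
    """Both arrays are (n_points, 2) planimetric coordinates in the same CRS."""
    residuals = np.asarray(gps_xy, float) - np.asarray(mosaic_xy, float)
    return np.sqrt((residuals ** 2).sum(axis=1).mean())
```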
Estimating atmospheric parameters and reducing noise for multispectral imaging
Conger, James Lynn
2014-02-25
A method and system for estimating atmospheric radiance and transmittance. An atmospheric estimation system is divided into a first phase and a second phase. The first phase inputs an observed multispectral image and an initial estimate of the atmospheric radiance and transmittance for each spectral band and calculates the atmospheric radiance and transmittance for each spectral band, which can be used to generate a "corrected" multispectral image that is an estimate of the surface multispectral image. The second phase inputs the observed multispectral image and the surface multispectral image that was generated by the first phase and removes noise from the surface multispectral image by smoothing out changes in the average deviations of temperatures.
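The first-phase correction implied above follows from inverting the usual linear model observed = transmittance x surface + path radiance; a minimal per-band sketch:

```python
# Sketch of per-band atmospheric correction once radiance and transmittance are estimated.
import numpy as np

def correct_band(observed, path_radiance, transmittance, eps=1e-6):
    return (observed.astype(float) - path_radiance) / max(transmittance, eps)

cube = np.random.rand(100, 100, 4)                   # observed multispectral image (placeholder)
L = np.array([0.05, 0.04, 0.03, 0.02])               # estimated path radiance per band (assumed)
t = np.array([0.80, 0.85, 0.90, 0.92])               # estimated transmittance per band (assumed)
surface = np.stack([correct_band(cube[..., b], L[b], t[b]) for b in range(4)], axis=-1)
```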
NASA Cold Land Processes Experiment (CLPX 2002/03): Spaceborne remote sensing
Robert E. Davis; Thomas H. Painter; Don Cline; Richard Armstrong; Terry Haran; Kyle McDonald; Rick Forster; Kelly Elder
2008-01-01
This paper describes satellite data collected as part of the 2002/03 Cold Land Processes Experiment (CLPX). These data include multispectral and hyperspectral optical imaging, and passive and active microwave observations of the test areas. The CLPX multispectral optical data include the Advanced Very High Resolution Radiometer (AVHRR), the Landsat Thematic Mapper/...
An integrated compact airborne multispectral imaging system using embedded computer
NASA Astrophysics Data System (ADS)
Zhang, Yuedong; Wang, Li; Zhang, Xuguo
2015-08-01
An integrated compact airborne multispectral imaging system using an embedded-computer-based control system was developed for small aircraft multispectral imaging applications. The multispectral imaging system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system), and an embedded computer. The embedded computer has excellent universality and expansibility, and has advantages in volume and weight for an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls the camera parameter settings, the filter wheel and stabilized platform, image and POS data acquisition, and storage of the image data. The airborne multispectral imaging system can connect peripheral devices through the ports of the embedded computer, so system operation and management of the stored image data are easy. This airborne multispectral imaging system has the advantages of small volume, multiple functions, and good expansibility. Imaging experiment results show that this system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.
Image denoising and deblurring using multispectral data
NASA Astrophysics Data System (ADS)
Semenishchev, E. A.; Voronin, V. V.; Marchuk, V. I.
2017-05-01
Decision-making systems are currently becoming widespread. These systems are based on the analysis of video sequences as well as additional data, such as object volume, changes in size, the behavior of one or a group of objects, temperature gradients, and the presence of local areas with strong differences. Security and control systems are the main areas of application. Noise in the images strongly influences subsequent processing and decision making. This paper considers the problem of primary signal processing for solving the tasks of image denoising and deblurring of multispectral data. The additional information from multispectral channels can improve the efficiency of object classification. In this paper we use a method that combines information about the objects obtained by cameras in different frequency bands. We apply a method based on simultaneous minimization of an L2 data term and the squared first-order differences of the sequence of estimates to denoise the images and restore blur at the edges. In case of information loss, an approach based on interpolation of data taken from the analysis of objects located in other areas and information obtained from the multispectral camera is applied. The effectiveness of the proposed approach is shown on a set of test images.
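The estimator named above, simultaneous minimization of an L2 data term and a first-order squared-difference smoothness term, has a simple closed form in one dimension; the sketch below illustrates it for a single image row and is not the paper's exact formulation.

```python
# Sketch: minimize ||x - y||^2 + lam * ||D x||^2 with D the first-difference operator.
import numpy as np

def smooth_row(y, lam=5.0):
    n = len(y)
    D = np.diff(np.eye(n), axis=0)              # (n-1, n) first-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

noisy = np.sin(np.linspace(0, 3, 200)) + 0.1 * np.random.randn(200)
denoised = smooth_row(noisy)
```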
Low SWaP multispectral sensors using dichroic filter arrays
NASA Astrophysics Data System (ADS)
Dougherty, John; Varghese, Ron
2015-06-01
The benefits of multispectral imaging are well established in a variety of applications including remote sensing, authentication, satellite and aerial surveillance, machine vision, biomedical, and other scientific and industrial uses. However, many of the potential solutions require more compact, robust, and cost-effective cameras to realize these benefits. The next generation of multispectral sensors and cameras needs to deliver improvements in size, weight, power, portability, and spectral band customization to support widespread deployment for a variety of purpose-built aerial, unmanned, and scientific applications. A novel implementation uses micro-patterning of dichroic filters [1] into Bayer and custom mosaics, enabling true real-time multispectral imaging with simultaneous multi-band image acquisition. Consistent with color image processing, individual spectral channels are de-mosaiced, with each channel providing an image of the field of view. This approach can be implemented across a variety of wavelength ranges and on a variety of detector types including linear, area, silicon, and InGaAs. The dichroic filter array approach can also reduce payloads and increase range for unmanned systems, with the capability to support both handheld and autonomous systems. Recent examples and results of 4-band RGB + NIR dichroic filter arrays in multispectral cameras are discussed. Benefits and tradeoffs of multispectral sensors using dichroic filter arrays are compared with alternative approaches, including their passivity, spectral range, customization options, and scalable production.
IMAGE 100: The interactive multispectral image processing system
NASA Technical Reports Server (NTRS)
Schaller, E. S.; Towles, R. W.
1975-01-01
The need for rapid, cost-effective extraction of useful information from vast quantities of multispectral imagery available from aircraft or spacecraft has resulted in the design, implementation and application of a state-of-the-art processing system known as IMAGE 100. Operating on the general principle that all objects or materials possess unique spectral characteristics or signatures, the system uses this signature uniqueness to identify similar features in an image by simultaneously analyzing signatures in multiple frequency bands. Pseudo-colors, or themes, are assigned to features having identical spectral characteristics. These themes are displayed on a color CRT, and may be recorded on tape, film, or other media. The system was designed to incorporate key features such as interactive operation, user-oriented displays and controls, and rapid-response machine processing. Owing to these features, the user can readily control and/or modify the analysis process based on his knowledge of the input imagery. Effective use can be made of conventional photographic interpretation skills and state-of-the-art machine analysis techniques in the extraction of useful information from multispectral imagery. This approach results in highly accurate multitheme classification of imagery in seconds or minutes rather than the hours often involved in processing using other means.
Davis, Philip A.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the South Bamyan mineral district, which has areas with a spectral reflectance anomaly that require field investigation. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2006, 2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element.
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for South Bamyan) and the WGS84 datum. The final image mosaics for the South Bamyan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
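A minimal sketch of the inter-image radiometric adjustment step described above, assuming NumPy and that the overlapping pixels of the standard and incoming images are already on a common grid; the actual USGS processing may differ in detail.

    # Illustrative sketch (not the USGS code) of the relative radiometric adjustment
    # described above: band values of a new image are fitted to the standard image
    # over their overlap with a linear least-squares (gain/offset) model.
    import numpy as np

    def adjust_band_to_standard(new_band, standard_band, overlap_mask):
        """new_band, standard_band: 2-D reflectance arrays on a common grid;
        overlap_mask: boolean array marking pixels covered by both images."""
        x = new_band[overlap_mask].ravel()
        y = standard_band[overlap_mask].ravel()
        gain, offset = np.polyfit(x, y, 1)   # least-squares line y = gain*x + offset
        return gain * new_band + offset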
Multispectral Image Processing for Plants
NASA Technical Reports Server (NTRS)
Miles, Gaines E.
1991-01-01
The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.
3D Land Cover Classification Based on Multispectral LIDAR Point Clouds
NASA Astrophysics Data System (ADS)
Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong
2016-06-01
A multispectral lidar system can emit simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured by the sensor's receiver, and the return signal is recorded together with the position and orientation information of the sensor. These recorded data are combined with GNSS/IMU data in post-processing to form high-density multispectral 3D point clouds. As the first commercial multispectral airborne lidar sensor, the Optech Titan system is capable of collecting point-cloud data from all three channels: 532 nm visible (green), 1064 nm near-infrared (NIR), and 1550 nm intermediate infrared (IR). It has become a new source of data for 3D land cover classification. The paper presents an Object-Based Image Analysis (OBIA) approach that uses only multispectral lidar point-cloud datasets for 3D land cover classification. The approach consists of three steps. Firstly, multispectral intensity images are segmented into image objects on the basis of multi-resolution segmentation integrating different scale parameters. Secondly, intensity objects are classified into nine categories by using customized classification-index features and a combination of the multispectral reflectance with the vertical distribution of object features. Finally, accuracy assessment is conducted by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show high overall accuracy for most of the land cover types. Over 90% overall accuracy is achieved by using multispectral lidar point clouds for 3D land cover classification.
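A minimal sketch of the final accuracy-assessment step, assuming class labels have already been extracted at the random reference sample points; the confusion-matrix bookkeeping shown here is generic rather than taken from the paper.

    # Compare classified labels at reference sample points with reference labels
    # and report the confusion matrix and overall accuracy.
    import numpy as np

    def overall_accuracy(classified, reference, n_classes):
        """classified, reference: 1-D integer label arrays of equal length."""
        cm = np.zeros((n_classes, n_classes), dtype=int)
        for c, r in zip(classified, reference):
            cm[r, c] += 1                      # rows: reference, columns: classified
        return cm, cm.trace() / cm.sum()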
Multispectral Mosaic of the Aristarchus Crater and Plateau
1998-06-03
The Aristarchus region is one of the most diverse and interesting areas on the Moon. About 500 images from NASA's Clementine spacecraft were processed and combined into a multispectral mosaic of this region. http://photojournal.jpl.nasa.gov/catalog/PIA00090
NASA Technical Reports Server (NTRS)
1973-01-01
Topics discussed include the management and processing of earth resources information, special-purpose processors for the machine processing of remotely sensed data, digital image registration by a mathematical programming technique, the use of remote-sensor data in land classification (in particular, the use of ERTS-1 multispectral scanning data), the use of remote-sensor data in geometrical transformations and mapping, earth resource measurement with the aid of ERTS-1 multispectral scanning data, the use of remote-sensor data in the classification of turbidity levels in coastal zones and in the identification of ecological anomalies, the problem of feature selection and the classification of objects in multispectral images, the estimation of proportions of certain categories of objects, and a number of special systems and techniques. Individual items are announced in this issue.
On-line object feature extraction for multispectral scene representation
NASA Technical Reports Server (NTRS)
Ghassemian, Hassan; Landgrebe, David
1988-01-01
A new on-line unsupervised object-feature extraction method is presented that reduces the complexity and cost associated with the analysis of multispectral image data and with data transmission, storage, archival, and distribution. The ambiguity in the object detection process can be reduced if the spatial dependencies that exist among adjacent pixels are intelligently incorporated into the decision-making process. A unity relation that must exist among the pixels of an object is defined. The Automatic Multispectral Image Compaction Algorithm (AMICA) uses the within-object pixel-feature gradient vector as valuable contextual information to construct the object's features, which preserve the class-separability information within the data. For on-line object extraction, the path hypothesis and the basic mathematical tools for its realization are introduced in terms of a specific similarity measure and adjacency relation. AMICA is applied to several sets of real image data, and the performance and reliability of the features are evaluated.
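The following is only a loose illustration of the on-line object-growing idea (a running object mean stands in for the within-object gradient test; it is not AMICA itself), assuming spectra arrive pixel by pixel along one scan line.

    # Illustrative sketch: a pixel joins the current object while its spectral
    # difference from the object mean stays below a threshold; otherwise a new
    # object is started. The object means serve as compact object features.
    import numpy as np

    def grow_objects_along_line(line_pixels, threshold):
        """line_pixels: (n, bands) spectra along one scan line.
        Returns an object label per pixel and the mean spectrum of each object."""
        labels = np.zeros(len(line_pixels), dtype=int)
        means, count, mean = [], 1, line_pixels[0].astype(float)
        label = 0
        for i in range(1, len(line_pixels)):
            if np.linalg.norm(line_pixels[i] - mean) <= threshold:  # adjacency + similarity
                count += 1
                mean += (line_pixels[i] - mean) / count             # running object mean
            else:
                means.append(mean)
                label += 1
                mean, count = line_pixels[i].astype(float), 1
            labels[i] = label
        means.append(mean)
        return labels, np.array(means)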
NASA Technical Reports Server (NTRS)
1982-01-01
The format of the HDT-AM product which contains partially processed LANDSAT D and D Prime multispectral scanner image data is defined. Recorded-data formats, tape format, and major frame types are described.
Retinex Preprocessing for Improved Multi-Spectral Image Classification
NASA Technical Reports Server (NTRS)
Thompson, B.; Rahman, Z.; Park, S.
2000-01-01
The goal of multi-image classification is to identify and label "similar regions" within a scene. The ability to correctly classify a remotely sensed multi-image of a scene is affected by the ability of the classification process to adequately compensate for the effects of atmospheric variations and sensor anomalies. Better classification may be obtained if the multi-image is preprocessed before classification, so as to reduce the adverse effects of image formation. In this paper, we discuss the overall impact on multi-spectral image classification when the retinex image enhancement algorithm is used to preprocess multi-spectral images. The retinex is a multi-purpose image enhancement algorithm that performs dynamic range compression, reduces the dependence on lighting conditions, and generally enhances apparent spatial resolution. The retinex has been successfully applied to the enhancement of many different types of grayscale and color images. We show in this paper that retinex preprocessing improves the spatial structure of multi-spectral images and thus provides better within-class variations than would otherwise be obtained without the preprocessing. For a series of multi-spectral images obtained with diffuse and direct lighting, we show that without retinex preprocessing the class spectral signatures vary substantially with the lighting conditions. Whereas multi-dimensional clustering without preprocessing produced one-class homogeneous regions, the classification on the preprocessed images produced multi-class non-homogeneous regions. This lack of homogeneity is explained by the interaction between different agronomic treatments applied to the regions: the preprocessed images are closer to ground truth. The principal advantage that the retinex offers is that for different lighting conditions classifications derived from the retinex preprocessed images look remarkably "similar", and thus more consistent, whereas classifications derived from the original images, without preprocessing, are much less similar.
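A minimal single-scale retinex sketch, offered only to illustrate the dynamic-range-compression idea behind the preprocessing (the paper's retinex algorithm differs in detail); SciPy's Gaussian filter and the sigma value are assumptions.

    # Single-scale retinex: the log of each band is compared with the log of its
    # Gaussian-blurred surround, compressing dynamic range and reducing the
    # dependence on illumination before classification.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def single_scale_retinex(band, sigma=80.0, eps=1e-6):
        band = band.astype(float) + eps
        surround = gaussian_filter(band, sigma) + eps
        return np.log(band) - np.log(surround)

    def retinex_preprocess(cube, sigma=80.0):
        """cube: (rows, cols, bands); retinex applied band by band."""
        return np.dstack([single_scale_retinex(cube[..., b], sigma)
                          for b in range(cube.shape[-1])])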
Enhancing hyperspectral spatial resolution using multispectral image fusion: A wavelet approach
NASA Astrophysics Data System (ADS)
Jazaeri, Amin
High spectral and spatial resolution images have a significant impact in remote sensing applications. Because both spatial and spectral resolutions of spaceborne sensors are fixed by design and it is not possible to further increase the spatial or spectral resolution, techniques such as image fusion must be applied to achieve such goals. This dissertation introduces the concept of wavelet fusion between hyperspectral and multispectral sensors in order to enhance the spectral and spatial resolution of a hyperspectral image. To test the robustness of this concept, images from Hyperion (hyperspectral sensor) and Advanced Land Imager (multispectral sensor) were first co-registered and then fused using different wavelet algorithms. A regression-based fusion algorithm was also implemented for comparison purposes. The results show that the fused images using a combined bi-linear wavelet-regression algorithm have less error than other methods when compared to the ground truth. In addition, a combined regression-wavelet algorithm shows more immunity to misalignment of the pixels due to the lack of proper registration. The quantitative measures of average mean square error show that the performance of wavelet-based methods degrades when the spatial resolution of hyperspectral images becomes eight times less than its corresponding multispectral image. Regardless of what method of fusion is utilized, the main challenge in image fusion is image registration, which is also a very time intensive process. Because the combined regression wavelet technique is computationally expensive, a hybrid technique based on regression and wavelet methods was also implemented to decrease computational overhead. However, the gain in faster computation was offset by the introduction of more error in the outcome. The secondary objective of this dissertation is to examine the feasibility and sensor requirements for image fusion for future NASA missions in order to be able to perform onboard image fusion. In this process, the main challenge of image registration was resolved by registering the input images using transformation matrices of previously acquired data. The composite image resulted from the fusion process remarkably matched the ground truth, indicating the possibility of real time onboard fusion processing.
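A hedged sketch of one possible wavelet-fusion step, assuming PyWavelets and a hyperspectral band already resampled and co-registered to the multispectral grid; the dissertation's actual algorithms (and its regression variants) are not reproduced here.

    # One fusion step: the hyperspectral band keeps its approximation coefficients
    # while detail coefficients are injected from a co-registered, spectrally
    # similar multispectral band.
    import pywt

    def wavelet_fuse_band(hyper_band, multi_band, wavelet="bior1.3"):
        """hyper_band, multi_band: 2-D arrays of identical shape on a common grid."""
        cA_h, _details_h = pywt.dwt2(hyper_band, wavelet)
        _cA_m, details_m = pywt.dwt2(multi_band, wavelet)
        return pywt.idwt2((cA_h, details_m), wavelet)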
Nanohole-array-based device for 2D snapshot multispectral imaging
Najiminaini, Mohamadreza; Vasefi, Fartash; Kaminska, Bozena; Carson, Jeffrey J. L.
2013-01-01
We present a two-dimensional (2D) snapshot multispectral imager that utilizes the optical transmission characteristics of nanohole arrays (NHAs) in a gold film to resolve a mixture of input colors into multiple spectral bands. The multispectral device consists of blocks of NHAs, wherein each NHA has a unique periodicity that results in transmission resonances and minima in the visible and near-infrared regions. The multispectral device was illuminated over a wide spectral range, and the transmission was spectrally unmixed using a least-squares estimation algorithm. A NHA-based multispectral imaging system was built and tested in both reflection and transmission modes. The NHA-based multispectral imager was capable of extracting 2D multispectral images representative of four independent bands within the spectral range of 662 nm to 832 nm for a variety of targets. The multispectral device can potentially be integrated into a variety of imaging sensor systems. PMID:24005065
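A minimal least-squares unmixing sketch consistent with the description above, assuming the transmission response of each NHA block has been calibrated beforehand; array names and shapes are illustrative.

    # The measured transmission through each NHA block of one super-pixel is
    # modeled as a linear mixture of the unknown band intensities, which are
    # recovered by least-squares estimation.
    import numpy as np

    def unmix_bands(measurements, transmission_matrix):
        """measurements: (n_nha,) intensities from the NHA blocks of one super-pixel;
        transmission_matrix: (n_nha, n_bands) calibrated responses.
        Returns the estimated intensity in each spectral band."""
        bands, *_ = np.linalg.lstsq(transmission_matrix, measurements, rcond=None)
        return bands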
Parallel evolution of image processing tools for multispectral imagery
NASA Astrophysics Data System (ADS)
Harvey, Neal R.; Brumby, Steven P.; Perkins, Simon J.; Porter, Reid B.; Theiler, James P.; Young, Aaron C.; Szymanski, John J.; Bloch, Jeffrey J.
2000-11-01
We describe the implementation and performance of a parallel, hybrid evolutionary-algorithm-based system, which optimizes image processing tools for feature-finding tasks in multi-spectral imagery (MSI) data sets. Our system uses an integrated spatio-spectral approach and is capable of combining suitably-registered data from different sensors. We investigate the speed-up obtained by parallelization of the evolutionary process via multiple processors (a workstation cluster) and develop a model for prediction of run-times for different numbers of processors. We demonstrate our system on Landsat Thematic Mapper MSI, covering the recent Cerro Grande fire at Los Alamos, NM, USA.
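The paper's run-time model is not given here; as a hedged stand-in, the sketch below fits the common serial-plus-parallel form T(p) = a + b/p to measured run-times and predicts speed-up from it.

    # Fit T(p) = a + b/p by least squares from measured run-times, then predict
    # the speed-up relative to the single-processor time.
    import numpy as np

    def fit_runtime_model(procs, times):
        """procs: processor counts; times: measured run-times. Returns (a, b)."""
        procs = np.asarray(procs, dtype=float)
        A = np.column_stack([np.ones_like(procs), 1.0 / procs])
        (a, b), *_ = np.linalg.lstsq(A, np.asarray(times, dtype=float), rcond=None)
        return a, b

    def predicted_speedup(p, a, b):
        return (a + b) / (a + b / p)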
NASA Technical Reports Server (NTRS)
Jaggi, S.
1993-01-01
A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location from each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the Vector Quantization algorithm was further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS), with an RMS error of 15.8 pixels, was 195:1 (0.041 bpp), and with an RMS error of 3.6 pixels it was 18:1 (0.447 bpp). The algorithms were implemented in software and interfaced with the help of dedicated image processing boards to an 80386 PC compatible computer. Modules were developed for the task of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
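A minimal vector-quantization sketch in the sense described above (each vector stacks the co-located pixel values from all channels), using SciPy's k-means as a stand-in for the actual codebook training; the lossless Huffman stage is omitted.

    # Quantize per-pixel multichannel vectors against a learned codebook and
    # reconstruct the cube from the codebook entries.
    import numpy as np
    from scipy.cluster.vq import kmeans2, vq

    def vq_compress(cube, codebook_size=256):
        """cube: (rows, cols, channels). Returns codebook, per-pixel indices,
        and the reconstructed cube."""
        vectors = cube.reshape(-1, cube.shape[-1]).astype(float)
        codebook, _ = kmeans2(vectors, codebook_size, minit="++")
        indices, _ = vq(vectors, codebook)
        recon = codebook[indices].reshape(cube.shape)
        return codebook, indices, recon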
Terrain type recognition using ERTS-1 MSS images
NASA Technical Reports Server (NTRS)
Gramenopoulos, N.
1973-01-01
For the automatic recognition of earth resources from ERTS-1 digital tapes, both multispectral and spatial pattern recognition techniques are important. Recognition of terrain types is based on spatial signatures that become evident by processing small portions of an image through selected algorithms. An investigation of spatial signatures that are applicable to ERTS-1 MSS images is described. Artifacts in the spatial signatures seem to be related to the multispectral scanner. A method for suppressing such artifacts is presented. Finally, results of terrain type recognition for one ERTS-1 image are presented.
Experiences with digital processing of images at INPE
NASA Technical Reports Server (NTRS)
Mascarenhas, N. D. A. (Principal Investigator)
1984-01-01
Four different research experiments with digital image processing at INPE will be described: (1) edge detection by hypothesis testing; (2) image interpolation by finite impulse response filters; (3) spatial feature extraction methods in multispectral classification; and (4) translational image registration by sequential tests of hypotheses.
Development of online line-scan imaging system for chicken inspection and differentiation
NASA Astrophysics Data System (ADS)
Yang, Chun-Chieh; Chan, Diane E.; Chao, Kuanglin; Chen, Yud-Ren; Kim, Moon S.
2006-10-01
An online line-scan imaging system was developed for differentiation of wholesome and systemically diseased chickens. The hyperspectral imaging system used in this research can be directly converted to multispectral operation and would provide the ideal implementation of essential features for data-efficient high-speed multispectral classification algorithms. The imaging system consisted of an electron-multiplying charge-coupled-device (EMCCD) camera and an imaging spectrograph for line-scan images. The system scanned the surfaces of chicken carcasses on an eviscerating line at a poultry processing plant in December 2005. A method was created to recognize birds entering and exiting the field of view, and to locate a Region of Interest on the chicken images from which useful spectra were extracted for analysis. From analysis of the difference spectra between wholesome and systemically diseased chickens, four wavelengths of 468 nm, 501 nm, 582 nm and 629 nm were selected as key wavelengths for differentiation. The method of locating the Region of Interest will also have practical application in multispectral operation of the line-scan imaging system for online chicken inspection. This line-scan imaging system makes possible the implementation of multispectral inspection using the key wavelengths determined in this study with minimal software adaptations and without the need for cross-system calibration.
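As a hedged illustration of the wavelength-selection idea, the sketch below ranks bands by the absolute mean difference between the two classes' ROI spectra and keeps the top few; the paper's exact selection procedure may differ.

    # Rank spectral bands by the magnitude of the mean difference spectrum between
    # wholesome and systemically diseased samples and keep the strongest bands.
    import numpy as np

    def select_key_wavelengths(wholesome_spectra, diseased_spectra, wavelengths, n_keys=4):
        """spectra arrays: (n_samples, n_bands); wavelengths: (n_bands,) array."""
        diff = wholesome_spectra.mean(axis=0) - diseased_spectra.mean(axis=0)
        order = np.argsort(np.abs(diff))[::-1][:n_keys]
        return np.sort(wavelengths[order])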
Quality evaluation of pansharpened hyperspectral images generated using multispectral images
NASA Astrophysics Data System (ADS)
Matsuoka, Masayuki; Yoshioka, Hiroki
2012-11-01
Hyperspectral remote sensing can provide a smooth spectral curve of a target by using a set of higher spectral resolution detectors. The spatial resolution of the hyperspectral images, however, is generally much lower than that of multispectral images due to the lower energy of incident radiation. Pansharpening is an image-fusion technique that generates higher spatial resolution multispectral images by combining lower resolution multispectral images with higher resolution panchromatic images. In this study, higher resolution hyperspectral images were generated by pansharpening of simulated lower-resolution hyperspectral and higher-resolution multispectral data. Spectral and spatial qualities of the pansharpened images were then assessed in relation to the spectral bands of the multispectral images. Airborne hyperspectral data from AVIRIS were used in this study and were pansharpened using six methods. Quantitative evaluations of the pansharpened images were performed using two frequently used indices, ERGAS and the Q index.
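A sketch of the ERGAS index in its commonly used form, included only to illustrate the evaluation step; the Q index and the six pansharpening methods themselves are not reproduced.

    # ERGAS: 100 * (d_high / d_low) * sqrt(mean over bands of (RMSE_b / mean_b)^2).
    # Lower values indicate better spectral fidelity of the fused product.
    import numpy as np

    def ergas(reference, fused, resolution_ratio):
        """reference, fused: (rows, cols, bands) arrays on the same grid;
        resolution_ratio: high-resolution pixel size divided by low-resolution
        pixel size (e.g., 0.25 for a factor-4 pansharpening)."""
        n_bands = reference.shape[-1]
        acc = 0.0
        for b in range(n_bands):
            rmse = np.sqrt(np.mean((reference[..., b] - fused[..., b]) ** 2))
            acc += (rmse / reference[..., b].mean()) ** 2
        return 100.0 * resolution_ratio * np.sqrt(acc / n_bands)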
Adaptive Image Processing Methods for Improving Contaminant Detection Accuracy on Poultry Carcasses
USDA-ARS?s Scientific Manuscript database
Technical Abstract A real-time multispectral imaging system has demonstrated a science-based tool for fecal and ingesta contaminant detection during poultry processing. In order to implement this imaging system in the commercial poultry processing industry, the false positives must be removed. For doi...
SPEKTROP DPU: optoelectronic platform for fast multispectral imaging
NASA Astrophysics Data System (ADS)
Graczyk, Rafal; Sitek, Piotr; Stolarski, Marcin
2010-09-01
In recent years there has been an increasing need for high-quality Earth imaging in airborne and space applications. This is due to the fact that government and local authorities urge for up-to-date topological data for administrative purposes. On the other hand, interest in environmental sciences, the push for an ecological approach, and efficient agriculture and forest management are also heavily supported by Earth images in various resolutions and spectral ranges. The paper "SPEKTROP DPU: Opto-electronic platform for fast multi-spectral imaging" describes the architectural details of the data processing unit, part of a universal and modular platform that provides high-quality imaging functionality in aerospace applications.
Band co-registration modeling of LAPAN-A3/IPB multispectral imager based on satellite attitude
NASA Astrophysics Data System (ADS)
Hakim, P. R.; Syafrudin, A. H.; Utama, S.; Jayani, A. P. S.
2018-05-01
One significant geometric distortion in images from the LAPAN-A3/IPB multispectral imager is the co-registration error between the color-channel detectors. Band co-registration distortion can usually be corrected by one of several approaches: a manual method, an image-matching algorithm, or sensor modeling and calibration. This paper develops another approach to minimize band co-registration distortion in LAPAN-A3/IPB multispectral images by using supervised modeling of image-matching results with respect to satellite attitude. Modeling results show that the band co-registration error in the across-track axis is strongly influenced by the yaw angle, while the error in the along-track axis is moderately influenced by both the pitch and roll angles. The accuracy of the models obtained is good, with errors between 1 and 3 pixels for each axis of each band pair. This means that the model can be used to correct the distorted images without the need for a slower image-matching algorithm or the laborious effort required by the manual and sensor-calibration approaches. Since the calculation can be executed in the order of seconds, this approach can be used for real-time quick-look image processing in the ground station or even in on-board image processing on the satellite.
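A hedged sketch of the supervised modeling step, assuming per-scene co-registration offsets (from image matching) and attitude angles are available as training data; the linear regression form used here is an assumption.

    # Regress the co-registration offset of one axis on roll, pitch, and yaw, then
    # predict the offset for new scenes so the channels can be shifted into place.
    import numpy as np

    def fit_coregistration_model(roll, pitch, yaw, offset):
        """roll, pitch, yaw, offset: 1-D arrays over training scenes (offset in
        pixels, one axis at a time). Returns coefficients of
        offset ~ a*roll + b*pitch + c*yaw + d."""
        A = np.column_stack([roll, pitch, yaw, np.ones_like(np.asarray(roll, dtype=float))])
        coeffs, *_ = np.linalg.lstsq(A, np.asarray(offset, dtype=float), rcond=None)
        return coeffs

    def predict_offset(coeffs, roll, pitch, yaw):
        return coeffs[0] * roll + coeffs[1] * pitch + coeffs[2] * yaw + coeffs[3]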
Davis, Philip A.; Cagney, Laura E.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Balkhab mineral district, which has copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). 
Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Balkhab) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the Balkhab area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Balkhab study area, one subarea was designated for detailed field investigations (that is, the Balkhab Prospect subarea); this subarea was extracted from the area's image mosaic and is provided as separate embedded geotiff images.
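For illustration only (this is not the algorithm of Davis, 2007), a local-area stretch can be approximated by rescaling each picture element between the minimum and maximum values found within a circular neighborhood of the stated radius, as sketched below with SciPy filters.

    # Local-area contrast stretch: each pixel is linearly rescaled between the
    # local minimum and maximum inside a circular footprint.
    import numpy as np
    from scipy import ndimage

    def local_area_stretch(band, radius_pixels, out_max=255.0, eps=1e-6):
        y, x = np.ogrid[-radius_pixels:radius_pixels + 1, -radius_pixels:radius_pixels + 1]
        footprint = x * x + y * y <= radius_pixels * radius_pixels
        lo = ndimage.minimum_filter(band, footprint=footprint)
        hi = ndimage.maximum_filter(band, footprint=footprint)
        return out_max * (band - lo) / (hi - lo + eps)

    # Example: a 315-m radius at the 2.5-m enhanced resolution is 126 picture elements.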
Image-algebraic design of multispectral target recognition algorithms
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.
1994-06-01
In this paper, we discuss methods for multispectral ATR (Automated Target Recognition) of small targets that are sensed under suboptimal conditions, such as haze, smoke, and low light levels. In particular, we discuss our ongoing development of algorithms and software that effect intelligent object recognition by selecting ATR filter parameters according to ambient conditions. Our algorithms are expressed in terms of IA (image algebra), a concise, rigorous notation that unifies linear and nonlinear mathematics in the image processing domain. IA has been implemented on a variety of parallel computers, with preprocessors available for the Ada and FORTRAN languages. An image algebra C++ class library has recently been made available. Thus, our algorithms are both feasible implementationally and portable to numerous machines. Analyses emphasize the aspects of image algebra that aid the design of multispectral vision algorithms, such as parameterized templates that facilitate the flexible specification of ATR filters.
Davis, Philip A.; Cagney, Laura E.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Katawas mineral district, which has gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006).
Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Katawas) and the WGS84 datum. The final image mosaics are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Katawas study area, one subarea was designated for detailed field investigation (that is, the Gold subarea); this subarea was extracted from the area's image mosaic and is provided as a separate embedded geotiff image.
Davis, Philip A.; Cagney, Laura E.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the North Takhar mineral district, which has placer gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for North Takhar) and the WGS84 datum. The final image mosaics were subdivided into nine overlapping tiles or quadrants because of the large size of the target area. The nine image tiles (or quadrants) for the North Takhar area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.; Cagney, Laura E.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Baghlan mineral district, which has industrial clay and gypsum deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2006, 2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Baghlan) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the Baghlan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Uruzgan mineral district, which has tin and tungsten deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008, 2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Uruzgan) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Uruzgan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.; Cagney, Laura E.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the South Helmand mineral district, which has travertine deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (41 for South Helmand) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the South Helmand area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.; Cagney, Laura E.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Bakhud mineral district, which has industrial fluorite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (41 for Bakhud) and the WGS84 datum. The final image mosaics were subdivided into nine overlapping tiles or quadrants because of the large size of the target area. The nine image tiles (or quadrants) for the Bakhud area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
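The local-area histogram stretch operates on a per-pixel neighborhood. The sketch below approximates such a stretch with SciPy windowed filters; the circular footprint, the min/max stretch, and the 0–255 rescaling are assumptions for illustration (Davis, 2007, describes the actual algorithm). At the 2.5-m pixel size of the enhanced mosaics, a 315-m radius corresponds to roughly 126 pixels.

```python
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def local_area_stretch(band, radius_px):
    """Stretch each pixel using the value range inside a circular neighborhood.

    band      : 2D array of one spectral band.
    radius_px : neighborhood radius in pixels (e.g., 126 for a 315-m radius
                at 2.5-m pixels).
    """
    # Circular footprint covering the local area around each pixel.
    yy, xx = np.mgrid[-radius_px:radius_px + 1, -radius_px:radius_px + 1]
    footprint = (xx**2 + yy**2) <= radius_px**2

    lo = minimum_filter(band, footprint=footprint)
    hi = maximum_filter(band, footprint=footprint)
    span = np.where(hi > lo, hi - lo, 1.0)

    # Rescale each pixel relative to its local range (0..255 for display).
    return np.clip((band - lo) / span, 0.0, 1.0) * 255.0
```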
A methodology for evaluation of an interactive multispectral image processing system
NASA Technical Reports Server (NTRS)
Kovalick, William M.; Newcomer, Jeffrey A.; Wharton, Stephen W.
1987-01-01
Because of the considerable cost of an interactive multispectral image processing system, an evaluation of a prospective system should be performed to ascertain if it will be acceptable to the anticipated users. Evaluation of a developmental system indicated that the important system elements include documentation, user friendliness, image processing capabilities, and system services. The criteria and evaluation procedures for these elements are described herein. The following factors contributed to the success of the evaluation of the developmental system: (1) careful review of documentation prior to program development, (2) construction and testing of macromodules representing typical processing scenarios, (3) availability of other image processing systems for referral and verification, and (4) use of testing personnel with an applications perspective and experience with other systems. This evaluation was done in addition to and independently of program testing by the software developers of the system.
Adaptive illumination source for multispectral vision system applied to material discrimination
NASA Astrophysics Data System (ADS)
Conde, Olga M.; Cobo, Adolfo; Cantero, Paulino; Conde, David; Mirapeix, Jesús; Cubillas, Ana M.; López-Higuera, José M.
2008-04-01
A multispectral system based on a monochrome camera and an adaptive illumination source is presented in this paper. Its preliminary application is focused on material discrimination for the food and beverage industries, where monochrome, color and infrared imaging have been successfully applied for this task. This work proposes a different approach, in which the relevant wavelengths for the required discrimination task are selected in advance using a Sequential Forward Floating Selection (SFFS) algorithm. A light source based on Light Emitting Diodes (LEDs) at these wavelengths is then used to sequentially illuminate the material under analysis, and the resulting images are captured by a CCD camera with spectral response in the entire range of the selected wavelengths. Finally, the resulting multispectral planes are processed using a Spectral Angle Mapping (SAM) algorithm, whose output is the desired material classification. Among other advantages, this approach of controlled and specific illumination produces multispectral imaging with a simple monochrome camera, and cold illumination restricted to specific relevant wavelengths, which is desirable for the food and beverage industry. The proposed system has been tested with success for the automatic detection of foreign objects in the tobacco processing industry.
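The Spectral Angle Mapping step assigns each pixel to the reference spectrum with the smallest angular separation. A minimal NumPy sketch follows; the data layout and the reference-spectra matrix are placeholders, not the authors' implementation.

```python
import numpy as np

def spectral_angle_map(cube, references):
    """Classify pixels by the smallest spectral angle to a set of references.

    cube       : (H, W, B) stack of images, one plane per LED wavelength.
    references : (C, B) array of reference spectra, one row per material class.
    Returns an (H, W) array of class indices.
    """
    pix = cube.reshape(-1, cube.shape[-1]).astype(float)
    refs = references.astype(float)
    # cos(angle) = <p, r> / (|p| |r|), evaluated for every pixel/reference pair.
    dots = pix @ refs.T
    norms = np.linalg.norm(pix, axis=1, keepdims=True) * np.linalg.norm(refs, axis=1)
    angles = np.arccos(np.clip(dots / np.maximum(norms, 1e-12), -1.0, 1.0))
    return np.argmin(angles, axis=1).reshape(cube.shape[:2])
```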
Davis, Philip A.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the North Bamyan mineral district, which has copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for North Bamyan) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the North Bamyan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.; Cagney, Laura E.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Takhar mineral district, which has industrial evaporite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Takhar) and the WGS84 datum. The final image mosaics for the Takhar area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
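Because the tiles are embedded GeoTIFFs, their georeferencing can be read directly from the files. A short, hedged example using the rasterio library is shown below; the file name is a placeholder. UTM zone 42 north on the WGS84 datum corresponds to EPSG:32642 (zone 41 north to EPSG:32641).

```python
import rasterio  # common open-source GIS raster library

# File name is a placeholder; because the tiles are embedded GeoTIFFs, the
# georeferencing is read from the file itself and the .tfw is not required.
with rasterio.open("takhar_natural_color_mosaic.tif") as src:
    print(src.count, "bands,", src.width, "x", src.height, "pixels")
    print("CRS:", src.crs)          # expected UTM zone 42N / WGS84 (EPSG:32642)
    print("Pixel size:", src.res)   # expected ~2.5 m after resolution enhancement
    rgb = src.read()                # (bands, rows, cols) array of the composite
```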
Davis, Philip A.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Parwan mineral district, which has gold and copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2006, 2007), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006).
Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Parwan) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the Parwan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.
2014-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ghazni2 mineral district, which has spectral reflectance anomalies indicative of gold, mercury, and sulfur deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA, 2008, 2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Ghazni2) and the WGS84 datum. The images for the Ghazni2 area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ahankashan mineral district, which has copper and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008, 2009, 2010),but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Ahankashan) and the WGS84 datum. The final image mosaics were subdivided into five overlapping tiles or quadrants because of the large size of the target area. The five image tiles (or quadrants) for the Ahankashan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.
2014-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ghazni1 mineral district, which has spectral reflectance anomalies indicative of clay, aluminum, gold, silver, mercury, and sulfur deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA, 2008, 2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Ghazni1) and the WGS84 datum. The images for the Ghazni1 area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
NASA Technical Reports Server (NTRS)
Lovegreen, J. R.; Prosser, W. J.; Millet, R. A.
1975-01-01
A site in the Great Valley subsection of the Valley and Ridge physiographic province in eastern Pennsylvania was studied to evaluate the use of digital and analog image processing for geologic investigations. Ground truth at the site was obtained by a field mapping program, a subsurface exploration investigation, and a review of available published and unpublished literature. Remote sensing data were analyzed using standard manual techniques. LANDSAT-1 imagery was analyzed using digital image processing employing the multispectral Image 100 system and using analog color processing employing the VP-8 image analyzer. This study deals primarily with (1) linears identified by image processing, their correlation with known structural features, and their correlation with linears identified by manual interpretation, and (2) the identification of rock outcrops in areas of extensive vegetative cover using image processing. The results of this study indicate that image processing can be a cost-effective tool for evaluating geologic and linear features in regional studies encompassing large areas, such as for power plant siting. Digital image processing can be an effective tool for identifying rock outcrops in areas of heavy vegetative cover.
NASA Astrophysics Data System (ADS)
Dong, Yang; He, Honghui; He, Chao; Ma, Hui
2017-02-01
Mueller matrix polarimetry is a powerful tool for detecting microscopic structures and can therefore be used to monitor physiological changes in tissue samples. Meanwhile, spectral features of scattered light can also provide abundant microstructural information about tissues. In this paper, we acquire 2D multispectral backscattering Mueller matrix images of bovine skeletal muscle tissues and analyze their temporal variation using multispectral Mueller matrix parameters. The 2D images of the Mueller matrix elements are reduced to multispectral frequency distribution histograms (mFDHs) to reveal the dominant structural features of the muscle samples more clearly. For quantitative analysis, the multispectral Mueller matrix transformation (MMT) parameters are calculated to characterize the microstructural variations during the rigor mortis and proteolysis processes of the skeletal muscle tissue samples. The experimental results indicate that the multispectral MMT parameters can be used to distinguish different physiological stages of bovine skeletal muscle tissues over 24 hours and that, combined with the multispectral technique, Mueller matrix polarimetry and FDH analysis can monitor the microstructural variation features of skeletal muscle samples. The techniques may be used for quick assessment and quantitative monitoring of meat quality in the food industry.
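Reducing a Mueller matrix element image to a multispectral frequency distribution histogram is a per-wavelength binning operation. A minimal sketch is given below; the bin count and value range are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def multispectral_fdh(element_images, bins=64, value_range=(-1.0, 1.0)):
    """Build frequency distribution histograms for one Mueller matrix element.

    element_images : dict mapping wavelength (nm) to a 2D image of a
                     normalized Mueller matrix element (values in [-1, 1]).
    Returns a dict mapping wavelength to a normalized histogram (the mFDH).
    """
    fdhs = {}
    for wavelength, img in element_images.items():
        hist, _ = np.histogram(img.ravel(), bins=bins, range=value_range)
        fdhs[wavelength] = hist / hist.sum()   # normalize to a frequency distribution
    return fdhs
```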
Real-time hyperspectral imaging for food safety applications
USDA-ARS?s Scientific Manuscript database
Multispectral imaging systems with selected bands can commonly be used for real-time applications of food processing. Recent research has demonstrated several image processing methods including binning, noise removal filter, and appropriate morphological analysis in real-time mode can remove most fa...
Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking †
Kiku, Daisuke; Okutomi, Masatoshi
2017-01-01
Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking. PMID:29194407
Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking.
Monno, Yusuke; Kiku, Daisuke; Tanaka, Masayuki; Okutomi, Masatoshi
2017-12-01
Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking.
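Residual-interpolation methods are typically built on top of, and benchmarked against, simple interpolation baselines. The sketch below shows plain bilinear demosaicking for an RGGB Bayer mosaic, not the ARI algorithm itself; the RGGB phase assumption is illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Baseline bilinear demosaicking for an RGGB Bayer mosaic.

    raw : 2D float array of the single-channel Bayer samples
          (R at (0,0), G at (0,1)/(1,0), B at (1,1)).
    Returns an (H, W, 3) RGB estimate.
    """
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask
    # Standard normalized bilinear interpolation kernels.
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4
    r = convolve(raw * r_mask, k_rb, mode='mirror')
    g = convolve(raw * g_mask, k_g, mode='mirror')
    b = convolve(raw * b_mask, k_rb, mode='mirror')
    return np.dstack([r, g, b])
```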
NASA Technical Reports Server (NTRS)
Allen, Carlton; Jakes, Petr; Jaumann, Ralf; Marshall, John; Moses, Stewart; Ryder, Graham; Saunders, Stephen; Singer, Robert
1996-01-01
The field geology/process group examined the basic operations of a terrestrial field geologist and the manner in which these operations could be transferred to a planetary lander. Four basic requirements for robotic field geology were determined: geologic content; surface vision; mobility; and manipulation. Geologic content requires a combination of orbital and descent imaging. Surface vision requirements include range, resolution, stereo, and multispectral imaging. The minimum mobility for useful field geology depends on the scale of orbital imagery. Manipulation requirements include exposing unweathered surfaces, screening samples, and bringing samples in contact with analytical instruments. To support these requirements, several advanced capabilities for future development are recommended. Capabilities include near-infrared reflectance spectroscopy, hyper-spectral imaging, multispectral microscopy, artificial intelligence in support of imaging, x ray diffraction, x ray fluorescence, and rock chipping.
Processing of multispectral thermal IR data for geologic applications
NASA Technical Reports Server (NTRS)
Kahle, A. B.; Madura, D. P.; Soha, J. M.
1979-01-01
Multispectral thermal IR data were acquired with a 24-channel scanner flown in an aircraft over the E. Tintic Utah mining district. These digital image data required extensive computer processing in order to put the information into a format useful for a geologic photointerpreter. Simple enhancement procedures were not sufficient to reveal the total information content because the data were highly correlated in all channels. The data were shown to be dominated by temperature variations across the scene, while the much more subtle spectral variations between the different rock types were of interest. The image processing techniques employed to analyze these data are described.
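When all channels are dominated by a common temperature signal, a principal-component rotation is one standard way to isolate that correlated component from the subtler spectral differences. The NumPy sketch below illustrates this generic decorrelation step; it is not the specific processing chain used on the E. Tintic data.

```python
import numpy as np

def principal_components(cube):
    """Rotate a multichannel image onto its principal components.

    cube : (H, W, B) array of co-registered thermal IR channels.
    Returns an (H, W, B) array; band 0 carries most of the (temperature-driven)
    variance, later bands carry the subtler spectral differences.
    """
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(float)
    x -= x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]        # re-sort to descending variance
    pcs = x @ eigvecs[:, order]
    return pcs.reshape(h, w, b)
```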
Feature extraction from multiple data sources using genetic programming
NASA Astrophysics Data System (ADS)
Szymanski, John J.; Brumby, Steven P.; Pope, Paul A.; Eads, Damian R.; Esch-Mosher, Diana M.; Galassi, Mark C.; Harvey, Neal R.; McCulloch, Hersey D.; Perkins, Simon J.; Porter, Reid B.; Theiler, James P.; Young, Aaron C.; Bloch, Jeffrey J.; David, Nancy A.
2002-08-01
Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. We use the GENetic Imagery Exploitation (GENIE) software for this purpose, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often does in combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image processing algorithms that extract a range of land cover features including towns, wildfire burnscars, and forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szymanski, J. J.; Brumby, Steven P.; Pope, P. A.
Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. The tool used is the GENetic Imagery Exploitation (GENIE) software, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often does in combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image processing algorithms that extract a range of land-cover features including towns, grasslands, wild fire burn scars, and several types of forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.
NASA Technical Reports Server (NTRS)
Blonski, Slawomir; Glasser, Gerald; Russell, Jeffrey; Ryan, Robert; Terrie, Greg; Zanoni, Vicki
2003-01-01
Spectral band synthesis is a key step in the process of creating a simulated multispectral image from hyperspectral data. In this step, narrow hyperspectral bands are combined into broader multispectral bands. Such an approach has been used quite often, but to the best of our knowledge, the accuracy of band synthesis simulations has not been evaluated thus far. Therefore, the main goal of this paper is to provide validation of the spectral band synthesis algorithm used in the ART software. The next section contains a description of the algorithm and an example of its application. Using spectral responses of AVIRIS, Hyperion, ALI, and ETM+, the following section shows how the synthesized spectral bands compare with actual bands, and it presents an evaluation of the simulation accuracy based on results of MODTRAN modeling. In the final sections of the paper, simulated images are compared with data acquired by actual satellite sensors. First, a Landsat 7 ETM+ image is simulated using an AVIRIS hyperspectral data cube. Then, two datasets collected with the Hyperion instrument from the EO-1 satellite are used to simulate multispectral images from the ALI and ETM+ sensors.
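In essence, band synthesis forms each broad multispectral band as a response-weighted average of the narrow hyperspectral bands. The sketch below illustrates that idea; the weighting scheme and variable names are assumptions, not the ART software's algorithm.

```python
import numpy as np

def synthesize_band(hyper_cube, hyper_centers, target_response):
    """Combine narrow hyperspectral bands into one broad multispectral band.

    hyper_cube      : (H, W, B) radiance cube with B narrow bands.
    hyper_centers   : (B,) band-center wavelengths in nm.
    target_response : callable returning the broad band's relative spectral
                      response (0..1) at a given wavelength.
    """
    weights = np.array([target_response(wl) for wl in hyper_centers], float)
    weights /= weights.sum()                       # normalize the response weights
    return np.tensordot(hyper_cube, weights, axes=([2], [0]))
```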
NASA Astrophysics Data System (ADS)
Neukum, Gerhard; Jaumann, Ralf; Scholten, Frank; Gwinner, Klaus
2017-11-01
At the Institute of Space Sensor Technology and Planetary Exploration of the German Aerospace Center (DLR), the High Resolution Stereo Camera (HRSC) has been designed for international missions to planet Mars. For more than three years an airborne version of this camera, the HRSC-A, has been successfully applied in many flight campaigns and in a variety of different applications. It combines 3D-capabilities and high resolution with multispectral data acquisition. Variable resolutions can be generated depending on the camera control settings. A high-end GPS/INS system in combination with the multi-angle image information yields precise, high-frequency orientation data for the acquired image lines. In order to handle these data, a completely automated photogrammetric processing system has been developed that allows multispectral 3D-image products to be generated for large areas, with planimetric and height accuracies in the decimeter range. This accuracy has been confirmed by detailed investigations.
NASA Astrophysics Data System (ADS)
Chen, Maomao; Zhou, Yuan; Su, Han; Zhang, Dong; Luo, Jianwen
2017-04-01
Imaging of the pharmacokinetic parameters in dynamic fluorescence molecular tomography (DFMT) can provide three-dimensional metabolic information for biological studies and drug development. However, owing to the ill-posed nature of the FMT inverse problem, the relatively low quality of the parametric images makes it difficult to investigate the different metabolic processes of the fluorescent targets with small distances. An excitation-resolved multispectral DFMT method is proposed; it is based on the fact that the fluorescent targets with different concentrations show different variations in the excitation spectral domain and can be considered independent signal sources. With an independent component analysis method, the spatial locations of different fluorescent targets can be decomposed, and the fluorescent yields of the targets at different time points can be recovered. Therefore, the metabolic process of each component can be independently investigated. Simulations and phantom experiments are carried out to evaluate the performance of the proposed method. The results demonstrated that the proposed excitation-resolved multispectral method can effectively improve the reconstruction accuracy of the parametric images in DFMT.
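Treating each fluorescent target as an independent source over the excitation spectral dimension is a blind-source-separation problem. The sketch below uses scikit-learn's FastICA to illustrate the decomposition; the data layout and component count are placeholders, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_targets(yields, n_targets=2):
    """Decompose excitation-resolved reconstructions into independent targets.

    yields    : (n_excitations, n_voxels) matrix of reconstructed fluorescent
                yields, one row per excitation wavelength.
    n_targets : assumed number of independent fluorescent targets.
    Returns an (n_targets, n_voxels) array of spatial maps, one per target.
    """
    ica = FastICA(n_components=n_targets, random_state=0)
    # Fit over the excitation dimension; ica.mixing_ then holds each target's
    # excitation-spectral signature.
    sources = ica.fit_transform(yields.T).T        # (n_targets, n_voxels)
    return sources
```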
Zhang, Dongyan; Zhou, Xingen; Zhang, Jian; Lan, Yubin; Xu, Chao; Liang, Dong
2018-01-01
Detection and monitoring are the essential first steps for effective management of sheath blight (ShB), a major disease in rice worldwide. Unmanned aerial systems have high potential to improve this detection process since they can reduce the time needed for scouting for the disease at a field scale, and are affordable and user-friendly in operation. In this study, a commercial quadrotor unmanned aerial vehicle (UAV), equipped with digital and multispectral cameras, was used to capture imagery data of research plots with 67 rice cultivars and elite lines. Collected imagery data were then processed and analyzed to characterize the development of ShB and quantify different levels of the disease in the field. Through color feature extraction and color space transformation of images, it was found that the color transformation could qualitatively detect the infected areas of ShB in the field plots. However, it was less effective at detecting different levels of the disease. Five vegetation indices were then calculated from the multispectral images, and ground truths of disease severity and GreenSeeker-measured NDVI (Normalized Difference Vegetation Index) were collected. The results of relationship analyses indicate that there was a strong correlation between ground-measured NDVIs and image-extracted NDVIs with an R2 of 0.907 and a root mean square error (RMSE) of 0.0854, and a good correlation between image-extracted NDVIs and disease severity with an R2 of 0.627 and an RMSE of 0.0852. Use of image-based NDVIs extracted from multispectral images could quantify different levels of ShB in the field plots with an accuracy of 63%. These results demonstrate that a consumer-grade UAV integrated with digital and multispectral cameras can be an effective tool to detect the ShB disease at a field scale.
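The image-extracted NDVI referred to above is the standard normalized difference of the near-infrared and red bands. A minimal sketch is given below; the plot-averaging helper is an illustrative addition for comparison against ground measurements.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    safe = np.where(denom == 0, 1.0, denom)   # guard against division by zero
    out = (nir - red) / safe
    return np.where(denom == 0, 0.0, out)

def plot_mean_ndvi(nir, red, plot_mask):
    """Average NDVI over one research plot (boolean mask of plot pixels)."""
    return float(ndvi(nir, red)[plot_mask].mean())
```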
Gimbaled multispectral imaging system and method
Brown, Kevin H.; Crollett, Seferino; Henson, Tammy D.; Napier, Matthew; Stromberg, Peter G.
2016-01-26
A gimbaled multispectral imaging system and method is described herein. In a general embodiment, the gimbaled multispectral imaging system has a cross support that defines a first gimbal axis and a second gimbal axis, wherein the cross support is rotatable about the first gimbal axis. The gimbaled multispectral imaging system comprises a telescope fixed to an upper end of the cross support, such that rotation of the cross support about the first gimbal axis changes the tilt of the telescope. The gimbaled multispectral imaging system includes optics that facilitate on-gimbal detection of visible light and off-gimbal detection of infrared light.
The development of a specialized processor for a space-based multispectral earth imager
NASA Astrophysics Data System (ADS)
Khedr, Mostafa E.
2008-10-01
This work was done in the Department of Computer Engineering, Lvov Polytechnic National University, Lvov, Ukraine, as a thesis entitled "Space Imager Computer System for Raw Video Data Processing" [1]. This work describes the synthesis and practical implementation of a specialized computer system for raw data control and processing onboard a satellite multispectral Earth imager. This computer system is intended for satellites with resolution in the range of one meter with 12-bit precision. The design is based mostly on general off-the-shelf components such as field-programmable gate arrays (FPGAs), plus custom-designed software for interfacing with a PC and test equipment. The designed system was successfully manufactured and is now fully functioning in orbit.
CMOS Time-Resolved, Contact, and Multispectral Fluorescence Imaging for DNA Molecular Diagnostics
Guo, Nan; Cheung, Ka Wai; Wong, Hiu Tung; Ho, Derek
2014-01-01
Instrumental limitations such as bulkiness and high cost prevent the fluorescence technique from becoming ubiquitous for point-of-care deoxyribonucleic acid (DNA) detection and other in-field molecular diagnostics applications. The complementary metal-oxide-semiconductor (CMOS) technology, benefiting from process scaling, provides several advanced capabilities such as high integration density, high-resolution signal processing, and low power consumption, enabling sensitive, integrated, and low-cost fluorescence analytical platforms. In this paper, CMOS time-resolved, contact, and multispectral imaging are reviewed. Recently reported CMOS fluorescence analysis microsystem prototypes are surveyed to highlight the present state of the art. PMID:25365460
Quality assessment of butter cookies applying multispectral imaging
Andresen, Mette S; Dissing, Bjørn S; Løje, Hanne
2013-01-01
A method for characterization of butter cookie quality by assessing the surface browning and water content using multispectral images is presented. Based on evaluations of the browning of butter cookies, cookies were manually divided into groups. From this categorization, reference values were calculated for a statistical prediction model correlating multispectral images with a browning score. The browning score is calculated as a function of oven temperature and baking time and is presented as a quadratic response surface. The investigated process window covered baking times of 4–16 min and oven temperatures of 160–200°C in a forced-convection, electrically heated oven. In addition to the browning score, a model for predicting the average water content based on the same images is presented. This shows how multispectral images of butter cookies may be used for the assessment of different quality parameters. Statistical analysis showed that the most significant wavelengths for browning predictions were in the interval 400–700 nm, whereas the wavelengths significant for water prediction were primarily located in the near-infrared spectrum. The water prediction model was found to correctly estimate the average water content with an absolute error of 0.22%. From the images it was also possible to follow the browning and drying propagation from the cookie edge toward the center. PMID:24804036
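A quadratic response surface relating browning score to baking time and oven temperature can be fitted by ordinary least squares. The sketch below illustrates such a fit; the coefficient layout and function names are illustrative, not the authors' model code.

```python
import numpy as np

def fit_quadratic_surface(time_min, temp_c, browning):
    """Fit browning ~ b0 + b1*t + b2*T + b3*t^2 + b4*T^2 + b5*t*T."""
    t, T, y = map(np.asarray, (time_min, temp_c, browning))
    X = np.column_stack([np.ones_like(t), t, T, t**2, T**2, t * T])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_browning(coeffs, t, T):
    """Evaluate the fitted response surface at baking time t (min), temp T (degC)."""
    return (coeffs[0] + coeffs[1] * t + coeffs[2] * T
            + coeffs[3] * t**2 + coeffs[4] * T**2 + coeffs[5] * t * T)
```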
The trophic classification of lakes using ERTS multispectral scanner data
NASA Technical Reports Server (NTRS)
Blackwell, R. J.; Boland, D. H.
1975-01-01
Lake classification methods based on the use of ERTS data are described. Preliminary classification results obtained by multispectral and digital image processing techniques indicate satisfactory correlation between ERTS data and EPA-supplied water analysis. Techniques for determining lake trophic levels using ERTS data are examined, and data obtained for 20 lakes are discussed.
Davis, Philip A.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This DS consists of the locally enhanced ALOS image mosaics for each of the 24 mineral project areas (referred to herein as areas of interest), whose locality names, locations, and main mineral occurrences are shown on the index map of Afghanistan (fig. 1). ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency, but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
PRISM image orthorectification for one-half of the target areas was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using SPARKLE logic, which is described in Davis (2006). Each of the four-band images within each resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a specified radius that was usually 500 m. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (either 41 or 42) and the WGS84 datum. Most final image mosaics were subdivided into overlapping tiles or quadrants because of the large size of the target areas. The image tiles (or quadrants) for each area of interest are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Approximately one-half of the study areas have at least one subarea designated for detailed field investigations; the subareas were extracted from the area's image mosaic and are provided as separate embedded geotiff images.
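The band-reflectance adjustment between overlapping images can be illustrated with a small sketch: for each band, a gain and an offset mapping a new image onto the standard image are fitted by linear least squares over their overlap pixels. This is only an illustrative reconstruction of the idea described above; the function names and interface are assumptions, not the USGS code.

    import numpy as np

    def fit_band_adjustment(overlap_standard, overlap_new):
        """Fit reflectance_standard ~= gain * reflectance_new + offset over the
        overlap region of two images, one band at a time (linear least squares)."""
        A = np.column_stack([overlap_new.ravel(), np.ones(overlap_new.size)])
        (gain, offset), *_ = np.linalg.lstsq(A, overlap_standard.ravel(), rcond=None)
        return gain, offset

    def adjust_band(band, gain, offset):
        """Apply the fitted adjustment to a full band before mosaicking."""
        return gain * band + offset

In this scheme each additional image would be adjusted sequentially against the standard image, band by band, before being composited into the mosaic.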
Automatic Feature Extraction System.
1982-12-01
exploitation. It was used for processing of black-and-white and multispectral reconnaissance photography, side-looking synthetic aperture radar imagery...the image data and different software modules for image queueing and formatting; the result of the input process will be images in standard AFES file...timely manner. The FFS configuration provides the environment necessary for integrated testing of image processing functions and design and
Oximetry using multispectral imaging: theory and application
NASA Astrophysics Data System (ADS)
MacKenzie, Lewis E.; Harvey, Andrew R.
2018-06-01
Multispectral imaging (MSI) is a technique for measurement of blood oxygen saturation in vivo that can be applied using various imaging modalities to provide new insights into physiology and disease development. This tutorial aims to provide a thorough introduction to the theory and application of MSI oximetry for researchers new to the field, whilst also providing detailed information for more experienced researchers. The optical theory underlying two-wavelength oximetry, three-wavelength oximetry, pulse oximetry, and multispectral oximetry algorithms is described in detail. The varied challenges of applying MSI oximetry to in vivo applications are outlined and discussed, covering the optical properties of blood and tissue, optical paths in blood vessels, tissue auto-fluorescence, oxygen diffusion, and common oximetry artefacts. Essential image processing techniques for MSI are discussed, in particular, image acquisition, image registration strategies, and blood vessel line profile fitting. Calibration and validation strategies for MSI are discussed, including comparison techniques, physiological interventions, and phantoms. The optical principles and unique imaging capabilities of various cutting-edge MSI oximetry techniques are discussed, including photoacoustic imaging, spectroscopic optical coherence tomography, and snapshot MSI.
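As a concrete instance of the simplest algorithm the tutorial covers, two-wavelength oximetry under a Beer-Lambert model reduces to a closed form: with OD(lambda) = c*L*[S*eps_HbO2(lambda) + (1-S)*eps_Hb(lambda)], the ratio R = OD(lambda1)/OD(lambda2) cancels the concentration-path term c*L, and solving for the saturation gives S = (eps_Hb(lambda1) - R*eps_Hb(lambda2)) / (R*(eps_HbO2(lambda2) - eps_Hb(lambda2)) - (eps_HbO2(lambda1) - eps_Hb(lambda1))). The sketch below implements this expression; the extinction coefficients are rough placeholders (an oxygen-sensitive red band and a near-isosbestic band), and the model ignores scattering, path-length differences, and the other in-vivo effects discussed in the tutorial.

    def so2_two_wavelength(od1, od2, eps_hbo2_1, eps_hb_1, eps_hbo2_2, eps_hb_2):
        """Estimate oxygen saturation S from optical densities at two wavelengths,
        assuming a shared concentration-path term that cancels in the ratio R."""
        R = od1 / od2
        numerator = eps_hb_1 - R * eps_hb_2
        denominator = R * (eps_hbo2_2 - eps_hb_2) - (eps_hbo2_1 - eps_hb_1)
        return numerator / denominator

    # Placeholder coefficients: wavelength 1 is strongly oxygen sensitive,
    # wavelength 2 is near an isosbestic point (values are illustrative only).
    print(so2_two_wavelength(od1=0.29, od2=0.50,
                             eps_hbo2_1=319.0, eps_hb_1=3227.0,
                             eps_hbo2_2=800.0, eps_hb_2=800.0))  # ~0.95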
Davis, Philip A.; Cagney, Laura E.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Nalbandon mineral district, which has lead and zinc deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2007, 2008, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (41 for Nalbandon) and the WGS84 datum. The final image mosaics were subdivided into ten overlapping tiles or quadrants because of the large size of the target area. The ten image tiles (or quadrants) for the Nalbandon area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Nalbandon study area, two subareas were designated for detailed field investigations (that is, the Nalbandon District and Gharghananaw-Gawmazar subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
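The local-area histogram stretch lends itself to a short sketch: each picture element is stretched against the extremes of its own neighborhood rather than against the whole image. The version below uses a square window as a stand-in for the 500-m radius (roughly 200 picture elements at the 2.5-m enhanced resolution) and simple min/max scaling; it illustrates the idea only and is not the Davis (2007) algorithm.

    import numpy as np
    from scipy.ndimage import minimum_filter, maximum_filter

    def local_area_stretch(band, radius_px=200, out_max=255.0):
        """Stretch each pixel relative to the local minimum and maximum within a
        (2*radius_px + 1)-pixel square neighborhood (illustrative stand-in)."""
        band = band.astype(float)
        size = 2 * radius_px + 1
        lo = minimum_filter(band, size=size)
        hi = maximum_filter(band, size=size)
        return out_max * (band - lo) / np.maximum(hi - lo, 1e-6)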
Davis, Philip A.; Cagney, Laura E.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Zarkashan mineral district, which has copper and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Zarkashan) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the Zarkashan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Zarkashan study area, three subareas were designated for detailed field investigations (that is, the Mine Area, Bolo Gold Prospect, and Luman-Tamaki Gold Prospect subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
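The resolution-enhancement step relies on the SPARKLE logic of Davis (2006), which is not reproduced in this DS; purely for orientation, a generic high-pass-filter pansharpening sketch is shown below, in which an upsampled multispectral band receives the high-frequency detail of the panchromatic band. It is a stand-in illustration, not the SPARKLE method.

    import numpy as np
    from scipy.ndimage import zoom, uniform_filter

    def hpf_pansharpen(ms_band, pan, scale=4, window=5):
        """Generic high-pass-filter pansharpening: upsample a 10-m multispectral
        band to the 2.5-m panchromatic grid (scale=4) and add back the
        panchromatic detail removed by a smoothing filter."""
        pan = pan.astype(float)
        ms_up = zoom(ms_band.astype(float), scale, order=1)
        ms_up = ms_up[: pan.shape[0], : pan.shape[1]]    # guard against size mismatch
        detail = pan - uniform_filter(pan, size=window)  # high-frequency component
        return ms_up + detail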
Davis, Philip A.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kandahar mineral district, which has bauxite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2006, 2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element.
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Kandahar) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Kandahar area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Kandahar study area, two subareas were designated for detailed field investigations (that is, the Obatu-Shela and Sekhab-Zamto Kalay subareas); these subareas were extracted from the area's image mosaic and are provided as separate embedded geotiff images.
Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Khanneshin mineral district, which has uranium, thorium, rare-earth-element, and apatite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Khanneshin) and the WGS84 datum. The final image mosaics were subdivided into nine overlapping tiles or quadrants because of the large size of the target area. The nine image tiles (or quadrants) for the Khanneshin area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Khanneshin study area, one subarea was designated for detailed field investigations (that is, the Khanneshin volcano subarea); this subarea was extracted from the area's image mosaic and is provided as separate embedded geotiff images.
Davis, Philip A.; Cagney, Laura E.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Panjsher Valley mineral district, which has emerald and silver-iron deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2009, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Panjsher Valley) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the Panjsher Valley area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Panjsher Valley study area, two subareas were designated for detailed field investigations (that is, the Emerald and Silver-Iron subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
Davis, Philip A.
2014-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Farah mineral district, which has spectral reflectance anomalies indicative of copper, zinc, lead, silver, and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA, 2007, 2008, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Farah) and the WGS84 datum. The final image mosaics were subdivided into four overlapping tiles or quadrants because of the large size of the target area. The four image tiles (or quadrants) for the Farah area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Farah study area, five subareas were designated for detailed field investigations (that is, the FarahA through FarahE subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
Experimental Demonstration of Adaptive Infrared Multispectral Imaging Using Plasmonic Filter Array
2016-10-10
AFRL-RX-WP-JA-2017-0189; report period March 2016 to 23 May 2016. The report presents an experimental demonstration of adaptive infrared multispectral imaging using fabricated plasmonic spectral filter arrays and proposed target detection scenarios.
Systematic Processing of Clementine Data for Scientific Analyses
NASA Technical Reports Server (NTRS)
Mcewen, A. S.
1993-01-01
If fully successful, the Clementine mission will return about 3,000,000 lunar images and more than 5000 images of Geographos. Effective scientific analyses of such large datasets require systematic processing efforts. Concepts for two such efforts are described: global multispectral imaging of the Moon and videos of Geographos.
Wide field-of-view dual-band multispectral muzzle flash detection
NASA Astrophysics Data System (ADS)
Montoya, J.; Melchor, J.; Spiliotis, P.; Taplin, L.
2013-06-01
Sensor technologies are undergoing revolutionary advances, as seen in the rapid growth of multispectral methodologies. Increases in spatial, spectral, and temporal resolution, and in breadth of spectral coverage, render feasible sensors that function with unprecedented performance. A system was developed that addresses many of the key hardware requirements for a practical dual-band multispectral acquisition system, including wide field of view and spectral/temporal shift between dual bands. The system was designed using a novel dichroic beam splitter and dual band-pass filter configuration that creates two side-by-side images of a scene on a single sensor. A high-speed CMOS sensor was used to simultaneously capture data from the entire scene in both spectral bands using a short focal-length lens that provided a wide field of view. The beam-splitter components were arranged such that the two images were maintained in optical alignment and real-time intra-band processing could be carried out using only simple arithmetic on the image halves. An experiment was performed to characterize the system's limitations with respect to multispectral detection requirements; it showed low spectral variation across the wide field of view. This paper provides lessons learned on the general limitations of key hardware components required for multispectral muzzle flash detection, using the system as a hardware example combined with simulated multispectral muzzle flash and background signatures.
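Since the two spectral bands land side by side on one aligned sensor, inter-band processing reduces to arithmetic on the two image halves. The sketch below shows one possible form of that arithmetic, a background-subtracted band difference with a crude threshold; the detection logic, background model, and names are assumptions for illustration, not the system's actual processing chain.

    import numpy as np

    def detect_flash(frame, background, k=4.0):
        """Split a side-by-side dual-band frame into its two halves and flag pixels
        whose band difference departs strongly from a background estimate.
        `background` is a previously estimated band-difference image with the same
        shape as one half; an even frame width is assumed (sketch only)."""
        h, w = frame.shape
        band_a = frame[:, : w // 2].astype(float)
        band_b = frame[:, w // 2 :].astype(float)
        diff = band_a - band_b                 # simple arithmetic on the halves
        residual = diff - background
        sigma = residual.std() + 1e-6
        return residual > k * sigma            # boolean detection mask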
NASA Astrophysics Data System (ADS)
Ozolinsh, Maris; Fomins, Sergejs
2010-11-01
Multispectral color analysis was used for spectral scanning of Ishihara and Rabkin color deficiency test book images. It was done using tunable liquid-crystal (LC) filters built into the Nuance II analyzer. Multispectral analysis preserves both the spatial and the spectral content of the tests. Images were taken in the range of 420-720 nm with a 10 nm step. We calculated retina neural activity charts taking into account cone sensitivity functions, and processed the charts with a cross-correlation technique to find the visibility of latent symbols in the color deficiency plates. In this way a quantitative measure is obtained for each diagnostic plate for three types of color deficiency carriers: protanopes, deuteranopes, and tritanopes. Multispectral color analysis also allows the CIE xyz color coordinates of pseudoisochromatic plate design elements to be determined and statistical analysis of these data to be performed to compare the color quality of available color deficiency test books.
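The processing chain can be sketched in a few lines: weight the 420-720 nm stack by a cone sensitivity curve to obtain a neural-activity chart, then score the visibility of a latent symbol by correlating the chart with a symbol template. The functions below are an illustrative reconstruction only; the sensitivity data, template, and the crude global normalization of the correlation peak are placeholders rather than the authors' exact procedure.

    import numpy as np
    from scipy.signal import fftconvolve

    def cone_activity(stack, wavelengths, cone_curve):
        """Integrate a multispectral stack (H x W x N bands) against a cone
        sensitivity curve given as an (M, 2) array of (wavelength, sensitivity)."""
        weights = np.interp(wavelengths, cone_curve[:, 0], cone_curve[:, 1])
        return np.tensordot(stack, weights, axes=([2], [0]))

    def symbol_visibility(activity_chart, template):
        """Peak of a globally normalized cross-correlation between the activity
        chart and a latent-symbol template, used here as a visibility score."""
        a = activity_chart - activity_chart.mean()
        t = template - template.mean()
        corr = fftconvolve(a, t[::-1, ::-1], mode="same")
        return corr.max() / (np.linalg.norm(a) * np.linalg.norm(t) + 1e-12)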
Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kunduz mineral district, which has celestite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Kunduz) and the WGS84 datum. The final image mosaics were subdivided into five overlapping tiles or quadrants because of the large size of the target area. The five image tiles (or quadrants) for the Kunduz area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Tourmaline mineral district, which has tin deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Tourmaline) and the WGS84 datum. The final image mosaics were subdivided into four overlapping tiles or quadrants because of the large size of the target area. The four image tiles (or quadrants) for the Tourmaline area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Dudkash mineral district, which has industrial mineral deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Dudkash) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Dudkash area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
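The local-area histogram stretch can likewise be approximated with a moving-window stretch, as sketched below. The actual algorithm of Davis (2007) is not reproduced here; a simple local min/max stretch over a circular neighborhood stands in for it, and the toy radius is kept small only for speed (a 500-m radius corresponds to 200 pixels at the 2.5-m mosaic resolution).

```python
import numpy as np
from scipy import ndimage

def local_stretch(band, radius_px):
    """Linearly stretch each pixel using the min/max of all pixels within
    a circular neighborhood of the given radius (in pixels)."""
    yy, xx = np.mgrid[-radius_px:radius_px + 1, -radius_px:radius_px + 1]
    footprint = (xx**2 + yy**2) <= radius_px**2
    local_min = ndimage.minimum_filter(band, footprint=footprint)
    local_max = ndimage.maximum_filter(band, footprint=footprint)
    scale = np.where(local_max > local_min, local_max - local_min, 1.0)
    return np.clip((band - local_min) / scale, 0.0, 1.0)

# Illustrative call with a small radius to keep the toy example fast.
band = np.random.default_rng(1).uniform(0, 1, (256, 256))
stretched = local_stretch(band, radius_px=20)
```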
Multispectral imaging with vertical silicon nanowires
Park, Hyunsung; Crozier, Kenneth B.
2013-01-01
Multispectral imaging is a powerful tool that extends the capabilities of the human eye. However, multispectral imaging systems generally are expensive and bulky, and multiple exposures are needed. Here, we report the demonstration of a compact multispectral imaging system that uses vertical silicon nanowires to realize a filter array. Multiple filter functions covering visible to near-infrared (NIR) wavelengths are simultaneously defined in a single lithography step using a single material (silicon). Nanowires are then etched and embedded into polydimethylsiloxane (PDMS), thereby realizing a device with eight filter functions. By attaching it to a monochrome silicon image sensor, we successfully realize an all-silicon multispectral imaging system. We demonstrate visible and NIR imaging. We show that the latter is highly sensitive to vegetation and furthermore enables imaging through objects opaque to the eye. PMID:23955156
Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Nuristan mineral district, which has gem, lithium, and cesium deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. All available panchromatic images for this area had significant cloud and snow cover that precluded their use for resolution enhancement of the multispectral image data. Each of the four-band images within the 10-m image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Nuristan) and the WGS84 datum. The final image mosaics for the Nuristan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
HPT: A High Spatial Resolution Multispectral Sensor for Microsatellite Remote Sensing
Takahashi, Yukihiro; Sakamoto, Yuji; Kuwahara, Toshinori
2018-01-01
Although nano/microsatellites have great potential as remote sensing platforms, the spatial and spectral resolutions of an optical payload instrument are limited. In this study, a high spatial resolution multispectral sensor, the High-Precision Telescope (HPT), was developed for the RISING-2 microsatellite. The HPT has four image sensors: three in the visible region of the spectrum used for the composition of true color images, and a fourth in the near-infrared region, which employs liquid crystal tunable filter (LCTF) technology for wavelength scanning. Band-to-band image registration methods have also been developed for the HPT and implemented in the image processing procedure. The processed images were compared with other satellite images, and proven to be useful in various remote sensing applications. Thus, LCTF technology can be considered an innovative tool that is suitable for future multi/hyperspectral remote sensing by nano/microsatellites. PMID:29463022
NASA Astrophysics Data System (ADS)
Jakovels, Dainis; Lihacova, Ilze; Kuzmina, Ilona; Spigulis, Janis
2013-11-01
Non-invasive, fast primary diagnostics of pigmented skin lesions are needed because of the frequent incidence of skin cancer (melanoma). The diagnostic potential of principal component analysis (PCA) for distant skin melanoma recognition is discussed. Processing the measured clinical multispectral images (31 melanomas and 94 nonmalignant pigmented lesions) in the 450-950 nm wavelength range by means of PCA resulted in 87% sensitivity and 78% specificity for separating malignant melanomas from pigmented nevi.
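As an illustration of the analysis described above, the sketch below projects per-lesion spectra onto a few principal components and evaluates how well the two groups separate. The synthetic data, the number of components, and the logistic-regression decision rule are all assumptions for illustration; the abstract does not specify how the PC scores were thresholded.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical data: one mean spectrum (450-950 nm, e.g. 51 bands) per lesion,
# with labels 1 = melanoma, 0 = pigmented nevus.
rng = np.random.default_rng(2)
X = rng.normal(size=(125, 51))          # placeholder spectra
y = (rng.uniform(size=125) < 0.25).astype(int)

# Project onto a few principal components, then separate the two groups.
model = make_pipeline(PCA(n_components=3), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5, scoring="recall")  # sensitivity
print("cross-validated sensitivity:", scores.mean())
```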
High Throughput Multispectral Image Processing with Applications in Food Science.
Tsakanikas, Panagiotis; Pavlidis, Dimitris; Nychas, George-John
2015-01-01
Recently, machine vision has been gaining attention in food science as well as in the food industry for food quality assessment and monitoring. Within the framework of implementing Process Analytical Technology (PAT) in the food industry, image processing can be used not only for estimation and even prediction of food quality but also for detection of adulteration. Toward these applications in food science, we present here a novel methodology for automated image analysis of several kinds of food products, e.g. meat, vanilla crème and table olives, so as to increase objectivity and data reproducibility, extract information at low cost, and provide faster quality assessment without human intervention. The output of the image processing is propagated to the downstream analysis. The developed multispectral image processing method is based on an unsupervised machine learning approach (Gaussian mixture models) and a novel unsupervised scheme of spectral band selection for segmentation process optimization. Through the evaluation we demonstrate its efficiency and robustness against the currently available semi-manual software, showing that the developed method is a high-throughput approach appropriate for massive data extraction from food samples.
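As a minimal sketch of the unsupervised segmentation idea, the code below clusters the pixel spectra of a multispectral cube with a Gaussian mixture model. A fixed band subset stands in for the paper's band-selection scheme, which is not reproduced here; the cube and band indices are hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_multispectral(cube, n_components=2, bands=None):
    """Cluster pixels of a (rows, cols, bands) multispectral cube with a
    Gaussian mixture model and return a label image."""
    if bands is not None:                 # optional band subset
        cube = cube[..., bands]
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b).astype(float)
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=0).fit(pixels)
    return gmm.predict(pixels).reshape(h, w)

# Toy cube: background plus a brighter square "sample" region.
cube = np.random.default_rng(3).normal(0.2, 0.02, (64, 64, 18))
cube[16:48, 16:48, :] += 0.3
labels = segment_multispectral(cube, n_components=2, bands=[2, 7, 12])
```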
Reproducible high-resolution multispectral image acquisition in dermatology
NASA Astrophysics Data System (ADS)
Duliu, Alexandru; Gardiazabal, José; Lasser, Tobias; Navab, Nassir
2015-07-01
Multispectral image acquisitions are increasingly popular in dermatology, due to their improved spectral resolution which enables better tissue discrimination. Most applications however focus on restricted regions of interest, imaging only small lesions. In this work we present and discuss an imaging framework for high-resolution multispectral imaging on large regions of interest.
NASA Astrophysics Data System (ADS)
Sahoo, Sujit Kumar; Tang, Dongliang; Dang, Cuong
2018-02-01
Large-field-of-view multispectral imaging through a scattering medium is a fundamental quest in the optics community. It has gained special attention from researchers in recent years for its wide range of potential applications. However, the main bottlenecks of current imaging systems are the requirement for specific illumination, poor image quality, and a limited field of view. In this work, we demonstrate single-shot, high-resolution colour imaging through scattering media using a monochromatic camera. This novel imaging technique is enabled by the spatial and spectral decorrelation properties and the optical memory effect of the scattering medium. Moreover, the use of deconvolution image processing avoids the drawbacks that arise from iterative refocusing, scanning, or phase-retrieval procedures.
Classification by Using Multispectral Point Cloud Data
NASA Astrophysics Data System (ADS)
Liao, C. T.; Huang, H. H.
2012-07-01
Remote sensing images are generally recorded in a two-dimensional format containing multispectral information. Because the semantic information is clearly visualized, ground features can be readily recognized and classified via supervised or unsupervised classification methods. Nevertheless, multispectral images depend heavily on lighting conditions, and their classification results lack three-dimensional semantic information. On the other hand, LiDAR has become a main technology for acquiring high-accuracy point cloud data. The advantages of LiDAR are a high data-acquisition rate, independence from lighting conditions, and the direct production of three-dimensional coordinates. However, compared with multispectral images, its disadvantage is the lack of multispectral information, which remains a challenge in ground-feature classification from massive point cloud data. Consequently, by combining the advantages of LiDAR and multispectral images, point cloud data with three-dimensional coordinates and multispectral information can provide an integrated solution for point cloud classification. This research therefore acquires visible-light and near-infrared images via close-range photogrammetry and matches the images automatically through a free online service to generate a multispectral point cloud. A three-dimensional affine coordinate transformation is then used to compare the data increments. Finally, thresholds on height and color information are applied for classification.
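The abstract only states that height and color thresholds drive the classification; the sketch below shows one plausible rule set of that kind. The field names, the NDVI-style color index, and the class scheme are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def classify_points(points, height_threshold, ndvi_threshold):
    """Very simple rule-based labeling of a multispectral point cloud.

    points : structured array with fields x, y, z, red, nir
    Returns an integer label per point:
        0 = ground, 1 = low vegetation, 2 = high vegetation, 3 = other elevated
    """
    ndvi = (points["nir"] - points["red"]) / (points["nir"] + points["red"] + 1e-9)
    elevated = points["z"] > height_threshold
    vegetated = ndvi > ndvi_threshold
    labels = np.zeros(len(points), dtype=int)
    labels[~elevated & vegetated] = 1
    labels[elevated & vegetated] = 2
    labels[elevated & ~vegetated] = 3
    return labels

dtype = [("x", float), ("y", float), ("z", float), ("red", float), ("nir", float)]
pts = np.zeros(5, dtype=dtype)
pts["z"] = [0.1, 0.3, 5.0, 7.2, 6.5]
pts["red"] = [0.2, 0.1, 0.08, 0.3, 0.25]
pts["nir"] = [0.25, 0.4, 0.5, 0.32, 0.27]
print(classify_points(pts, height_threshold=2.0, ndvi_threshold=0.3))
```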
Multispectral multisensor image fusion using wavelet transforms
Lemeshewsky, George P.
1999-01-01
Fusion techniques can be applied to multispectral and higher spatial resolution panchromatic images to create a composite image that is easier to interpret than the individual images. Wavelet transform-based multisensor, multiresolution fusion (a type of band sharpening) was applied to Landsat thematic mapper (TM) multispectral and coregistered higher resolution SPOT panchromatic images. The objective was to obtain increased spatial resolution, false color composite products to support the interpretation of land cover types wherein the spectral characteristics of the imagery are preserved to provide the spectral clues needed for interpretation. Since the fusion process should not introduce artifacts, a shift invariant implementation of the discrete wavelet transform (SIDWT) was used. These results were compared with those using the shift variant, discrete wavelet transform (DWT). Overall, the process includes a hue, saturation, and value color space transform to minimize color changes, and a reported point-wise maximum selection rule to combine transform coefficients. The performance of fusion based on the SIDWT and DWT was evaluated with a simulated TM 30-m spatial resolution test image and a higher resolution reference. Simulated imagery was made by blurring higher resolution color-infrared photography with the TM sensors' point spread function. The SIDWT based technique produced imagery with fewer artifacts and lower error between fused images and the full resolution reference. Image examples with TM and SPOT 10-m panchromatic illustrate the reduction in artifacts due to the SIDWT based fusion.
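A bare-bones version of the wavelet fusion idea is sketched below for a single multispectral band and a coregistered panchromatic image, using an ordinary discrete wavelet transform and a point-wise maximum-magnitude rule for the detail coefficients. The shift-invariant transform and the hue-saturation-value color step used in the study are omitted; the wavelet choice, decomposition level, and data are illustrative assumptions.

```python
import numpy as np
import pywt

def dwt_fuse(ms_band, pan, wavelet="db2", level=2):
    """Fuse one (upsampled) multispectral band with a coregistered panchromatic
    image: keep the MS approximation, take the larger-magnitude detail
    coefficient at each position and level."""
    ms_coeffs = pywt.wavedec2(ms_band, wavelet, level=level)
    pan_coeffs = pywt.wavedec2(pan, wavelet, level=level)
    fused = [ms_coeffs[0]]                       # low-frequency content from MS
    for ms_det, pan_det in zip(ms_coeffs[1:], pan_coeffs[1:]):
        fused.append(tuple(np.where(np.abs(m) >= np.abs(p), m, p)
                           for m, p in zip(ms_det, pan_det)))
    return pywt.waverec2(fused, wavelet)

rng = np.random.default_rng(4)
pan = rng.uniform(0, 1, (128, 128))
ms_band = rng.uniform(0, 1, (128, 128))
sharpened = dwt_fuse(ms_band, pan)
```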
Time-resolved multispectral imaging of combustion reactions
NASA Astrophysics Data System (ADS)
Huot, Alexandrine; Gagnon, Marc-André; Jahjah, Karl-Alexandre; Tremblay, Pierre; Savary, Simon; Farley, Vincent; Lagueux, Philippe; Guyot, Éric; Chamberland, Martin; Marcotte, Frédérick
2015-10-01
Thermal infrared imaging is a rapidly evolving field of science. For years, scientists have used the simplest tool, the thermal broadband camera, which allows target characterization in both the longwave (LWIR) and midwave (MWIR) infrared spectral ranges. Infrared thermal imaging is used for a wide range of applications, especially in the combustion domain. For example, it can be used to follow combustion reactions, to characterize injection and ignition in a combustion chamber, or to observe gases produced by a flare or smokestack. Most combustion gases, such as carbon dioxide (CO2), selectively absorb/emit infrared radiation at discrete energies, i.e. over a very narrow spectral range. Therefore, temperatures derived from broadband imaging are not reliable without prior knowledge of spectral emissivity, and this information is not directly available from broadband images. It is, however, available using spectral filters. In this work, combustion analysis was carried out using a Telops MS-IR MW camera, which allows multispectral imaging at a high frame rate. A motorized filter wheel allowing synchronized acquisitions on eight (8) different channels was used to provide time-resolved multispectral imaging of the combustion products of a candle in which black powder had been burnt to create a burst. It was then possible to estimate the temperature by modeling spectral profiles derived from the information obtained with the different spectral filters. Comparison with temperatures obtained using conventional broadband imaging illustrates the benefits of time-resolved multispectral imaging for the characterization of combustion processes.
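One way to model spectral profiles of this kind is to fit a grey-body Planck curve to the band radiances from the filter-wheel channels, as sketched below. The channel centre wavelengths, the constant-emissivity assumption, and the synthetic radiances are all illustrative; the authors' actual modeling procedure is not detailed in the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann

def planck(wl_m, temperature, emissivity):
    """Grey-body spectral radiance (W sr^-1 m^-3) at wavelength wl_m (m)."""
    return (emissivity * 2 * H * C**2 / wl_m**5
            / (np.exp(H * C / (wl_m * KB * temperature)) - 1.0))

# Hypothetical band-averaged radiances at the centre wavelengths of the
# filter-wheel channels (values are synthetic, for illustration only).
centers_um = np.array([3.1, 3.4, 3.7, 3.9, 4.2, 4.5, 4.7, 5.0])
wl = centers_um * 1e-6
true_T, true_eps = 1200.0, 0.35
noise = 1 + 0.02 * np.random.default_rng(5).normal(size=wl.size)
radiance = planck(wl, true_T, true_eps) * noise

popt, _ = curve_fit(planck, wl, radiance, p0=(1000.0, 0.5),
                    bounds=([300.0, 0.0], [3000.0, 1.0]))
print("estimated temperature (K):", popt[0])
```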
Time-resolved multispectral imaging of combustion reaction
NASA Astrophysics Data System (ADS)
Huot, Alexandrine; Gagnon, Marc-André; Jahjah, Karl-Alexandre; Tremblay, Pierre; Savary, Simon; Farley, Vincent; Lagueux, Philippe; Guyot, Éric; Chamberland, Martin; Marcotte, Fréderick
2015-05-01
Thermal infrared imaging is a rapidly evolving field of science. For years, scientists have used the simplest tool, the thermal broadband camera, which allows target characterization in both the longwave (LWIR) and midwave (MWIR) infrared spectral ranges. Infrared thermal imaging is used for a wide range of applications, especially in the combustion domain. For example, it can be used to follow combustion reactions, to characterize injection and ignition in a combustion chamber, or to observe gases produced by a flare or smokestack. Most combustion gases, such as carbon dioxide (CO2), selectively absorb/emit infrared radiation at discrete energies, i.e. over a very narrow spectral range. Therefore, temperatures derived from broadband imaging are not reliable without prior knowledge about spectral emissivity, and this information is not directly available from broadband images. It is, however, available using spectral filters. In this work, combustion analysis was carried out using a Telops MS-IR MW camera, which allows multispectral imaging at a high frame rate. A motorized filter wheel allowing synchronized acquisitions on eight (8) different channels was used to provide time-resolved multispectral imaging of the combustion products of a candle in which black powder had been burnt to create a burst. It was then possible to estimate the temperature by modeling spectral profiles derived from the information obtained with the different spectral filters. Comparison with temperatures obtained using conventional broadband imaging illustrates the benefits of time-resolved multispectral imaging for the characterization of combustion processes.
Urban land use monitoring from computer-implemented processing of airborne multispectral data
NASA Technical Reports Server (NTRS)
Todd, W. J.; Mausel, P. W.; Baumgardner, M. F.
1976-01-01
Machine processing techniques were applied to multispectral data obtained from airborne scanners at an elevation of 600 meters over central Indianapolis in August 1972. Computer analysis of these spectral data indicates that roads (two types), roof tops (three types), dense grass (two types), sparse grass (two types), trees, bare soil, and water (two types) can be accurately identified. Using computers, it is possible to determine land uses from analysis of the type, size, shape, and spatial associations of earth surface images identified from multispectral data. Land use data developed through machine processing techniques can be programmed to monitor land use changes, simulate land use conditions, and provide impact statistics that are required to analyze stresses placed on spatial systems.
NASA Astrophysics Data System (ADS)
Dong, Yang; He, Honghui; He, Chao; Ma, Hui
2016-10-01
Polarized light is sensitive to the microstructures of biological tissues and can be used to detect physiological changes. Meanwhile, spectral features of the scattered light can also provide abundant microstructural information about tissues. In this paper, we acquire backscattering polarization Mueller matrix images of bovine skeletal muscle tissues over a 24-hour experimental period and analyze their multispectral behavior using quantitative Mueller matrix parameters. During rigor mortis and proteolysis of the muscle samples, multispectral frequency distribution histograms (FDHs) of the Mueller matrix elements can reveal rich qualitative structural information. In addition, we analyze the temporal variations of the sample using the multispectral Mueller matrix transformation (MMT) parameters. The experimental results indicate that the different stages of rigor mortis and proteolysis of bovine skeletal muscle samples can be distinguished by these MMT parameters. The results presented in this work show that, combined with the multispectral technique, the FDHs and MMT parameters can characterize the microstructural variation of skeletal muscle tissues. These techniques have the potential to be used as tools for quantitative assessment of meat quality in the food industry.
The Multispectral Imaging Science Working Group. Volume 2: Working group reports
NASA Technical Reports Server (NTRS)
Cox, S. C. (Editor)
1982-01-01
Summaries of the various multispectral imaging science working groups are presented. Current knowledge of the spectral and spatial characteristics of the Earth's surface is outlined and the present and future capabilities of multispectral imaging systems are discussed.
Higher resolution satellite remote sensing and the impact on image mapping
Watkins, Allen H.; Thormodsgard, June M.
1987-01-01
Recent advances in the spatial, spectral, and temporal resolution of civil land remote sensing satellite data are presenting new opportunities for image mapping applications. The U.S. Geological Survey's experimental satellite image mapping program is evolving toward larger scale image map products with increased information content as a result of improved image processing techniques and increased resolution. Thematic mapper data are being used to produce experimental image maps at 1:100,000 scale that meet established U.S. and European map accuracy standards. The availability of high-quality, cloud-free, 30-meter ground resolution multispectral data from the Landsat thematic mapper sensor, along with 10-meter ground resolution panchromatic and 20-meter ground resolution multispectral data from the recently launched French SPOT satellite, presents new cartographic and image processing challenges. The need to fully exploit these higher resolution data increases the complexity of processing the images into large-scale image maps. The removal of radiometric artifacts and noise prior to geometric correction can be accomplished by using a variety of image processing filters and transforms. Sensor modeling and image restoration techniques allow maximum retention of spatial and radiometric information. An optimum combination of spectral information and spatial resolution can be obtained by merging different sensor types. These processing techniques are discussed and examples are presented.
Multispectral and geomorphic studies of processed Voyager 2 images of Europa
NASA Technical Reports Server (NTRS)
Meier, T. A.
1984-01-01
High resolution images of Europa taken by the Voyager 2 spacecraft were used to study a portion of Europa's dark lineations and the major white line feature Agenor Linea. Initial image processing of images 1195J2-001 (violet filter), 1198J2-001 (blue filter), 1201J2-001 (orange filter), and 1204J2-001 (ultraviolet filter) was performed at the U.S.G.S. Branch of Astrogeology in Flagstaff, Arizona. Processing was completed through the stages of image registration and color ratio image construction. Pixel printouts were used in a new technique of linear feature profiling to compensate for image misregistration through the mapping of features on the printouts. In all, 193 dark lineation segments were mapped and profiled. The more accurate multispectral data derived by this method were plotted using a new application of the ternary diagram, with orange, blue, and violet relative spectral reflectances serving as end members. Statistical techniques were then applied to the ternary diagram plots. The image products generated at LPI were used mainly to cross-check and verify the results of the ternary diagram analysis.
NASA Astrophysics Data System (ADS)
Yang, Chun-Chieh; Kim, Moon S.; Chuang, Yung-Kun; Lee, Hoyoung
2013-05-01
This paper reports the development of a multispectral algorithm, using the line-scan hyperspectral imaging system, to detect fecal contamination on leafy greens. Fresh bovine feces were applied to the surfaces of washed loose baby spinach leaves. A hyperspectral line-scan imaging system was used to acquire hyperspectral fluorescence images of the contaminated leaves. Hyperspectral image analysis resulted in the selection of the 666 nm and 688 nm wavebands for a multispectral algorithm to rapidly detect feces on leafy greens, by use of the ratio of fluorescence intensities measured at those two wavebands (666 nm over 688 nm). The algorithm successfully distinguished most of the lowly diluted fecal spots (0.05 g feces/ml water and 0.025 g feces/ml water) and some of the highly diluted spots (0.0125 g feces/ml water and 0.00625 g feces/ml water) from the clean spinach leaves. The results showed the potential of the multispectral algorithm with line-scan imaging system for application to automated food processing lines for food safety inspection of leafy green vegetables.
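The two-band algorithm reduces to a per-pixel intensity ratio and a threshold, as in the sketch below. The threshold value and the dark-pixel floor are placeholders; the abstract does not report the decision threshold actually used.

```python
import numpy as np

def detect_feces(f666, f688, ratio_threshold=1.05, min_intensity=50.0):
    """Flag pixels whose fluorescence ratio F(666 nm)/F(688 nm) exceeds a
    threshold, ignoring dark pixels below a minimum intensity."""
    ratio = np.divide(f666, f688, out=np.zeros_like(f666), where=f688 > 0)
    return (ratio > ratio_threshold) & (f688 > min_intensity)

rng = np.random.default_rng(6)
f688 = rng.uniform(80, 120, (100, 100))
f666 = 0.9 * f688 + rng.normal(0, 2, f688.shape)
f666[40:60, 40:60] = 1.2 * f688[40:60, 40:60]     # simulated contaminated spot
mask = detect_feces(f666, f688)
```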
Wang, Guizhou; Liu, Jianbo; He, Guojin
2013-01-01
This paper presents a new classification method for high-spatial-resolution remote sensing images based on a strategic mechanism of spatial mapping and reclassification. The proposed method includes four steps. First, the multispectral image is classified by a traditional pixel-based classification method (support vector machine). Second, the panchromatic image is subdivided by watershed segmentation. Third, the pixel-based multispectral image classification result is mapped to the panchromatic segmentation result based on a spatial mapping mechanism and the area dominant principle. During the mapping process, an area proportion threshold is set, and the regional property is defined as unclassified if the maximum area proportion does not surpass the threshold. Finally, unclassified regions are reclassified based on spectral information using the minimum distance to mean algorithm. Experimental results show that the classification method for high-spatial-resolution remote sensing images based on the spatial mapping mechanism and reclassification strategy can make use of both panchromatic and multispectral information, integrate the pixel- and object-based classification methods, and improve classification accuracy. PMID:24453808
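The spatial mapping step with the area-dominant principle can be sketched as follows: each segment takes the majority pixel-based class unless no class exceeds the area-proportion threshold, in which case the segment is left unclassified for the later reclassification step (not shown). The variable names and toy arrays are illustrative.

```python
import numpy as np

def map_classes_to_segments(pixel_labels, segments, area_threshold=0.5,
                            unclassified=-1):
    """Assign each segment the dominant pixel-based class, or mark it
    unclassified when no class covers more than area_threshold of the segment."""
    seg_labels = np.full(segments.max() + 1, unclassified, dtype=int)
    for seg_id in np.unique(segments):
        classes, counts = np.unique(pixel_labels[segments == seg_id],
                                    return_counts=True)
        best = counts.argmax()
        if counts[best] / counts.sum() > area_threshold:
            seg_labels[seg_id] = classes[best]
    return seg_labels[segments]          # per-pixel map of segment classes

# Toy example: two segments, one dominated by class 1, one mixed.
segments = np.array([[0, 0, 1, 1], [0, 0, 1, 1]])
pixel_labels = np.array([[1, 1, 2, 3], [1, 2, 3, 2]])
print(map_classes_to_segments(pixel_labels, segments))
```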
Vision communications based on LED array and imaging sensor
NASA Astrophysics Data System (ADS)
Yoo, Jong-Ho; Jung, Sung-Yoon
2012-11-01
In this paper, we propose a new communication concept, called "vision communication", based on an LED array and an image sensor. The system consists of an LED array as the transmitter and a digital device that includes an image sensor, such as a CCD or CMOS sensor, as the receiver. To transmit data, the proposed scheme uses digital image processing and optical wireless communication simultaneously, so cognitive communication becomes possible with the help of recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each LED in the array can emit multi-spectral optical signals such as visible, infrared and ultraviolet light, the data rate can be increased in a manner similar to the WDM and MIMO techniques used in traditional optical and wireless communications. In addition, this multi-spectral capability also makes it possible to avoid optical noise in the communication environment. In our vision communication scheme, the data packet is composed of Sync. data and information data. Sync. data is used to detect the transmitter area and calibrate the distorted image snapshots obtained by the image sensor. By matching the optical switching rate of the LED array to the frame rate (frames per second) of the image sensor, we can decode the information data included in each image snapshot based on image processing and optical wireless communication techniques. Through experiments on a practical test-bed system, we confirm the feasibility of the proposed vision communication based on an LED array and an image sensor.
Smartphone-based multispectral imaging: system development and potential for mobile skin diagnosis.
Kim, Sewoong; Cho, Dongrae; Kim, Jihun; Kim, Manjae; Youn, Sangyeon; Jang, Jae Eun; Je, Minkyu; Lee, Dong Hun; Lee, Boreom; Farkas, Daniel L; Hwang, Jae Youn
2016-12-01
We investigate the potential of mobile smartphone-based multispectral imaging for the quantitative diagnosis and management of skin lesions. Recently, various mobile devices such as a smartphone have emerged as healthcare tools. They have been applied for the early diagnosis of nonmalignant and malignant skin diseases. Particularly, when they are combined with an advanced optical imaging technique such as multispectral imaging and analysis, it would be beneficial for the early diagnosis of such skin diseases and for further quantitative prognosis monitoring after treatment at home. Thus, we demonstrate here the development of a smartphone-based multispectral imaging system with high portability and its potential for mobile skin diagnosis. The results suggest that smartphone-based multispectral imaging and analysis has great potential as a healthcare tool for quantitative mobile skin diagnosis.
Fuzzy Markov random fields versus chains for multispectral image segmentation.
Salzenstein, Fabien; Collet, Christophe
2006-11-01
This paper deals with a comparison of recent statistical models based on fuzzy Markov random fields and chains for multispectral image segmentation. The fuzzy scheme takes into account discrete and continuous classes which model the imprecision of the hidden data. In this framework, we assume the dependence between bands and we express the general model for the covariance matrix. A fuzzy Markov chain model is developed in an unsupervised way. This method is compared with the fuzzy Markovian field model previously proposed by one of the authors. The segmentation task is processed with Bayesian tools, such as the well-known MPM (Mode of Posterior Marginals) criterion. Our goal is to compare the robustness and rapidity for both methods (fuzzy Markov fields versus fuzzy Markov chains). Indeed, such fuzzy-based procedures seem to be a good answer, e.g., for astronomical observations when the patterns present diffuse structures. Moreover, these approaches allow us to process missing data in one or several spectral bands which correspond to specific situations in astronomy. To validate both models, we perform and compare the segmentation on synthetic images and raw multispectral astronomical data.
Galileo multispectral imaging of the north polar and eastern limb regions of the moon
Belton, M.J.S.; Greeley, R.; Greenberg, R.; McEwen, A.; Klaasen, K.P.; Head, J. W.; Pieters, C.; Neukum, G.; Chapman, C.R.; Geissler, P.; Heffernan, C.; Breneman, H.; Anger, C.; Carr, M.H.; Davies, M.E.; Fanale, F.P.; Gierasch, P.J.; Ingersoll, A.P.; Johnson, T.V.; Pilcher, C.B.; Thompson, W.R.; Veverka, J.; Sagan, C.
1994-01-01
Multispectral images obtained during the Galileo probe's second encounter with the moon reveal the compositional nature of the north polar regions and the northeastern limb. Mare deposits in these regions are found to be primarily low to medium titanium lavas and, as on the western limb, show only slight spectral heterogeneity. The northern light plains are found to have the spectral characteristics of highlands materials, show little evidence for the presence of cryptomaria, and were most likely emplaced by impact processes regardless of their age.
Multispectral computational ghost imaging with multiplexed illumination
NASA Astrophysics Data System (ADS)
Huang, Jian; Shi, Dongfeng
2017-07-01
Computational ghost imaging has attracted wide attention from researchers in many fields over the last two decades. Multispectral imaging as one application of computational ghost imaging possesses spatial and spectral resolving abilities, and is very useful for surveying scenes and extracting detailed information. Existing multispectral imagers mostly utilize narrow band filters or dispersive optical devices to separate light of different wavelengths, and then use multiple bucket detectors or an array detector to record them separately. Here, we propose a novel multispectral ghost imaging method that uses one single bucket detector with multiplexed illumination to produce a colored image. The multiplexed illumination patterns are produced by three binary encoded matrices (corresponding to the red, green and blue colored information, respectively) and random patterns. The results of the simulation and experiment have verified that our method can be effective in recovering the colored object. Multispectral images are produced simultaneously by one single-pixel detector, which significantly reduces the amount of data acquisition.
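A toy version of multiplexed-illumination ghost imaging is sketched below: two spectral channels share one bucket detector, and each channel is recovered by correlating the common bucket signal with that channel's illumination masks. Independent random binary masks are used here instead of the paper's encoded matrices, so this is only a rough illustration of the multiplexing idea.

```python
import numpy as np

def ghost_reconstruct(patterns, signals):
    """Second-order correlation reconstruction: patterns is (N, H, W),
    signals is the length-N bucket sequence."""
    s = signals - signals.mean()
    return np.tensordot(s, patterns, axes=1) / len(signals)

rng = np.random.default_rng(7)
H, W, N = 32, 32, 4000
scene_r = np.zeros((H, W)); scene_r[8:24, 8:24] = 1.0     # "red" square
scene_g = np.zeros((H, W)); scene_g[4:12, 20:30] = 1.0    # "green" patch

# Color multiplexing: each illumination pattern is the sum of independent
# binary masks for the red and green channels; one bucket value records both.
masks_r = rng.integers(0, 2, (N, H, W)).astype(float)
masks_g = rng.integers(0, 2, (N, H, W)).astype(float)
bucket = (masks_r * scene_r).sum(axis=(1, 2)) + (masks_g * scene_g).sum(axis=(1, 2))

# Demultiplex by correlating the shared bucket signal with each channel's masks.
img_r = ghost_reconstruct(masks_r, bucket)
img_g = ghost_reconstruct(masks_g, bucket)
```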
Study on multispectral imaging detection and recognition
NASA Astrophysics Data System (ADS)
Jun, Wang; Na, Ding; Gao, Jiaobo; Yu, Hu; Jun, Wu; Li, Junna; Zheng, Yawei; Fei, Gao; Sun, Kefeng
2009-07-01
Multispectral imaging detection technology uses the spectral and spatial distribution of target radiation, and the relation between spectrum and image, for target detection and remote sensing measurement. Its characteristics are multiple channels, narrow bandwidths, large amounts of information, and high accuracy. It improves the ability to detect targets in environments involving clutter, camouflage, concealment, and deception. At present, spectral imaging technology in the multispectral and hyperspectral ranges is developing rapidly. Multispectral imaging equipment on unmanned aerial vehicles can be used for mine detection, intelligence, surveillance, and reconnaissance. Imaging spectrometers operating in the MWIR and LWIR have already been applied in remote sensing and military fields in advanced countries. This paper presents multispectral imaging technology that can enhance the reflectance, scattering, and radiation contrast of artificial targets against natural backgrounds, so that targets in complex backgrounds and camouflaged or stealth targets can be effectively identified. Experimental results and spectral imaging data are presented.
Multispectral Imaging for Determination of Astaxanthin Concentration in Salmonids
Dissing, Bjørn S.; Nielsen, Michael E.; Ersbøll, Bjarne K.; Frosch, Stina
2011-01-01
Multispectral imaging has been evaluated for characterization of the concentration of a specific carotenoid pigment, astaxanthin. Fifty-nine fillets of rainbow trout, Oncorhynchus mykiss, were imaged using a rapid multispectral imaging device for quantitative analysis. The multispectral imaging device captures reflection properties in 19 distinct wavelength bands, prior to determination of the true concentration of astaxanthin. The samples ranged from 0.20 to 4.34 µg astaxanthin per g fish. A PLSR model was calibrated to predict astaxanthin concentration from new images and showed good results, with an RMSEP of 0.27. For comparison, a similar model was built for normal color images, which yielded an RMSEP of 0.45. The acquisition speed of the multispectral imaging system and the accuracy of the PLSR model obtained suggest this method as a promising technique for rapid in-line estimation of astaxanthin concentration in rainbow trout fillets. PMID:21573000
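A calibration of this kind can be reproduced in outline with partial least squares regression, as below. The reflectance matrix, concentrations, and component count are synthetic placeholders; only the modeling pattern (PLSR calibration, RMSEP on held-out samples) mirrors the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Hypothetical dataset: mean reflectance in 19 bands per fillet and a
# chemically determined astaxanthin concentration (synthetic values).
rng = np.random.default_rng(8)
X = rng.uniform(0, 1, (59, 19))
y = X @ rng.uniform(-1, 1, 19) + rng.normal(0, 0.1, 59)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
rmsep = mean_squared_error(y_te, pls.predict(X_te).ravel()) ** 0.5
print("RMSEP:", rmsep)
```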
NASA Technical Reports Server (NTRS)
1982-01-01
Model II Multispectral Camera is an advanced aerial camera that provides optimum enhancement of a scene by recording spectral signatures of ground objects only in narrow, preselected bands of the electromagnetic spectrum. Its photos have applications in such areas as agriculture, forestry, water pollution investigations, soil analysis, geologic exploration, water depth studies and camouflage detection. The target scene is simultaneously photographed in four separate spectral bands. Using a multispectral viewer, such as their Model 75, Spectral Data creates a color image from the black-and-white positives taken by the camera. With this optical image analysis unit, all four bands are superimposed in accurate registration and illuminated with combinations of blue, green, red, and white light. The best color combination for displaying the target object is selected and printed. Spectral Data Corporation produces several types of remote sensing equipment and also provides aerial survey, image processing and analysis, and a number of other remote sensing services.
Implementation of Multispectral Image Classification on a Remote Adaptive Computer
NASA Technical Reports Server (NTRS)
Figueiredo, Marco A.; Gloster, Clay S.; Stephens, Mark; Graves, Corey A.; Nakkar, Mouna
1999-01-01
As the demand for higher performance computers for the processing of remote sensing science algorithms increases, the need to investigate new computing paradigms is justified. Field Programmable Gate Arrays (FPGAs) enable the implementation of algorithms at the hardware gate level, leading to orders-of-magnitude performance increases over microprocessor-based systems. The automatic classification of spaceborne multispectral images is an example of a computation-intensive application that can benefit from implementation on an FPGA-based custom computing machine (adaptive or reconfigurable computer). A probabilistic neural network is used here to classify pixels of a multispectral LANDSAT-2 image. The implementation described utilizes Java client/server application programs to access the adaptive computer from a remote site. Results verify that a remote hardware version of the algorithm (implemented on an adaptive computer) is significantly faster than a local software version of the same algorithm implemented on a typical general-purpose computer.
NASA Astrophysics Data System (ADS)
Matikainen, Leena; Karila, Kirsi; Hyyppä, Juha; Litkey, Paula; Puttonen, Eetu; Ahokas, Eero
2017-06-01
During the last 20 years, airborne laser scanning (ALS), often combined with passive multispectral information from aerial images, has shown its high feasibility for automated mapping processes. The main benefits have been achieved in the mapping of elevated objects such as buildings and trees. Recently, the first multispectral airborne laser scanners have been launched, and active multispectral information is for the first time available for 3D ALS point clouds from a single sensor. This article discusses the potential of this new technology in map updating, especially in automated object-based land cover classification and change detection in a suburban area. For our study, Optech Titan multispectral ALS data over a suburban area in Finland were acquired. Results from an object-based random forests analysis suggest that the multispectral ALS data are very useful for land cover classification, considering both elevated classes and ground-level classes. The overall accuracy of the land cover classification results with six classes was 96% compared with validation points. The classes under study included building, tree, asphalt, gravel, rocky area and low vegetation. Compared to classification of single-channel data, the main improvements were achieved for ground-level classes. According to feature importance analyses, multispectral intensity features based on several channels were more useful than those based on one channel. Automatic change detection for buildings and roads was also demonstrated by utilising the new multispectral ALS data in combination with old map vectors. In change detection of buildings, an old digital surface model (DSM) based on single-channel ALS data was also used. Overall, our analyses suggest that the new data have high potential for further increasing the automation level in mapping. Unlike passive aerial imaging commonly used in mapping, the multispectral ALS technology is independent of external illumination conditions, and there are no shadows on intensity images produced from the data. These are significant advantages in developing automated classification and change detection procedures.
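An object-based random forests analysis of this kind follows a standard pattern: per-segment features (multispectral intensities, height, derived indices) feed a random forest classifier, and the feature importances indicate which channels matter most. The sketch below uses synthetic features and labels; the actual Optech Titan feature set is not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-segment features derived from multispectral ALS data:
# mean intensity in each of three channels, mean height above ground, and
# an intensity-based pseudo-NDVI. Labels stand for six land-cover classes.
rng = np.random.default_rng(9)
n_segments = 600
features = np.column_stack([
    rng.uniform(0, 1, (n_segments, 3)),      # channel intensities
    rng.uniform(0, 25, n_segments),          # height above ground (m)
    rng.uniform(-1, 1, n_segments),          # pseudo-NDVI from two channels
])
labels = rng.integers(0, 6, n_segments)      # building, tree, asphalt, ...

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, features, labels, cv=5).mean())
print("feature importances:", clf.fit(features, labels).feature_importances_)
```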
Martínez-Domingo, Miguel Ángel; Valero, Eva M; Hernández-Andrés, Javier; Tominaga, Shoji; Horiuchi, Takahiko; Hirai, Keita
2017-11-27
We propose a method for the capture of high dynamic range (HDR), multispectral (MS), polarimetric (Pol) images of indoor scenes using a liquid crystal tunable filter (LCTF). We have included the adaptive exposure estimation (AEE) method to fully automate the capturing process. We also propose a pre-processing method that can be applied for the registration of HDR images after they have been built by combining different low dynamic range (LDR) images. This method is applied to ensure correct alignment of the different polarization HDR images for each spectral band. We have focused our efforts on two main applications: object segmentation and classification into metal and dielectric classes. We have simplified the segmentation using mean shift combined with cluster averaging and region merging techniques. We compare the performance of our segmentation with that of the Ncut and Watershed methods. For the classification task, we propose to use information not only in the highlight regions but also in their surrounding area, extracted from the degree of linear polarization (DoLP) maps. We present experimental results which prove that the proposed image processing pipeline outperforms previous techniques developed specifically for MSHDRPol image cubes.
Bautista, Pinky A; Yagi, Yukako
2011-01-01
In this paper we introduce a digital staining method for histopathology images captured with an n-band multispectral camera. The method consists of two major processes: enhancement of the original spectral transmittance and transformation of the enhanced transmittance to its target spectral configuration. Enhancement is accomplished by shifting the original transmittance by the scaled difference between the original transmittance and the transmittance estimated with m dominant principal component (PC) vectors; the m PC vectors were determined from the transmittance samples of the background image. Transformation of the enhanced transmittance to the target spectral configuration was done using an n×n transformation matrix, which was derived by applying a least-squares method to the enhanced and target spectral training data samples of the different tissue components. Experimental results on the digital conversion of a hematoxylin and eosin (H&E) stained multispectral image to its Masson's trichrome (MT) stained equivalent show the viability of the method.
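The transformation step can be written compactly as a least-squares fit of an n-by-n matrix mapping enhanced source spectra to target spectra, as sketched below with synthetic transmittance samples. The enhancement step (PC-based shifting) is not shown, and the band count and data are assumptions.

```python
import numpy as np

def fit_stain_transform(source_spectra, target_spectra):
    """Least-squares n x n matrix M such that target ~= source @ M, where each
    row is one training pixel's n-band spectral transmittance."""
    M, *_ = np.linalg.lstsq(source_spectra, target_spectra, rcond=None)
    return M

# Synthetic training pairs: enhanced H&E transmittance -> target MT transmittance.
rng = np.random.default_rng(10)
n_bands, n_samples = 16, 500
he = rng.uniform(0.1, 0.9, (n_samples, n_bands))
true_M = np.eye(n_bands) + 0.05 * rng.normal(size=(n_bands, n_bands))
mt = he @ true_M

M = fit_stain_transform(he, mt)
converted = he @ M            # apply to any enhanced H&E pixel spectra
```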
Nondestructive prediction of pork freshness parameters using multispectral scattering images
NASA Astrophysics Data System (ADS)
Tang, Xiuying; Li, Cuiling; Peng, Yankun; Chao, Kuanglin; Wang, Mingwu
2012-05-01
Optical technology is an important and emerging technology for non-destructive and rapid detection of pork freshness. This paper studied the possibility of using the multispectral imaging technique and scattering characteristics to predict the freshness parameters of pork meat. The pork freshness parameters selected for prediction included total volatile basic nitrogen (TVB-N), color parameters (L*, a*, b*), and pH value. Multispectral scattering images were obtained from the pork sample surface by a multispectral imaging system developed in-house; they were acquired at selected narrow wavebands whose center wavelengths were 517, 550, 560, 580, 600, 760, 810 and 910 nm. In order to extract scattering characteristics from the multispectral images at multiple wavelengths, a Lorentzian distribution (LD) function with four parameters (a: scattering asymptotic value; b: scattering peak; c: scattering width; d: scattering slope) was used to fit the scattering curves at the selected wavelengths. The results show that the multispectral imaging technique combined with scattering characteristics is promising for predicting the freshness parameters of pork meat.
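Fitting a four-parameter Lorentzian-type profile to a radial scattering curve can be done with a standard nonlinear least-squares routine, as in the sketch below. The exact functional form and parameterization used by the authors may differ; the profile, distances, and starting values here are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, a, b, c, d):
    """Four-parameter Lorentzian-type profile for scattering curves:
    a = asymptotic value, b = peak height, c = width, d = slope exponent."""
    return a + b / (1.0 + np.abs(x / c) ** d)

# Synthetic radial scattering profile (distance in mm from the incident point).
x = np.linspace(0.5, 15.0, 60)
rng = np.random.default_rng(11)
y = lorentzian(x, 0.05, 1.0, 3.0, 2.2) + rng.normal(0, 0.01, x.size)

popt, _ = curve_fit(lorentzian, x, y, p0=(0.02, 1.0, 2.0, 2.0),
                    bounds=([0, 0, 0.1, 0.5], [1, 5, 20, 5]))
a, b, c, d = popt
```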
NASA Technical Reports Server (NTRS)
Andersen, K. E.
1982-01-01
The format of high density tapes which contain partially processed LANDSAT 4 and LANDSAT D prime MSS image data is defined. This format is based on and is compatible with the existing format for partially processed LANDSAT 3 MSS image data HDTs.
Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Dusar-Shaida mineral district, which has copper and tin deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Dusar-Shaida) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Dusar-Shaida area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Dusar-Shaida study area, three subareas were designated for detailed field investigations (that is, the Dahana-Misgaran, Kaftar VMS, and Shaida subareas); these subareas were extracted from the area's image mosaic and are provided as separate embedded geotiff images.
Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kundalyan mineral district, which has porphyry copper and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Kundalyan) and the WGS84 datum. The final image mosaics were subdivided into five overlapping tiles or quadrants because of the large size of the target area. The five image tiles (or quadrants) for the Kundalyan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Kundalyan study area, three subareas were designated for detailed field investigations (that is, the Baghawan-Garangh, Charsu-Ghumbad, and Kunag Skarn subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Herat mineral district, which has barium and limestone deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 1,000-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Herat) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Herat area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Herat study area, one subarea was designated for detailed field investigations (that is, the Barium-Limestone subarea); this subarea was extracted from the area's image mosaic and is provided as separate embedded geotiff images.
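A local-area histogram stretch of the kind applied to each band can be sketched as follows. This is only a schematic stand-in for the algorithm described in Davis (2007): it rescales each pixel between the minimum and maximum found inside a circular neighbourhood whose radius in pixels is an assumed parameter (the stated ground distance divided by the pixel size).

```python
import numpy as np
from scipy import ndimage

def local_area_stretch(band, radius_px, out_max=255.0):
    """Stretch each pixel of 'band' using the min/max of all pixels within a
    circular window of radius 'radius_px' (schematic local-area stretch)."""
    yy, xx = np.mgrid[-radius_px:radius_px + 1, -radius_px:radius_px + 1]
    footprint = (xx**2 + yy**2) <= radius_px**2        # circular neighbourhood
    local_min = ndimage.minimum_filter(band, footprint=footprint)
    local_max = ndimage.maximum_filter(band, footprint=footprint)
    span = np.maximum(local_max - local_min, 1e-6)      # avoid divide-by-zero
    return (band - local_min) / span * out_max

# e.g. one band of the resolution-enhanced mosaic, radius chosen to match the
# stated ground distance at the mosaic's pixel size (assumed value here):
# stretched = local_area_stretch(band.astype(np.float32), radius_px=200)
```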
Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Badakhshan mineral district, which has gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA,2007,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Badakhshan) and the WGS84 datum. The final image mosaics were subdivided into six overlapping tiles or quadrants because of the large size of the target area. The six image tiles (or quadrants) for the Badakhshan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Badakhshan study area, three subareas were designated for detailed field investigations (that is, the Bharak, Fayz-Abad, and Ragh subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kharnak-Kanjar mineral district, which has mercury deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 1,000-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Kharnak-Kanjar) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Kharnak-Kanjar area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Kharnak-Kanjar study area, three subareas were designated for detailed field investigations (that is, the Koh-e-Katif Passaband, Panjshah-Mullayan, and Sahebdad-Khanjar subareas); these subareas were extracted from the area's image mosaic and are provided as separate embedded geotiff images.
Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Haji-Gak mineral district, which has iron ore deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA,2006,2007), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then co-registered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Haji-Gak) and the WGS84 datum. The final image mosaics were subdivided into three overlapping tiles or quadrants because of the large size of the target area. The three image tiles (or quadrants) for the Haji-Gak area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Haji-Gak study area, three subareas were designated for detailed field investigations (that is, the Haji-Gak Prospect, Farenjal, and NE Haji-Gak subareas); these subareas were extracted from the area's image mosaic and are provided as separate embedded geotiff images.
Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Aynak mineral district, which has copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA,2008,2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Aynak) and the WGS84 datum. The final image mosaics were subdivided into four overlapping tiles or quadrants because of the large size of the target area. The four image tiles (or quadrants) for the Aynak area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Aynak study area, five subareas were designated for detailed field investigations (that is, the Bakhel-Charwaz, Kelaghey-Kakhay, Kharuti-Dawrankhel, Logar Valley, and Yagh-Darra/Gul-Darra subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ghunday-Achin mineral district, which has magnesite and talc deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Ghunday-Achin) and the WGS84 datum. The final image mosaics were subdivided into six overlapping tiles or quadrants because of the large size of the target area. The six image tiles (or quadrants) for the Ghunday-Achin area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Ghunday-Achin study area, two subareas were designated for detailed field investigations (that is, the Achin-Magnesite and Ghunday-Mamahel subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
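Because the tiles are delivered as embedded (georeferenced) GeoTIFF images, most GIS packages read them directly; the snippet below is a small sketch of doing the same programmatically with the rasterio library. The file name is hypothetical and does not correspond to an actual DS file name.

```python
import rasterio

# Hypothetical tile name; the actual Data Series file names differ.
with rasterio.open("ghunday_achin_tile1_cir.tif") as src:
    print(src.count, "bands,", src.width, "x", src.height, "pixels")
    print("CRS:", src.crs)                       # expected: UTM zone 42N, WGS84
    print("Pixel size:", src.transform.a, "x", -src.transform.e, "m")
    cir = src.read()                              # (bands, rows, cols) array
```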
MOVING BEYOND COLOR: THE CASE FOR MULTISPECTRAL IMAGING IN BRIGHTFIELD PATHOLOGY.
Cukierski, William J; Qi, Xin; Foran, David J
2009-01-01
A multispectral camera is capable of imaging a histologic slide at narrow bandwidths over the range of the visible spectrum. While several uses for multispectral imaging (MSI) have been demonstrated in pathology [1, 2], there is no unified consensus over when and how MSI might benefit automated analysis [3, 4]. In this work, we use a linear-algebra framework to investigate the relationship between the spectral image and its standard-image counterpart. The multispectral "cube" is treated as an extension of a traditional image in a high-dimensional color space. The concept of metamers is introduced and used to derive regions of the visible spectrum where MSI may provide an advantage. Furthermore, histological stains which are amenable to analysis by MSI are reported. We show the Commission internationale de l'éclairage (CIE) 1931 transformation from spectrum to color is non-neighborhood preserving. Empirical results are demonstrated on multispectral images of peripheral blood smears.
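The point that the spectrum-to-colour projection is many-to-one (and hence not neighborhood preserving) can be illustrated numerically. The sketch below is not the authors' code; it uses crude Gaussian stand-ins for the CIE 1931 colour-matching functions purely to show that two different sampled spectra can project to the same tristimulus values, i.e. act as metamers.

```python
import numpy as np

wl = np.arange(400, 701, 5, dtype=float)           # wavelength samples, nm

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Very rough Gaussian approximations of the CIE 1931 colour-matching functions.
xbar = 1.06 * gauss(600, 38) + 0.36 * gauss(445, 20)
ybar = 1.00 * gauss(555, 45)
zbar = 1.78 * gauss(450, 25)
cmf = np.vstack([xbar, ybar, zbar])                 # 3 x N linear projection

def to_xyz(spectrum):
    return cmf @ spectrum                           # rank-3 projection to colour

s1 = gauss(550, 30)                                 # a smooth greenish spectrum
# Add a component from the (N-3)-dimensional null space of the projection;
# s2 differs from s1 as a spectrum but maps to the same colour (a metamer).
null_basis = np.linalg.svd(cmf)[2][3:]
s2 = s1 + 0.05 * null_basis[0]

print(np.allclose(to_xyz(s1), to_xyz(s2)))          # True
```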
Smartphone-based multispectral imaging: system development and potential for mobile skin diagnosis
Kim, Sewoong; Cho, Dongrae; Kim, Jihun; Kim, Manjae; Youn, Sangyeon; Jang, Jae Eun; Je, Minkyu; Lee, Dong Hun; Lee, Boreom; Farkas, Daniel L.; Hwang, Jae Youn
2016-01-01
We investigate the potential of mobile smartphone-based multispectral imaging for the quantitative diagnosis and management of skin lesions. Recently, various mobile devices such as smartphones have emerged as healthcare tools and have been applied to the early diagnosis of nonmalignant and malignant skin diseases. In particular, combining them with an advanced optical imaging technique such as multispectral imaging and analysis would be beneficial for the early diagnosis of such skin diseases and for further quantitative monitoring of prognosis after treatment at home. Thus, we demonstrate here the development of a smartphone-based multispectral imaging system with high portability and its potential for mobile skin diagnosis. The results suggest that smartphone-based multispectral imaging and analysis has great potential as a healthcare tool for quantitative mobile skin diagnosis. PMID:28018743
Multispectral imaging system for contaminant detection
NASA Technical Reports Server (NTRS)
Poole, Gavin H. (Inventor)
2003-01-01
An automated inspection system for detecting digestive contaminants on food items as they are being processed for consumption includes a conveyor for transporting the food items, a light sealed enclosure which surrounds a portion of the conveyor, with a light source and a multispectral or hyperspectral digital imaging camera disposed within the enclosure. Operation of the conveyor, light source and camera are controlled by a central computer unit. Light reflected by the food items within the enclosure is detected in predetermined wavelength bands, and detected intensity values are analyzed to detect the presence of digestive contamination.
Automated road network extraction from high spatial resolution multi-spectral imagery
NASA Astrophysics Data System (ADS)
Zhang, Qiaoping
For the last three decades, the Geomatics Engineering and Computer Science communities have considered automated road network extraction from remotely-sensed imagery to be a challenging and important research topic. The main objective of this research is to investigate the theory and methodology of automated feature extraction for image-based road database creation, refinement or updating, and to develop a series of algorithms for road network extraction from high resolution multi-spectral imagery. The proposed framework for road network extraction from multi-spectral imagery begins with an image segmentation using the k-means algorithm. This step mainly concerns the exploitation of the spectral information for feature extraction. The road cluster is automatically identified using a fuzzy classifier based on a set of predefined road surface membership functions. These membership functions are established based on the general spectral signature of road pavement materials and the corresponding normalized digital numbers on each multi-spectral band. Shape descriptors of the Angular Texture Signature are defined and used to reduce the misclassifications between roads and other spectrally similar objects (e.g., crop fields, parking lots, and buildings). An iterative and localized Radon transform is developed for the extraction of road centerlines from the classified images. The purpose of the transform is to accurately and completely detect the road centerlines. It is able to find short, long, and even curvilinear lines. The input image is partitioned into a set of subset images called road component images. An iterative Radon transform is locally applied to each road component image. At each iteration, road centerline segments are detected based on an accurate estimation of the line parameters and line widths. Three localization approaches are implemented and compared using qualitative and quantitative methods. Finally, the road centerline segments are grouped into a road network. The extracted road network is evaluated against a reference dataset using a line segment matching algorithm. The entire process is unsupervised and fully automated. Based on extensive experimentation on a variety of remotely-sensed multi-spectral images, the proposed methodology achieves a moderate success in automating road network extraction from high spatial resolution multi-spectral imagery.
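The first stage of the framework, spectral clustering of the image with the k-means algorithm, is easy to sketch. The snippet below is a generic illustration (not the author's implementation) using scikit-learn on a 4-band image array; the number of clusters and the subsequent identification of the road cluster are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(image, n_clusters=8, seed=0):
    """Cluster the pixels of a (rows, cols, bands) multispectral image.

    Returns a (rows, cols) label image; deciding which label is 'road'
    (e.g. via spectral membership functions) is a separate step."""
    rows, cols, bands = image.shape
    pixels = image.reshape(-1, bands).astype(np.float64)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(pixels)
    return labels.reshape(rows, cols)
```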
Eliminate background interference from latent fingerprints using ultraviolet multispectral imaging
NASA Astrophysics Data System (ADS)
Huang, Wei; Xu, Xiaojing; Wang, Guiqiang
2014-02-01
Fingerprints are the most important evidence in crime scene. The technology of developing latent fingerprints is one of the hottest research areas in forensic science. Recently, multispectral imaging which has shown great capability in fingerprints development, questioned document detection and trace evidence examination is used in detecting material evidence. This paper studied how to eliminate background interference from non-porous and porous surface latent fingerprints by rotating filter wheel ultraviolet multispectral imaging. The results approved that background interference could be removed clearly from latent fingerprints by using multispectral imaging in ultraviolet bandwidth.
Li, Hanlun; Zhang, Aiwu; Hu, Shaoxing
2015-01-01
This paper describes an airborne high-resolution four-camera multispectral system which mainly consists of four identical monochrome cameras equipped with four interchangeable bandpass filters. For this multispectral system, an automatic multispectral data composing method was proposed. The homography registration model was chosen, and the scale-invariant feature transform (SIFT) and random sample consensus (RANSAC) were used to generate matching points. For the difficult registration problem between visible-band images and near-infrared-band images in cases lacking manmade objects, we presented an effective method based on the structural characteristics of the system. Experiments show that our method can acquire high-quality multispectral images and that the band-to-band alignment error of the composed multispectral images is less than 2.5 pixels. PMID:26205264
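The homography-based band-to-band registration with SIFT and RANSAC can be sketched with OpenCV (assuming OpenCV ≥ 4.4, where SIFT is in the main package). This is a generic illustration for two 8-bit single-band images, not the authors' pipeline, which adds a structure-based strategy for visible-to-NIR pairs.

```python
import cv2
import numpy as np

def register_band(moving, reference, ratio=0.75):
    """Estimate a homography mapping 'moving' onto 'reference' with SIFT + RANSAC
    and warp the moving band into the reference frame (8-bit grayscale inputs)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(moving, None)
    kp2, des2 = sift.detectAndCompute(reference, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]  # Lowe ratio test

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    h, w = reference.shape[:2]
    return cv2.warpPerspective(moving, H, (w, h)), H
```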
M-DAS: System for multispectral data analysis. [in Saginaw Bay, Michigan
NASA Technical Reports Server (NTRS)
Johnson, R. H.
1975-01-01
M-DAS is a ground data processing system designed for analysis of multispectral data. M-DAS operates on multispectral data from LANDSAT, S-192, M2S and other sources in CCT form. Interactive training by operator-investigators using a variable cursor on a color display was used to derive optimum processing coefficients and data on cluster separability. An advanced multivariate normal-maximum likelihood processing algorithm was used to produce output in various formats: color-coded film images, geometrically corrected map overlays, moving displays of scene sections, coverage tabulations and categorized CCTs. The analysis procedure for M-DAS involves three phases: (1) screening and training, (2) analysis of training data to compute performance predictions and processing coefficients, and (3) processing of multichannel input data into categorized results. Typical M-DAS applications involve iteration between each of these phases. A series of photographs of the M-DAS display are used to illustrate M-DAS operation.
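The multivariate normal maximum-likelihood step at the heart of this kind of classification can be sketched generically. The code below is an illustrative Gaussian maximum-likelihood classifier, not the M-DAS implementation: class means and covariances are estimated from training pixels, and each pixel is assigned to the class with the highest log-likelihood.

```python
import numpy as np

def train_gaussians(training_pixels):
    """training_pixels: dict class_name -> (n_samples, n_bands) array."""
    models = {}
    for name, X in training_pixels.items():
        mean = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        models[name] = (mean, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return models

def classify(pixels, models):
    """pixels: (n_pixels, n_bands). Returns the maximum-likelihood class per pixel."""
    names = list(models)
    scores = np.empty((pixels.shape[0], len(names)))
    for j, name in enumerate(names):
        mean, inv_cov, logdet = models[name]
        d = pixels - mean
        # log-likelihood up to a constant: -0.5 * (logdet + d^T Sigma^-1 d)
        scores[:, j] = -0.5 * (logdet + np.einsum("ij,jk,ik->i", d, inv_cov, d))
    return np.array(names)[scores.argmax(axis=1)]
```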
NASA Astrophysics Data System (ADS)
Noordmans, Herke Jan; de Roode, Rowland; Verdaasdonk, Rudolf
2007-03-01
Multi-spectral images of human tissue taken in vivo often contain image-alignment problems because patients have difficulty retaining their posture during the acquisition time of 20 seconds. Previous attempts to correct these motion errors with image-registration software developed for MR or CT data proved too slow and error-prone for practical use with multi-spectral images. A new software package has been developed which allows the user to play a decisive role in the registration process: the user can monitor the progress of the registration continuously and force it in the right direction when it starts to fail. The software efficiently exploits video-card hardware to gain speed and to provide a perfect subvoxel correspondence between the registration field and the display. An 8-bit graphics card was used to efficiently register and resample 12-bit images using the hardware interpolation modes present on the card. To show the feasibility of this new registration process, the software was applied in clinical practice to evaluate the dosimetry for psoriasis and KTP laser treatment. The microscopic differences between images of normal skin and skin exposed to UV light proved that an affine registration step, including zooming and slanting, is critical for a subsequent elastic match to succeed. The combination of user-interactive registration software with optimal use of PC video-card hardware greatly improves the speed of multi-spectral image registration.
Image processing of underwater multispectral imagery
Zawada, D. G.
2003-01-01
Capturing in situ fluorescence images of marine organisms presents many technical challenges. The effects of the medium, as well as the particles and organisms within it, are intermixed with the desired signal. Methods for extracting and preparing the imagery for analysis are discussed in reference to a novel underwater imaging system called the low-light-level underwater multispectral imaging system (LUMIS). The instrument supports both uni- and multispectral collections, each of which is discussed in the context of an experimental application. In unispectral mode, LUMIS was used to investigate the spatial distribution of phytoplankton. A thin sheet of laser light (532 nm) induced chlorophyll fluorescence in the phytoplankton, which was recorded by LUMIS. Inhomogeneities in the light sheet led to the development of a beam-pattern-correction algorithm. Separating individual phytoplankton cells from a weak background fluorescence field required a two-step procedure consisting of edge detection followed by a series of binary morphological operations. In multispectral mode, LUMIS was used to investigate the bio-assay potential of fluorescent pigments in corals. Problems with the commercial optical-splitting device produced nonlinear distortions in the imagery. A tessellation algorithm, including an automated tie-point-selection procedure, was developed to correct the distortions. Only pixels corresponding to coral polyps were of interest for further analysis. Extraction of these pixels was performed by a dynamic global-thresholding algorithm.
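The two-step cell-extraction idea (edge detection followed by binary morphological operations) and the global thresholding used for the coral imagery can be sketched generically; the snippet below is an illustration with OpenCV, not the LUMIS processing code, and the Canny thresholds and kernel size are assumptions.

```python
import cv2

def extract_bright_objects(frame_8bit):
    """Schematic extraction of fluorescing objects from a weak background:
    Canny edges, then morphological closing/opening to form solid regions."""
    edges = cv2.Canny(frame_8bit, 30, 90)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)   # join edge fragments
    opened = cv2.morphologyEx(closed, cv2.MORPH_OPEN, kernel)   # drop isolated noise
    return opened > 0

def global_threshold(frame_8bit):
    """Schematic stand-in for a dynamic global threshold (here: Otsu's method)."""
    _, mask = cv2.threshold(frame_8bit, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask > 0
```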
Multispectral imaging approach for simplified non-invasive in-vivo evaluation of gingival erythema
NASA Astrophysics Data System (ADS)
Eckhard, Timo; Valero, Eva M.; Nieves, Juan L.; Gallegos-Rueda, José M.; Mesa, Francisco
2012-03-01
Erythema is a common visual sign of gingivitis. In this work, a new and simple low-cost image capture and analysis method for erythema assessment is proposed. The method is based on digital still images of gingivae and is applied on a pixel-by-pixel basis. Multispectral images are acquired with a conventional digital camera and multiplexed LED illumination panels with peak wavelengths of 460 nm and 630 nm. An automatic workflow segments teeth from gingiva regions in the images and creates a map of local blood-oxygenation levels, which relates to the presence of erythema. The map is computed from the ratio of the two spectral images. An advantage of the proposed approach is that the whole process is easy to manage by dental health-care professionals in a clinical environment.
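The core of the analysis, a pixel-by-pixel ratio of the two spectral images, reduces to a few lines. The sketch below is a generic illustration under assumed inputs: two co-registered images taken under the 460-nm and 630-nm LED panels and a binary gingiva mask produced by the tooth/gingiva segmentation step; the exact ratio-to-oxygenation mapping of the paper is not reproduced.

```python
import numpy as np

def erythema_ratio_map(img_630, img_460, gingiva_mask, eps=1e-6):
    """Pixel-wise ratio of the 630-nm image to the 460-nm image over gingiva pixels.

    Higher ratios are treated here only as a schematic proxy for increased
    blood content; calibration to oxygenation is a separate step."""
    ratio = img_630.astype(np.float32) / (img_460.astype(np.float32) + eps)
    ratio[~gingiva_mask] = np.nan        # keep only gingiva pixels in the map
    return ratio
```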
Fast Lossless Compression of Multispectral-Image Data
NASA Technical Reports Server (NTRS)
Klimesh, Matthew
2006-01-01
An algorithm that effects fast lossless compression of multispectral-image data is based on low-complexity, proven adaptive-filtering algorithms. This algorithm is intended for use in compressing multispectral-image data aboard spacecraft for transmission to Earth stations. Variants of this algorithm could be useful for lossless compression of three-dimensional medical imagery and, perhaps, for compressing image data in general.
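Although the specific NASA algorithm is not reproduced here, the general idea of low-complexity adaptive prediction for lossless image compression can be sketched: a small linear predictor is updated on the fly (a sign-sign LMS rule in this illustration), and only the prediction residuals, which are much more compressible, would be passed to an entropy coder. The predictor structure and step size below are assumptions.

```python
import numpy as np

def adaptive_residuals(band, step=1e-3):
    """Schematic adaptive predictor: predict each pixel from its left and upper
    neighbours, adapt the two weights after every pixel, and return residuals.
    (Illustrative only; not the algorithm of the cited NASA work.)"""
    img = band.astype(np.float64)
    w = np.array([0.5, 0.5])                     # weights for [left, up] neighbours
    residuals = img.copy()                       # border pixels kept verbatim
    for r in range(1, img.shape[0]):
        for c in range(1, img.shape[1]):
            x = np.array([img[r, c - 1], img[r - 1, c]])
            pred = w @ x
            e = img[r, c] - pred
            residuals[r, c] = e
            w += step * np.sign(e) * np.sign(x)  # sign-sign LMS update
    # In a real coder the residuals would be mapped to integers and entropy coded.
    return residuals
```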
MTI science, data products, and ground-data processing overview
NASA Astrophysics Data System (ADS)
Szymanski, John J.; Atkins, William H.; Balick, Lee K.; Borel, Christoph C.; Clodius, William B.; Christensen, R. Wynn; Davis, Anthony B.; Echohawk, J. C.; Galbraith, Amy E.; Hirsch, Karen L.; Krone, James B.; Little, Cynthia K.; McLachlan, Peter M.; Morrison, Aaron; Pollock, Kimberly A.; Pope, Paul A.; Novak, Curtis; Ramsey, Keri A.; Riddle, Emily E.; Rohde, Charles A.; Roussel-Dupre, Diane C.; Smith, Barham W.; Smith, Kathy; Starkovich, Kim; Theiler, James P.; Weber, Paul G.
2001-08-01
The mission of the Multispectral Thermal Imager (MTI) satellite is to demonstrate the efficacy of highly accurate multispectral imaging for passive characterization of urban and industrial areas, as well as sites of environmental interest. The satellite makes top-of-atmosphere radiance measurements that are subsequently processed into estimates of surface properties such as vegetation health, temperatures, material composition and others. The MTI satellite also provides simultaneous data for atmospheric characterization at high spatial resolution. To utilize these data the MTI science program has several coordinated components, including modeling, comprehensive ground-truth measurements, image acquisition planning, data processing and data interpretation and analysis. Algorithms have been developed to retrieve a multitude of physical quantities and these algorithms are integrated in a processing pipeline architecture that emphasizes automation, flexibility and programmability. In addition, the MTI science team has produced detailed site, system and atmospheric models to aid in system design and data analysis. This paper provides an overview of the MTI research objectives, data products and ground data processing.
Feasibility study and quality assessment of unmanned aircraft system-derived multispectral images
NASA Astrophysics Data System (ADS)
Chang, Kuo-Jen
2017-04-01
The purpose of this study is to explore the precision and applicability of UAS-derived multispectral images. In this study, the Micro-MCA6 multispectral camera was mounted on a quadcopter. The Micro-MCA6 captures synchronized images of each single band. By means of geotagged images and control points, orthomosaic images of each band were first generated at 14 cm resolution. The complete multispectral image was then merged from the 6 bands. In order to improve the spatial resolution, the 6-band image was fused with a 9 cm resolution image taken from an RGB camera. Image quality was evaluated for each band using control points and check points. The standard deviations of the errors are within 1 to 2 pixels for each band. The quality of the multispectral image was also compared with a 3 cm resolution orthomosaic RGB image gathered from the UAV in the same mission. The standard deviations of those errors are within 2 to 3 pixels. The results show that the errors result from image blur and band dislocation at object edges. Finally, the normalized difference vegetation index (NDVI) was extracted from the image to explore the condition of the vegetation and the nature of the environment. This study demonstrates the feasibility and capability of high-resolution multispectral images.
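The NDVI computation mentioned above follows the standard formula; a minimal sketch (red and near-infrared band arrays assumed co-registered) is:

```python
# NDVI from co-registered red and near-infrared bands (standard formula;
# array names are placeholders, not the study's file layout).
import numpy as np

def ndvi(nir, red, eps=1e-9):
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)   # values in [-1, 1]
```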
NASA Astrophysics Data System (ADS)
Saager, Rolf B.; Baldado, Melissa L.; Rowland, Rebecca A.; Kelly, Kristen M.; Durkin, Anthony J.
2018-04-01
With recent proliferation in compact and/or low-cost clinical multispectral imaging approaches and commercially available components, questions remain whether they adequately capture the requisite spectral content of their applications. We present a method to emulate the spectral range and resolution of a variety of multispectral imagers, based on in-vivo data acquired from spatial frequency domain spectroscopy (SFDS). This approach simulates spectral responses over 400 to 1100 nm. Comparing emulated data with full SFDS spectra of in-vivo tissue affords the opportunity to evaluate whether the sparse spectral content of these imagers can (1) account for all sources of optical contrast present (completeness) and (2) robustly separate and quantify sources of optical contrast (crosstalk). We validate the approach over a range of tissue-simulating phantoms, comparing the SFDS-based emulated spectra against measurements from an independently characterized multispectral imager. Emulated results match the imager across all phantoms (<3 % absorption, <1 % reduced scattering). In-vivo test cases (burn wounds and photoaging) illustrate how SFDS can be used to evaluate different multispectral imagers. This approach provides an in-vivo measurement method to evaluate the performance of multispectral imagers specific to their targeted clinical applications and can assist in the design and optimization of new spectral imaging devices.
LANDSAT-4 multispectral scanner (MSS) subsystem radiometric characterization
NASA Technical Reports Server (NTRS)
Alford, W. (Editor); Barker, J. (Editor); Clark, B. P.; Dasgupta, R.
1983-01-01
The multispectral scanner (MSS) and its spectral characteristics are described, and methods are given for relating video digital levels on computer compatible tapes to radiance into the sensor. Topics covered include prelaunch calibration procedures and postlaunch radiometric processing. Examples of current data resident on the MSS image processing system are included. The MSS on LANDSAT 4 is compared with the scanners on earlier LANDSAT satellites.
ADP of multispectral scanner data for land use mapping
NASA Technical Reports Server (NTRS)
Hoffer, R. M.
1971-01-01
The advantages and disadvantages of various remote sensing instrumentation and analysis techniques are reviewed. The use of multispectral scanner data and the automatic data processing techniques are considered. A computer-aided analysis system for remote sensor data is described with emphasis on the image display, statistics processor, wavelength band selection, classification processor, and results display. Advanced techniques in using spectral and temporal data are also considered.
Automated oil spill detection with multispectral imagery
NASA Astrophysics Data System (ADS)
Bradford, Brian N.; Sanchez-Reyes, Pedro J.
2011-06-01
In this publication we present an automated detection method for ocean surface oil, like that which existed in the Gulf of Mexico as a result of the April 20, 2010 Deepwater Horizon drilling rig explosion. Regions of surface oil in airborne imagery are isolated using red, green, and blue bands from multispectral data sets. The oil shape isolation procedure involves a series of image processing functions to draw out the visual phenomenological features of the surface oil. These functions include selective color band combinations, contrast enhancement and histogram warping. An image segmentation process then separates out contiguous regions of oil to provide a raster mask to an analyst. We automate the detection algorithm to allow large volumes of data to be processed in a short time period, which can provide timely oil coverage statistics to response crews. Geo-referenced and mosaicked data sets enable the largest identified oil regions to be mapped to exact geographic coordinates. In our simulation, multispectral imagery came from multiple sources including first-hand data collected from the Gulf. Results of the simulation show the oil spill coverage area as a raster mask, along with histogram statistics of the oil pixels. A rough square footage estimate of the coverage is reported if the image ground sample distance is available.
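A hedged sketch of the general pipeline (band combination, contrast stretch, thresholding, connected-component labelling, and coverage statistics) follows; the specific band combination, percentile stretch, and threshold are illustrative choices, not the authors' exact functions.

```python
# Sketch of an oil-mask pipeline in the spirit of the description above.
import numpy as np
from scipy import ndimage

def detect_oil(red, green, blue, thresh=0.6, gsd_m=None):
    # Simple band combination emphasising the oil sheen (illustrative choice).
    comb = red.astype(np.float64) - 0.5 * (green.astype(np.float64) +
                                           blue.astype(np.float64))
    # Percentile contrast stretch to [0, 1] (stand-in for histogram warping).
    lo, hi = np.percentile(comb, [2, 98])
    stretched = np.clip((comb - lo) / (hi - lo + 1e-9), 0.0, 1.0)
    mask = stretched > thresh                        # binary oil mask
    labels, n = ndimage.label(mask)                  # contiguous oil regions
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    stats = {"n_regions": n, "oil_pixels": int(mask.sum())}
    if gsd_m is not None:                            # rough area if GSD known
        stats["area_m2"] = stats["oil_pixels"] * gsd_m ** 2
    return mask, labels, sizes, stats
```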
Evaluation of Chilling Injury in Mangoes Using Multispectral Imaging.
Hashim, Norhashila; Onwude, Daniel I; Osman, Muhamad Syafiq
2018-05-01
Commodities originating from tropical and subtropical climes are prone to chilling injury (CI). This injury can affect the quality and marketing potential of mango after harvest, and subsequently consumer acceptance. In this study, the appearance of CI symptoms in mango was evaluated non-destructively using multispectral imaging. The fruit were stored at 4 °C to induce CI, and at 12 °C to preserve the quality of the control samples, for 4 days before they were taken out and stored at ambient temperature for 24 h. Measurements using multispectral imaging and standard reference methods were conducted before and after storage. The performance of multispectral imaging was compared against standard reference properties including moisture content (MC), total soluble solids (TSS) content, firmness, pH, and color. Least squares support vector machine (LS-SVM) combined with principal component analysis (PCA) was used to discriminate CI samples from control and before-storage samples. The statistical results demonstrated significant changes in the reference quality properties of samples before and after storage. The results also revealed that the multispectral parameters have a strong correlation with the reference parameters L*, a*, TSS, and MC. The MC and L* were found to be the best reference parameters for identifying the severity of CI in mangoes. PCA and LS-SVM analysis indicated that the fruit were successfully classified into their categories, that is, before storage, control, and CI. This indicates that the multispectral imaging technique is feasible for detecting CI in mangoes during postharvest storage and processing. This paper demonstrates a fast, easy, and accurate method of nondestructively identifying the effect of cold storage on mango. The method presented in this paper can be used industrially to efficiently differentiate fruits after low-temperature storage. © 2018 Institute of Food Technologists®.
Multispectral Palmprint Recognition Using a Quaternion Matrix
Xu, Xingpeng; Guo, Zhenhua; Song, Changjiang; Li, Yafeng
2012-01-01
Palmprints have been widely studied for biometric recognition for many years. Traditionally, a white light source is used for illumination. Recently, multispectral imaging has drawn attention because of its high recognition accuracy. Multispectral palmprint systems can provide more discriminant information under different illuminations in a short time, thus they can achieve better recognition accuracy. Previously, multispectral palmprint images were taken as a kind of multi-modal biometrics, and the fusion scheme on the image level or matching score level was used. However, some spectral information will be lost during image level or matching score level fusion. In this study, we propose a new method for multispectral images based on a quaternion model which could fully utilize the multispectral information. Firstly, multispectral palmprint images captured under red, green, blue and near-infrared (NIR) illuminations were represented by a quaternion matrix, then principal component analysis (PCA) and discrete wavelet transform (DWT) were applied respectively on the matrix to extract palmprint features. After that, Euclidean distance was used to measure the dissimilarity between different features. Finally, the sum of two distances and the nearest neighborhood classifier were employed for recognition decision. Experimental results showed that using the quaternion matrix can achieve a higher recognition rate. Given 3000 test samples from 500 palms, the recognition rate can be as high as 98.83%. PMID:22666049
Contribution of non-negative matrix factorization to the classification of remote sensing images
NASA Astrophysics Data System (ADS)
Karoui, M. S.; Deville, Y.; Hosseini, S.; Ouamri, A.; Ducrot, D.
2008-10-01
Remote sensing has become an unavoidable tool for better managing our environment, generally by producing maps of land cover using classification techniques. The classification process requires some pre-processing, especially for data size reduction. The most usual technique is Principal Component Analysis. Another approach consists in regarding each pixel of the multispectral image as a mixture of pure elements contained in the observed area. Using Blind Source Separation (BSS) methods, one can hope to unmix each pixel and to perform the recognition of the classes constituting the observed scene. Our contribution consists in using Non-negative Matrix Factorization (NMF) combined with sparse coding as a solution to BSS, in order to generate new images (which are at least partly separated images) from HRV SPOT images of the Oran area, Algeria. These images are then used as inputs of a supervised classifier integrating textural information. The classification results for these "separated" images show a clear improvement (correct pixel classification rate improved by more than 20%) compared to the classification of the initial (i.e. non-separated) images. These results show the contribution of NMF as an attractive pre-processing step for the classification of multispectral remote sensing imagery.
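As a rough sketch of the unmixing idea, the following uses scikit-learn's plain NMF on a pixels-by-bands matrix; the paper's sparse-coding variant and the SPOT-specific pre-processing are not reproduced, and the array layout is an assumption.

```python
# Sketch: NMF factorisation of a multispectral cube into per-pixel abundances
# and endmember spectra (scikit-learn NMF standing in for the paper's method).
import numpy as np
from sklearn.decomposition import NMF

def unmix_nmf(cube, n_sources):
    """cube: (rows, cols, bands) non-negative multispectral image."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)                  # pixels x bands, all >= 0
    model = NMF(n_components=n_sources, init="nndsvda", max_iter=500)
    abundances = model.fit_transform(X)          # pixels x sources
    endmembers = model.components_               # sources x bands
    # Each abundance image can then be fed to a supervised classifier.
    return abundances.reshape(rows, cols, n_sources), endmembers
```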
Digital image processing for the earth resources technology satellite data.
NASA Technical Reports Server (NTRS)
Will, P. M.; Bakis, R.; Wesley, M. A.
1972-01-01
This paper discusses the problems of digital processing of the large volumes of multispectral image data that are expected to be received from the ERTS program. Correction of geometric and radiometric distortions are discussed and a byte oriented implementation is proposed. CPU timing estimates are given for a System/360 Model 67, and show that a processing throughput of 1000 image sets per week is feasible.
MOVING BEYOND COLOR: THE CASE FOR MULTISPECTRAL IMAGING IN BRIGHTFIELD PATHOLOGY
Cukierski, William J.; Qi, Xin; Foran, David J.
2009-01-01
A multispectral camera is capable of imaging a histologic slide at narrow bandwidths over the range of the visible spectrum. While several uses for multispectral imaging (MSI) have been demonstrated in pathology [1, 2], there is no unified consensus over when and how MSI might benefit automated analysis [3, 4]. In this work, we use a linear-algebra framework to investigate the relationship between the spectral image and its standard-image counterpart. The multispectral “cube” is treated as an extension of a traditional image in a high-dimensional color space. The concept of metamers is introduced and used to derive regions of the visible spectrum where MSI may provide an advantage. Furthermore, histological stains which are amenable to analysis by MSI are reported. We show the Commission internationale de l’éclairage (CIE) 1931 transformation from spectrum to color is non-neighborhood preserving. Empirical results are demonstrated on multispectral images of peripheral blood smears. PMID:19997528
Mapping invasive weeds and their control with spatial information technologies
USDA-ARS?s Scientific Manuscript database
We discuss applications of airborne multispectral digital imaging systems, imaging processing techniques, global positioning systems (GPS), and geographic information systems (GIS) for mapping the invasive weeds giant salvinia (Salvinia molesta) and Brazilian pepper (Schinus terebinthifolius) and fo...
Automatic Hotspot and Sun Glint Detection in UAV Multispectral Images.
Ortega-Terol, Damian; Hernandez-Lopez, David; Ballesteros, Rocio; Gonzalez-Aguilera, Diego
2017-10-15
Recent advances in sensors, photogrammetry and computer vision have led to high levels of automation in 3D reconstruction processes for generating dense models and multispectral orthoimages from Unmanned Aerial Vehicle (UAV) images. However, these cartographic products are sometimes blurred and degraded due to sun reflection effects which reduce the image contrast and colour fidelity in photogrammetry and the quality of radiometric values in remote sensing applications. This paper proposes an automatic approach for detecting sun reflection problems (hotspot and sun glint) in multispectral images acquired with an Unmanned Aerial Vehicle (UAV), based on a photogrammetric strategy included in flight planning and control software developed by the authors. In particular, two main consequences are derived from the approach developed: (i) different areas of the images can be excluded since they contain sun reflection problems; (ii) the cartographic products obtained (e.g., digital terrain model, orthoimages) and the agronomical parameters computed (e.g., normalized difference vegetation index, NDVI) are improved since radiometric defects in pixels are not considered. Finally, an accuracy assessment was performed in order to analyse the error in the detection process, yielding errors of around 10 pixels for a ground sample distance (GSD) of 5 cm, which is perfectly valid for agricultural applications. This error confirms that the precision in the detection of sun reflections can be guaranteed using this approach and current low-cost UAV technology.
A multispectral imaging approach for diagnostics of skin pathologies
NASA Astrophysics Data System (ADS)
Lihacova, Ilze; Derjabo, Aleksandrs; Spigulis, Janis
2013-06-01
A noninvasive multispectral imaging method was applied to the diagnostics of different skin pathologies such as nevus, basal cell carcinoma, and melanoma. A melanoma diagnostic parameter using three spectral bands (540 nm, 650 nm and 950 nm) was developed and calculated for nevus, melanoma and basal cell carcinoma. A simple multispectral diagnostic device was built and applied for skin assessment. The development and application of the multispectral diagnostic method are described further in this article.
Exploiting physical constraints for multi-spectral exo-planet detection
NASA Astrophysics Data System (ADS)
Thiébaut, Éric; Devaney, Nicholas; Langlois, Maud; Hanley, Kenneth
2016-07-01
We derive a physical model of the on-axis PSF for a high contrast imaging system such as GPI or SPHERE. This model is based on a multi-spectral Taylor series expansion of the diffraction pattern and predicts that the speckles should be a combination of spatial modes with deterministic chromatic magnification and weighting. We propose to remove most of the residuals by fitting this model on a set of images at multiple wavelengths and times. On simulated data, we demonstrate that our approach achieves very good speckle suppression without additional heuristic parameters. The residual speckles [1, 2] set the most serious limitation on the detection of exo-planets in high contrast coronagraphic images provided by instruments such as SPHERE [3] at the VLT, GPI [4, 5] at Gemini, or SCExAO [6] at Subaru. A number of post-processing methods have been proposed to remove as much as possible of the residual speckles while preserving the signal from the planets. These methods exploit the fact that the speckles and the planetary signal have different temporal and spectral behaviors. Some methods like LOCI [7] are based on angular differential imaging (ADI) [8], spectral differential imaging (SDI) [9, 10], or on a combination of ADI and SDI [11]. Instead of working on image differences, we propose to tackle exo-planet detection as an inverse problem where a model of the residual speckles is fitted to the set of multi-spectral images and, possibly, multiple exposures. In order to reduce the number of degrees of freedom, we impose specific constraints on the spatio-spectral distribution of stellar speckles. These constraints are deduced from a multi-spectral Taylor series expansion of the diffraction pattern for an on-axis source, which implies that the speckles are a combination of spatial modes with deterministic chromatic magnification and weighting. Using simulated data, the efficiency of speckle removal by fitting the proposed multi-spectral model is compared to the result of using an approximation based on the singular value decomposition of the rescaled images. We show how the difficult problem of fitting a bilinear model to the data can be solved in practice. The results are promising for further developments, including application to real data and joint planet detection in multi-variate data (multi-spectral and multiple-exposure images).
Multispectral imaging for biometrics
NASA Astrophysics Data System (ADS)
Rowe, Robert K.; Corcoran, Stephen P.; Nixon, Kristin A.; Ostrom, Robert E.
2005-03-01
Automated identification systems based on fingerprint images are subject to two significant types of error: an incorrect decision about the identity of a person due to a poor quality fingerprint image and incorrectly accepting a fingerprint image generated from an artificial sample or altered finger. This paper discusses the use of multispectral sensing as a means to collect additional information about a finger that significantly augments the information collected using a conventional fingerprint imager based on total internal reflectance. In the context of this paper, "multispectral sensing" is used broadly to denote a collection of images taken under different polarization conditions and illumination configurations, as well as using multiple wavelengths. Background information is provided on conventional fingerprint imaging. A multispectral imager for fingerprint imaging is then described and a means to combine the two imaging systems into a single unit is discussed. Results from an early-stage prototype of such a system are shown.
Fast and accurate image recognition algorithms for fresh produce food safety sensing
NASA Astrophysics Data System (ADS)
Yang, Chun-Chieh; Kim, Moon S.; Chao, Kuanglin; Kang, Sukwon; Lefcourt, Alan M.
2011-06-01
This research developed and evaluated the multispectral algorithms derived from hyperspectral line-scan fluorescence imaging under violet LED excitation for detection of fecal contamination on Golden Delicious apples. The algorithms utilized the fluorescence intensities at four wavebands, 680 nm, 684 nm, 720 nm, and 780 nm, for computation of simple functions for effective detection of contamination spots created on the apple surfaces using four concentrations of aqueous fecal dilutions. The algorithms detected more than 99% of the fecal spots. The effective detection of feces showed that a simple multispectral fluorescence imaging algorithm based on violet LED excitation may be appropriate to detect fecal contamination on fast-speed apple processing lines.
Shallow sea-floor reflectance and water depth derived by unmixing multispectral imagery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bierwirth, P.N.; Lee, T.J.; Burne, R.V.
1993-03-01
A major problem for mapping shallow water zones by the analysis of remotely sensed data is that contrast effects due to water depth obscure and distort the spectral nature of the substrate. This paper outlines a new method which unmixes the exponential influence of depth in each pixel by employing a mathematical constraint. This leaves a multispectral residual which represents relative substrate reflectance. Inputs to the process are the raw multispectral data and water attenuation coefficients derived by the co-analysis of known bathymetry and remotely sensed data. Outputs are substrate-reflectance images corresponding to the input bands and a greyscale depth image. The method has been applied in the analysis of Landsat TM data at Hamelin Pool in Shark Bay, Western Australia. Algorithm-derived substrate reflectance images for Landsat TM bands 1, 2, and 3 combined in color represent the optimum enhancement for mapping or classifying substrate types. As a result, this color image successfully delineated features which were obscured in the raw data, such as the distributions of sea-grasses, microbial mats, and sandy areas. 19 refs.
Multispectral image restoration of historical documents based on LAAMs and mathematical morphology
NASA Astrophysics Data System (ADS)
Lechuga-S., Edwin; Valdiviezo-N., Juan C.; Urcid, Gonzalo
2014-09-01
This research introduces an automatic technique designed for the digital restoration of the damaged parts of historical documents. For this purpose an imaging spectrometer is used to acquire a set of images in the wavelength interval from 400 to 1000 nm. Assuming the presence of linearly mixed spectral pixels registered in the multispectral image, our technique uses two lattice autoassociative memories to extract the set of pure pigments composing a given document. Through a spectral unmixing analysis, our method produces fractional abundance maps indicating the distribution of each pigment in the scene. These maps are then used to locate cracks and holes in the document under study. The restoration process is performed by the application of a region-filling algorithm, based on morphological dilation, followed by a color interpolation to restore the original appearance of the filled areas. This procedure has been successfully applied to the analysis and restoration of three multispectral data sets: two corresponding to artificially superimposed scripts and real data acquired from a Mexican pre-Hispanic codex, whose restoration results are presented.
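A minimal sketch of region filling by iterative morphological dilation, the classical formulation that such a restoration step is based on, is shown below; the seed selection and structuring element are illustrative assumptions, and the subsequent colour interpolation is not shown.

```python
# Sketch: fill a region enclosed by a binary boundary by iterating
# dilation constrained to the complement of the boundary.
import numpy as np
from scipy import ndimage

def fill_region(boundary_mask, seed_rc):
    """boundary_mask: bool array, True on boundary (e.g. crack outline) pixels.
    seed_rc: (row, col) of a pixel known to lie inside the hole."""
    structure = np.ones((3, 3), dtype=bool)           # 8-connected dilation
    filled = np.zeros_like(boundary_mask, dtype=bool)
    filled[seed_rc] = True
    complement = ~boundary_mask
    while True:
        grown = ndimage.binary_dilation(filled, structure) & complement
        if np.array_equal(grown, filled):             # convergence
            return filled | boundary_mask             # filled region + boundary
        filled = grown
```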
Spectral mixture modeling: Further analysis of rock and soil types at the Viking Lander sites
NASA Technical Reports Server (NTRS)
Adams, John B.; Smith, Milton O.
1987-01-01
A new image processing technique was applied to Viking Lander multispectral images. Spectral endmembers were defined that included soil, rock and shade. Mixtures of these endmembers were found to account for nearly all the spectral variance in a Viking Lander image.
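A standard linear spectral mixture inversion of the kind described, assuming known endmember spectra (e.g. soil, rock, shade) and a sum-to-one constraint, can be sketched as follows; this is a generic formulation, not the authors' code.

```python
# Sketch: per-pixel linear unmixing via an augmented least-squares system
# that softly enforces the fractions summing to one.
import numpy as np

def unmix_pixel(pixel, endmembers, weight=100.0):
    """pixel: (bands,) spectrum; endmembers: (bands, n_endmembers) matrix."""
    bands, n_em = endmembers.shape
    A = np.vstack([endmembers, weight * np.ones((1, n_em))])  # sum-to-one row
    b = np.concatenate([pixel, [weight]])
    fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
    return fractions       # e.g. [soil, rock, shade] fractions for this pixel
```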
Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri
2014-01-01
In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
Tissue classification for laparoscopic image understanding based on multispectral texture analysis
NASA Astrophysics Data System (ADS)
Zhang, Yan; Wirkert, Sebastian J.; Iszatt, Justin; Kenngott, Hannes; Wagner, Martin; Mayer, Benjamin; Stock, Christian; Clancy, Neil T.; Elson, Daniel S.; Maier-Hein, Lena
2016-03-01
Intra-operative tissue classification is one of the prerequisites for providing context-aware visualization in computer-assisted minimally invasive surgeries. As many anatomical structures are difficult to differentiate in conventional RGB medical images, we propose a classification method based on multispectral image patches. In a comprehensive ex vivo study we show (1) that multispectral imaging data is superior to RGB data for organ tissue classification when used in conjunction with widely applied feature descriptors and (2) that combining the tissue texture with the reflectance spectrum improves the classification performance. Multispectral tissue analysis could thus evolve as a key enabling technique in computer-assisted laparoscopy.
NASA Astrophysics Data System (ADS)
Wu, Yu; Zheng, Lijuan; Xie, Donghai; Zhong, Ruofei
2017-07-01
In this study, extended morphological attribute profiles (EAPs) and independent component analysis (ICA) were combined for feature extraction from high-resolution multispectral satellite remote sensing images, and the regularized least squares (RLS) approach with the radial basis function (RBF) kernel was further applied for the classification. Based on the two major independent components, geometrical features were extracted using the EAPs method. Three morphological attributes were calculated and extracted for each independent component: area, standard deviation, and moment of inertia. The extracted geometrical features were then classified using both the RLS approach and the commonly used LIB-SVM library of support vector machines. WorldView-3 and Chinese GF-2 multispectral images were tested, and the results showed that the features extracted by EAPs and ICA can effectively improve the accuracy of high-resolution multispectral image classification, 2% higher than the EAPs with principal component analysis (PCA) method, and 6% higher than APs applied to the original high-resolution multispectral data. The results also suggest that both the GURLS and LIB-SVM libraries are well suited for multispectral remote sensing image classification. The GURLS library is easy to use, with automatic parameter selection, but its computation time may be longer than that of the LIB-SVM library. This study should be helpful for the classification of high-resolution multispectral satellite remote sensing images.
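For readers unfamiliar with RLS, a generic kernel formulation with an RBF kernel is sketched below; it stands in for, but is not, the GURLS implementation, and the hyperparameters (gamma, lam) are placeholders.

```python
# Sketch: regularized least squares (kernel ridge) classification with an
# RBF kernel, using one-vs-rest +/-1 targets.
import numpy as np

def rbf_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rls_train(X, y, n_classes, gamma=0.5, lam=1e-3):
    K = rbf_kernel(X, X, gamma)
    Y = -np.ones((len(y), n_classes))
    Y[np.arange(len(y)), y] = 1.0                    # one-vs-rest targets
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), Y)
    return alpha

def rls_predict(X_train, alpha, X_test, gamma=0.5):
    scores = rbf_kernel(X_test, X_train, gamma) @ alpha
    return scores.argmax(axis=1)                     # predicted class labels
```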
NASA Astrophysics Data System (ADS)
Wicaksono, Pramaditya; Salivian Wisnu Kumara, Ignatius; Kamal, Muhammad; Afif Fauzan, Muhammad; Zhafarina, Zhafirah; Agus Nurswantoro, Dwi; Noviaris Yogyantoro, Rifka
2017-12-01
Although spectrally different, seagrass species may not be able to be mapped from multispectral remote sensing images due to the limitations of their spectral resolution. Therefore, it is important to quantitatively assess the possibility of mapping seagrass species using multispectral images by resampling seagrass species spectra to multispectral bands. Seagrass species spectra were measured on harvested seagrass leaves. The spectral resolution of the multispectral images used in this research was adopted from WorldView-2, Quickbird, Sentinel-2A, ASTER VNIR, and Landsat 8 OLI. These images are widely available and provide a good representative baseline for previous or future remote sensing images. Seagrass species considered in this research are Enhalus acoroides (Ea), Thalassodendron ciliatum (Tc), Thalassia hemprichii (Th), Cymodocea rotundata (Cr), Cymodocea serrulata (Cs), Halodule uninervis (Hu), Halodule pinifolia (Hp), Syringodum isoetifolium (Si), Halophila ovalis (Ho), and Halophila minor (Hm). The multispectral resampling analysis indicates that the resampled spectra exhibit a shape and pattern similar to the original spectra but less precise, and they lose the unique absorption features of the seagrass species. Relying on spectral bands alone, multispectral imagery is not effective in mapping these seagrass species individually, which is shown by the poor and inconsistent results of the Spectral Angle Mapper (SAM) classification technique in classifying seagrass species using the species spectra as pure endmembers. Only Sentinel-2A produced an acceptable classification result using SAM.
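The band resampling and SAM scoring described above can be sketched generically as follows; the band edges are placeholders rather than the sensors' actual spectral response functions.

```python
# Sketch: band-average resampling of a field spectrum to broad sensor bands,
# and the spectral angle mapper (SAM) score used for classification.
import numpy as np

def resample_to_bands(wavelengths, reflectance, band_edges):
    """band_edges: list of (lo_nm, hi_nm) tuples; returns one value per band."""
    out = []
    for lo, hi in band_edges:
        sel = (wavelengths >= lo) & (wavelengths <= hi)
        out.append(reflectance[sel].mean())
    return np.array(out)

def spectral_angle(a, b):
    """Angle (radians) between two spectra; smaller = more similar."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(pixel, endmember_spectra):
    angles = [spectral_angle(pixel, e) for e in endmember_spectra]
    return int(np.argmin(angles))        # index of the closest endmember
```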
A new hyperspectral image compression paradigm based on fusion
NASA Astrophysics Data System (ADS)
Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto
2016-10-01
The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on the satellite which carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware, with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they must be fused using a fusion algorithm for hyperspectral and multispectral images, in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on-board, becomes very simple, while the fusion process used to reconstruct the image is the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image. The simulation results corroborate the benefits of the proposed methodology.
Clancy, Neil T.; Stoyanov, Danail; James, David R. C.; Di Marco, Aimee; Sauvage, Vincent; Clark, James; Yang, Guang-Zhong; Elson, Daniel S.
2012-01-01
Sequential multispectral imaging is an acquisition technique that involves collecting images of a target at different wavelengths, to compile a spectrum for each pixel. In surgical applications it suffers from low illumination levels and motion artefacts. A three-channel rigid endoscope system has been developed that allows simultaneous recording of stereoscopic and multispectral images. Salient features on the tissue surface may be tracked during the acquisition in the stereo cameras and, using multiple camera triangulation techniques, this information used to align the multispectral images automatically even though the tissue or camera is moving. This paper describes a detailed validation of the set-up in a controlled experiment before presenting the first in vivo use of the device in a porcine minimally invasive surgical procedure. Multispectral images of the large bowel were acquired and used to extract the relative concentration of haemoglobin in the tissue despite motion due to breathing during the acquisition. Using the stereoscopic information it was also possible to overlay the multispectral information on the reconstructed 3D surface. This experiment demonstrates the ability of this system for measuring blood perfusion changes in the tissue during surgery and its potential use as a platform for other sequential imaging modalities. PMID:23082296
NASA Astrophysics Data System (ADS)
Cheng, Jun-Hu; Jin, Huali; Liu, Zhiwei
2018-01-01
The feasibility of developing a multispectral imaging method using important wavelengths from hyperspectral images selected by genetic algorithm (GA), successive projection algorithm (SPA) and regression coefficient (RC) methods for modeling and predicting protein content in peanut kernel was investigated for the first time. Partial least squares regression (PLSR) calibration model was established between the spectral data from the selected optimal wavelengths and the reference measured protein content ranged from 23.46% to 28.43%. The RC-PLSR model established using eight key wavelengths (1153, 1567, 1972, 2143, 2288, 2339, 2389 and 2446 nm) showed the best predictive results with the coefficient of determination of prediction (R2P) of 0.901, and root mean square error of prediction (RMSEP) of 0.108 and residual predictive deviation (RPD) of 2.32. Based on the obtained best model and image processing algorithms, the distribution maps of protein content were generated. The overall results of this study indicated that developing a rapid and online multispectral imaging system using the feature wavelengths and PLSR analysis is potential and feasible for determination of the protein content in peanut kernels.
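A minimal PLSR calibration sketch in the spirit of the model described is given below (scikit-learn's PLSRegression standing in for the paper's RC-PLSR model; the wavelength selection step is not shown, and the number of components is a placeholder).

```python
# Sketch: PLSR calibration between selected-waveband spectra and measured
# protein content, reporting R^2, RMSEP and RPD as in the abstract above.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error, r2_score

def fit_plsr(X_train, y_train, X_test, y_test, n_components=5):
    pls = PLSRegression(n_components=n_components)
    pls.fit(X_train, y_train)
    y_pred = pls.predict(X_test).ravel()
    rmsep = np.sqrt(mean_squared_error(y_test, y_pred))
    rpd = np.std(y_test) / rmsep          # residual predictive deviation
    return pls, r2_score(y_test, y_pred), rmsep, rpd
```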
NASA Astrophysics Data System (ADS)
Cho, Byoung-Kwan; Kim, Moon S.; Chen, Yud-Ren
2005-11-01
Emerging concerns about safety and security in current mass production of food products necessitate rapid and reliable inspection for contaminant-free products. Diluted fecal residues on poultry processing plant equipment surface, not easily discernable from water by human eye, are contamination sources for poultry carcasses. Development of sensitive detection methods for fecal residues is essential to ensure safe production of poultry carcasses. Hyperspectral imaging techniques have shown good potential for detecting of the presence of fecal and other biological substances on food and processing equipment surfaces. In this study, use of high spatial resolution hyperspectral reflectance and fluorescence imaging (with UV-A excitation) is presented as a tool for selecting a few multispectral bands to detect diluted fecal and ingesta residues on materials used for manufacturing processing equipment. Reflectance and fluorescence imaging methods were compared for potential detection of a range of diluted fecal residues on the surfaces of processing plant equipment. Results showed that low concentrations of poultry feces and ingesta, diluted up to 1:100 by weight with double distilled water, could be detected using hyperspectral fluorescence images with an accuracy of 97.2%. Spectral bands determined in this study could be used for developing a real-time multispectral inspection device for detection of harmful organic residues on processing plant equipment.
Efficient single-pixel multispectral imaging via non-mechanical spatio-spectral modulation.
Li, Ziwei; Suo, Jinli; Hu, Xuemei; Deng, Chao; Fan, Jingtao; Dai, Qionghai
2017-01-27
Combining spectral imaging with compressive sensing (CS) enables efficient data acquisition by fully utilizing the intrinsic redundancies in natural images. Current compressive multispectral imagers, which are mostly based on array sensors (e.g., CCD or CMOS), suffer from limited spectral range and relatively low photon efficiency. To address these issues, this paper reports a multispectral imaging scheme with a single-pixel detector. Inspired by the spatial resolution redundancy of current spatial light modulators (SLMs) relative to the target reconstruction, we design an all-optical spectral splitting device to spatially split the light emitted from the object into several counterparts with different spectra. Separated spectral channels are spatially modulated simultaneously with individual codes by an SLM. This no-moving-part modulation ensures a stable and fast system, and the spatial multiplexing ensures efficient acquisition. A proof-of-concept setup is built and validated for 8-channel multispectral imaging within the 420-720 nm wavelength range on both macro and micro objects, showing the potential for an efficient multispectral imager in macroscopic and biomedical applications.
Novel approach to multispectral image compression on the Internet
NASA Astrophysics Data System (ADS)
Zhu, Yanqiu; Jin, Jesse S.
2000-10-01
Still image coding techniques such as JPEG have always been applied to intra-plane images. Coding fidelity is always utilized in measuring the performance of intra-plane coding methods. In many imaging applications, it is increasingly necessary to deal with multi-spectral images, such as color images. In this paper, a novel approach to multi-spectral image compression is proposed that uses transformations among planes for further compression of the spectral planes. Moreover, a mechanism for introducing the human visual system into the transformation is provided for exploiting psychovisual redundancy. The new technique for multi-spectral image compression, which is designed to be compatible with the JPEG standard, is demonstrated on extracting correlation among planes based on the human visual system. A high measure of compactness in the data representation and compression can be achieved, given the power of the scheme.
Detecting early stage pressure ulcer on dark skin using multispectral imager
NASA Astrophysics Data System (ADS)
Yi, Dingrong; Kong, Linghua; Sprigle, Stephen; Wang, Fengtao; Wang, Chao; Liu, Fuhan; Adibi, Ali; Tummala, Rao
2010-02-01
We are developing a handheld multispectral imaging device to non-invasively inspect stage I pressure ulcers in darkly pigmented skin without the need to touch the patient's skin. This paper reports some preliminary test results from a proof-of-concept prototype. It also discusses the innovation's impact on traditional multispectral imaging technologies and the fields that will potentially benefit from it.
The development of machine technology processing for earth resource survey
NASA Technical Reports Server (NTRS)
Landgrebe, D. A.
1970-01-01
The following technologies are considered for automatic processing of earth resources data: (1) registration of multispectral and multitemporal images, (2) digital image display systems, (3) data system parameter effects on satellite remote sensing systems, and (4) data compression techniques based on spectral redundancy. The importance of proper spectral band and compression algorithm selections is pointed out.
Solid state high resolution multi-spectral imager CCD test phase
NASA Technical Reports Server (NTRS)
1973-01-01
The program consisted of measuring the performance characteristics of charge coupled linear imaging devices, and a study defining a multispectral imaging system employing advanced solid state photodetection techniques.
NASA Astrophysics Data System (ADS)
Matsui, Daichi; Ishii, Katsunori; Awazu, Kunio
2015-07-01
Atherosclerosis is a primary cause of critical ischemic diseases such as heart infarction or stroke. A method that can provide detailed information about the stability of atherosclerotic plaques is required. We focused on spectroscopic techniques that could evaluate the chemical composition of lipid in plaques. A novel angioscope using multispectral imaging at wavelengths around 1200 nm for quantitative evaluation of atherosclerotic plaques was developed. The angioscope consists of a halogen lamp, an indium gallium arsenide (InGaAs) camera, 3 optical band-pass filters transmitting wavelengths of 1150, 1200, and 1300 nm, an image fiber with a 0.7 mm outer diameter, and an irradiation fiber which consists of 7 multimode fibers. Atherosclerotic plaque phantoms with 100, 60, and 20 vol.% of lipid were prepared and measured by the multispectral angioscope. The acquired datasets were processed by the spectral angle mapper (SAM) method. As a result, simulated plaque areas in atherosclerotic plaque phantoms that could not be detected in an angioscopic visible image could be clearly enhanced. In addition, quantitative evaluation of atherosclerotic plaque phantoms based on the lipid volume fractions was performed up to 20 vol.%. These results show the potential of a multispectral angioscope at wavelengths around 1200 nm for quantitative evaluation of the stability of atherosclerotic plaques.
Design and fabrication of multispectral optics using expanded glass map
NASA Astrophysics Data System (ADS)
Bayya, Shyam; Gibson, Daniel; Nguyen, Vinh; Sanghera, Jasbinder; Kotov, Mikhail; Drake, Gryphon; Deegan, John; Lindberg, George
2015-06-01
As the desire to have compact multispectral imagers in various DoD platforms is growing, the dearth of multispectral optics is widely felt. With the limited number of material choices for optics, these multispectral imagers are often very bulky and impractical on several weight sensitive platforms. To address this issue, NRL has developed a large set of unique infrared glasses that transmit from 0.9 to > 14 μm in wavelength and expand the glass map for multispectral optics with refractive indices from 2.38 to 3.17. They show a large spread in dispersion (Abbe number) and offer some unique solutions for multispectral optics designs. The new NRL glasses can be easily molded and also fused together to make bonded doublets. A Zemax compatible glass file has been created and is available upon request. In this paper we present some designs, optics fabrication and imaging, all using NRL materials.
A multispectral photon-counting double random phase encoding scheme for image authentication.
Yi, Faliu; Moon, Inkyu; Lee, Yeon H
2014-05-20
In this paper, we propose a new method for color image-based authentication that combines multispectral photon-counting imaging (MPCI) and double random phase encoding (DRPE) schemes. The sparsely distributed information from MPCI and the stationary white noise signal from DRPE make intruder attacks difficult. In this authentication method, the original multispectral RGB color image is down-sampled into a Bayer image. The three types of color samples (red, green and blue color) in the Bayer image are encrypted with DRPE and the amplitude part of the resulting image is photon counted. The corresponding phase information that has nonzero amplitude after photon counting is then kept for decryption. Experimental results show that the retrieved images from the proposed method do not visually resemble their original counterparts. Nevertheless, the original color image can be efficiently verified with statistical nonlinear correlations. Our experimental results also show that different interpolation algorithms applied to Bayer images result in different verification effects for multispectral RGB color images.
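The DRPE core of the scheme is a classical two-mask Fourier operation; a minimal single-channel sketch is given below (the Bayer multiplexing and photon-counting steps described above are omitted, and the seed-based mask generation is an assumption for reproducibility).

```python
# Sketch: classical double random phase encoding (DRPE) for one channel.
import numpy as np

def drpe_encrypt(img, seed=0):
    rng = np.random.default_rng(seed)
    phase1 = np.exp(2j * np.pi * rng.random(img.shape))   # input-plane mask
    phase2 = np.exp(2j * np.pi * rng.random(img.shape))   # Fourier-plane mask
    spectrum = np.fft.fft2(img * phase1)
    encrypted = np.fft.ifft2(spectrum * phase2)            # white-noise-like field
    return encrypted, phase1, phase2

def drpe_decrypt(encrypted, phase1, phase2):
    spectrum = np.fft.fft2(encrypted) * np.conj(phase2)
    return np.fft.ifft2(spectrum) * np.conj(phase1)         # recovers img
```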
Semiconductor Laser Multi-Spectral Sensing and Imaging
Le, Han Q.; Wang, Yang
2010-01-01
Multi-spectral laser imaging is a technique that can offer a combination of the laser capability of accurate spectral sensing with the desirable features of passive multispectral imaging. The technique can be used for detection, discrimination, and identification of objects by their spectral signature. This article describes and reviews the development and evaluation of semiconductor multi-spectral laser imaging systems. Although the method is certainly not specific to any laser technology, the use of semiconductor lasers is significant with respect to practicality and affordability. More relevantly, semiconductor lasers have their own characteristics; they offer excellent wavelength diversity but usually with modest power. Thus, system design and engineering issues are analyzed for approaches and trade-offs that can make the best use of semiconductor laser capabilities in multispectral imaging. A few systems were developed and the technique was tested and evaluated on a variety of natural and man-made objects. It was shown capable of high spectral resolution imaging which, unlike non-imaging point sensing, allows detecting and discriminating objects of interest even without a priori spectroscopic knowledge of the targets. Examples include material and chemical discrimination. It was also shown capable of dealing with the complexity of interpreting diffuse scattered spectral images and produced results that could otherwise be ambiguous with conventional imaging. Examples with glucose and spectral imaging of drug pills were discussed. Lastly, the technique was shown with conventional laser spectroscopy such as wavelength modulation spectroscopy to image a gas (CO). These results suggest the versatility and power of multi-spectral laser imaging, which can be practical with the use of semiconductor lasers. PMID:22315555
VIS-NIR multispectral synchronous imaging pyrometer for high-temperature measurements.
Fu, Tairan; Liu, Jiangfan; Tian, Jibin
2017-06-01
A visible-infrared multispectral synchronous imaging pyrometer was developed for simultaneous, multispectral, two-dimensional high temperature measurements. The multispectral image pyrometer uses a prism-separation construction in the spectral range of 650-950 nm and multi-sensor fusion of three CCD sensors for high-temperature measurements. The pyrometer has 650-750 nm, 750-850 nm, and 850-950 nm channels, all with the same optical path. The wavelength choice for each channel is flexible; three center wavelengths (700 nm, 810 nm, and 920 nm), each with a full width at half maximum of 3 nm, were used here. The three image sensors were precisely aligned to within a quarter pixel of each other by micro-mechanical adjustments, to avoid spectral artifacts. The pyrometer was calibrated with a standard blackbody source, and the temperature measurement uncertainty was within 0.21 °C-0.99 °C over the temperature range of 600 °C-1800 °C for the blackbody measurements. The pyrometer was then used to measure the leading edge temperatures of a ceramics model exposed to a high-enthalpy plasma aerodynamic heating environment to verify the system applicability. The measured temperature ranges are 701-991 °C, 701-1134 °C, and 701-834 °C at the heating transient, steady state, and cooling transient times. A significant temperature gradient (170 °C/mm) was observed away from the leading edge facing the plasma jet during the steady state heating time. The temperature distribution on the surface is non-uniform throughout the aerodynamic heating process, but becomes more uniform after the heater is shut down and the experimental model cools naturally. This result shows that the multispectral simultaneous image measurement mode provides a wider temperature range for one imaging measurement of high spatial temperature gradients in transient applications.
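As background to the multi-channel measurement, graybody ratio (two-colour) pyrometry under the Wien approximation can be sketched as below; this is a textbook formulation, not the instrument's calibrated processing chain, and the wavelength values in the usage note are only examples.

```python
# Sketch: temperature map from the ratio of two radiance images under the
# Wien approximation, assuming equal emissivity at both wavelengths.
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def ratio_temperature(I1, I2, lam1_m, lam2_m):
    """I1, I2: radiance images at wavelengths lam1_m, lam2_m (in metres)."""
    ratio = I1.astype(np.float64) / np.maximum(I2.astype(np.float64), 1e-12)
    denom = np.log(ratio * (lam1_m / lam2_m) ** 5)
    return C2 * (1.0 / lam2_m - 1.0 / lam1_m) / denom   # temperature in kelvin

# Example: T = ratio_temperature(img_700, img_810, 700e-9, 810e-9)
```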
Digital holographic 3D imaging spectrometry (a review)
NASA Astrophysics Data System (ADS)
Yoshimori, Kyu
2017-09-01
This paper reviews recent progress in the digital holographic 3D imaging spectrometry. The principle of this method is a marriage of incoherent holography and Fourier transform spectroscopy. Review includes principle, procedure of signal processing and experimental results to obtain a multispectral set of 3D images for spatially incoherent, polychromatic objects.
Development of fluorescence based handheld imaging devices for food safety inspection
NASA Astrophysics Data System (ADS)
Lee, Hoyoung; Kim, Moon S.; Chao, Kuanglin; Lefcourt, Alan M.; Chan, Diane E.
2013-05-01
For sanitation inspection in food processing environment, fluorescence imaging can be a very useful method because many organic materials reveal unique fluorescence emissions when excited by UV or violet radiation. Although some fluorescence-based automated inspection instrumentation has been developed for food products, there remains a need for devices that can assist on-site inspectors performing visual sanitation inspection of the surfaces of food processing/handling equipment. This paper reports the development of an inexpensive handheld imaging device designed to visualize fluorescence emissions and intended to help detect the presence of fecal contaminants, organic residues, and bacterial biofilms at multispectral fluorescence emission bands. The device consists of a miniature camera, multispectral (interference) filters, and high power LED illumination. With WiFi communication, live inspection images from the device can be displayed on smartphone or tablet devices. This imaging device could be a useful tool for assessing the effectiveness of sanitation procedures and for helping processors to minimize food safety risks or determine potential problem areas. This paper presents the design and development including evaluation and optimization of the hardware components of the imaging devices.
Multispectral Live-Cell Imaging.
Cohen, Sarah; Valm, Alex M; Lippincott-Schwartz, Jennifer
2018-06-01
Fluorescent proteins and vital dyes are invaluable tools for studying dynamic processes within living cells. However, the ability to distinguish more than a few different fluorescent reporters in a single sample is limited by the spectral overlap of available fluorophores. Here, we present a protocol for imaging live cells labeled with six fluorophores simultaneously. A confocal microscope with a spectral detector is used to acquire images, and linear unmixing algorithms are applied to identify the fluorophores present in each pixel of the image. We describe the application of this method to visualize the dynamics of six different organelles, and to quantify the contacts between organelles. However, this method can be used to image any molecule amenable to tagging with a fluorescent probe. Thus, multispectral live-cell imaging is a powerful tool for systems-level analysis of cellular organization and dynamics. © 2018 by John Wiley & Sons, Inc.
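As a rough illustration of the linear unmixing step described in this abstract (a generic formulation, not the authors' protocol or any specific microscope vendor's software), each pixel's lambda stack can be fit as a non-negative combination of reference emission spectra measured from singly labeled controls; the array names and shapes below are assumptions made for the sketch.

```python
# Minimal per-pixel linear spectral unmixing sketch (assumed formulation).
# `stack` is a lambda stack of shape (rows, cols, n_channels); `reference_spectra`
# has shape (n_channels, n_fluorophores), one column per fluorophore.
import numpy as np
from scipy.optimize import nnls

def unmix(stack, reference_spectra):
    rows, cols, n_channels = stack.shape
    n_fluors = reference_spectra.shape[1]
    abundances = np.zeros((rows, cols, n_fluors))
    pixels = stack.reshape(-1, n_channels)
    out = abundances.reshape(-1, n_fluors)       # view into `abundances`
    for i, spectrum in enumerate(pixels):
        # Non-negative least squares keeps the abundances physically meaningful.
        out[i], _ = nnls(reference_spectra, spectrum)
    return abundances
```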
An airborne thematic thermal infrared and electro-optical imaging system
NASA Astrophysics Data System (ADS)
Sun, Xiuhong; Shu, Peter
2011-08-01
This paper describes an advanced Airborne Thematic Thermal InfraRed and Electro-Optical Imaging System (ATTIREOIS) and its potential applications. The ATTIREOIS sensor payload consists of two sets of advanced Focal Plane Arrays (FPAs) - a broadband Thermal InfraRed Sensor (TIRS) and a four (4) band Multispectral Electro-Optical Sensor (MEOS) - to approximate Landsat ETM+ bands 1, 2, 3, 4, and 6, and LDCM bands 2, 3, 4, 5, and 10+11. The airborne TIRS is a 3-axis stabilized payload capable of providing 3D photogrammetric images with a 1,850-pixel swath width via pushbroom operation. MEOS has a total of 116 million simultaneous sensor counts capable of providing 3 cm spatial resolution multispectral orthophotos for continuous airborne mapping. ATTIREOIS is a complete, standalone, easy-to-use portable imaging instrument for light aerial vehicle deployment. Its miniaturized backend data system operates all ATTIREOIS imaging sensor components, an INS/GPS, and an e-Gimbal™ Control Electronic Unit (ECU) with a data throughput of 300 Megabytes/sec. The backend provides advanced onboard processing, performing autonomous raw sensor imagery development, TIRS image track-recovery reconstruction, LWIR/VNIR multi-band co-registration, and photogrammetric image processing. With geometric optics and boresight calibrations, the ATTIREOIS data products are directly georeferenced with an accuracy of approximately one meter. A prototype ATTIREOIS has been configured, and its sample LWIR/EO image data will be presented. Potential applications of ATTIREOIS include: 1) providing timely, cost-effective, precisely and directly georeferenced surface emissive and solar reflective LWIR/VNIR multispectral images via a private Google Earth Globe to enhance NASA's Earth science research capabilities; and 2) underflying satellites to support satellite measurement calibration and validation observations.
Computational multispectral video imaging [Invited].
Wang, Peng; Menon, Rajesh
2018-01-01
Multispectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information to a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrated spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
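To make the "regularization-based linear algebra" step concrete, a minimal sketch of a Tikhonov-regularized inversion is given below. This is an assumed, generic formulation (not the authors' calibration or reconstruction code); `A` stands for a calibrated sensing matrix that maps the multispectral scene vector to the coded sensor measurements.

```python
# Minimal Tikhonov-regularized inversion sketch (assumed formulation).
# `A` maps the unknown multispectral vector x to sensor measurements y, y ≈ A @ x.
import numpy as np

def tikhonov_invert(A, y, alpha=1e-2):
    # Solve min_x ||A x - y||^2 + alpha * ||x||^2 via the normal equations.
    n = A.shape[1]
    lhs = A.T @ A + alpha * np.eye(n)
    rhs = A.T @ y
    return np.linalg.solve(lhs, rhs)
```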
Visual enhancement of images of natural resources: Applications in geology
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Neto, G.; Araujo, E. O.; Mascarenhas, N. D. A.; Desouza, R. C. M.
1980-01-01
The principal components technique for use in multispectral scanner LANDSAT data processing results in optimum dimensionality reduction. A powerful tool for MSS image enhancement, the method provides a maximum impression of terrain ruggedness; this fact makes the technique well suited for geological analysis.
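A minimal sketch of a principal-components transform of a multi-band scene follows; it is a generic illustration of the technique named above, not the processing chain used in the cited study. The array layout is an assumption for the example.

```python
# Principal-components transform of a multispectral scene (illustrative only).
# `bands` has shape (n_bands, rows, cols).
import numpy as np

def principal_components(bands):
    n_bands = bands.shape[0]
    X = bands.reshape(n_bands, -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)           # remove per-band means
    cov = np.cov(X)                               # n_bands x n_bands covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]             # sort by decreasing variance
    pcs = eigvecs[:, order].T @ X                 # project onto eigenvectors
    return pcs.reshape(bands.shape), eigvals[order]
```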
NASA Astrophysics Data System (ADS)
Gao, M.; Li, J.
2018-04-01
Geometric correction is an important preprocessing step in the application of GF4 PMS imagery. Geometric correction based on manual selection of ground control points is time-consuming and laborious. The more common method, based on a reference image, is automatic image registration. This method involves several steps and parameters, and for the multi-spectral sensor GF4 PMS it is necessary to identify the best combination of them. This study mainly focuses on the following issues: the necessity of Rational Polynomial Coefficient (RPC) correction before automatic registration, the choice of base band for automatic registration, and the configuration of the GF4 PMS spatial resolution.
Topological anomaly detection performance with multispectral polarimetric imagery
NASA Astrophysics Data System (ADS)
Gartley, M. G.; Basener, W.
2009-05-01
Polarimetric imaging has demonstrated utility for increasing the contrast of manmade targets above natural background clutter. Manual detection of manmade targets in multispectral polarimetric imagery can be a challenging and subjective process for large datasets. Analyst exploitation may be improved by utilizing conventional anomaly detection algorithms such as RX. In this paper we examine the performance of a relatively new approach to anomaly detection, which leverages topology theory, applied to spectral polarimetric imagery. Detection results for manmade targets embedded in a complex natural background are presented for both the RX and Topological Anomaly Detection (TAD) approaches. We also present detailed results examining detection sensitivities relative to: (1) the number of spectral bands, (2) utilization of Stokes images versus intensity images, and (3) airborne versus spaceborne measurements.
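For readers unfamiliar with the RX baseline mentioned above, a minimal global RX (Reed-Xiaoli) anomaly detector is sketched below. It is a textbook formulation, not the implementation used in the paper; the cube layout is assumed for the example.

```python
# Minimal global RX anomaly detector sketch for a multi-band image cube
# of shape (rows, cols, n_bands); illustrative only.
import numpy as np

def rx_scores(cube):
    rows, cols, n_bands = cube.shape
    X = cube.reshape(-1, n_bands).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.pinv(cov)                 # pseudo-inverse for stability
    d = X - mu
    # Mahalanobis distance of every pixel from the global background statistics.
    scores = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    return scores.reshape(rows, cols)
```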
Hu, J H; Wang, Y; Cahill, P T
1997-01-01
This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been shown to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further coded using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zerotree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
Skin condition measurement by using multispectral imaging system (Conference Presentation)
NASA Astrophysics Data System (ADS)
Jung, Geunho; Kim, Sungchul; Kim, Jae Gwan
2017-02-01
There are a number of commercially available low-level light therapy (LLLT) devices on the market, and face whitening or wrinkle reduction is one of the targets of LLLT. Facial improvement can be assessed simply by visual observation, but that provides neither quantitative data nor sensitivity to subtle changes. Clinical diagnostic instruments such as the Mexameter can provide quantitative data, but they cost too much for home users. Therefore, we designed a low-cost multi-spectral imaging device by adding additional LEDs (470 nm, 640 nm, white LED, 905 nm) to a commercial USB microscope that has two LEDs (395 nm, 940 nm) as light sources. Among the various LLLT skin treatments, we focused on obtaining melanin and wrinkle information. For melanin index measurements, multi-spectral images of nevi were acquired and melanin index values from a color image (the conventional method) and from multi-spectral images were compared. The results showed that multi-spectral analysis of the melanin index can visualize nevi of different depth and concentration. A cross section of a skin wrinkle resembles a wedge, which contributes high-frequency components when the skin image is Fourier transformed into a spatial-frequency map. In that case, the entropy of the spatial-frequency map represents the frequency distribution, which is related to the amount and thickness of wrinkles. Entropy values from multi-spectral images can potentially separate the contribution of thin, shallow wrinkles from that of thick, deep wrinkles. From the results, we found that this low-cost multi-spectral imaging system could be beneficial for home users of LLLT by quantifying treatment efficacy.
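The entropy-of-spatial-frequency idea described above can be sketched in a few lines; this is an assumed reading of the abstract (FFT magnitude normalized to a probability distribution, then Shannon entropy), not the authors' actual pipeline.

```python
# Minimal sketch of a spectral-entropy wrinkle metric (assumed formulation).
# A skin patch is Fourier transformed, the magnitude spectrum is normalized to a
# probability distribution, and its Shannon entropy summarizes how much energy
# sits in high-frequency (wrinkle-related) content.
import numpy as np

def spectral_entropy(patch):
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
    p = spectrum / spectrum.sum()
    p = p[p > 0]                                  # avoid log(0)
    return float(-(p * np.log2(p)).sum())
```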
Improving multispectral satellite image compression using onboard subpixel registration
NASA Astrophysics Data System (ADS)
Albinet, Mathieu; Camarero, Roberto; Isnard, Maxime; Poulet, Christophe; Perret, Jokin
2013-09-01
Future CNES Earth observation missions will have to deal with an ever increasing telemetry data rate due to improvements in resolution and the addition of spectral bands. Current CNES image compressors implement a discrete wavelet transform (DWT) followed by a bit plane encoder (BPE), but only on a mono-spectral basis, and do not profit from the multispectral redundancy of the observed scenes. Recent CNES studies have proven a substantial gain in the achievable compression ratio, +20% to +40% on selected scenarios, by implementing a multispectral compression scheme based on a Karhunen-Loeve transform (KLT) followed by the classical DWT+BPE. But such results can be achieved only on perfectly registered bands; a registration error as small as 0.5 pixel ruins all the benefits of multispectral compression. In this work, we first study the possibility of implementing multi-band subpixel onboard registration based on registration grids generated on-the-fly by the satellite attitude control system and simplified resampling and interpolation techniques. Indeed, band registration is usually performed on the ground using sophisticated techniques too computationally intensive for onboard use. This fully quantized algorithm is tuned to meet acceptable registration performance within stringent image quality criteria, with the objective of onboard real-time processing. In a second part, we describe an FPGA implementation developed to evaluate the design complexity and, by extrapolation, the data rate achievable on a space-qualified ASIC. Finally, we present the impact of this approach on the processing chain, not only onboard but also on the ground, and its impact on the design of the instrument.
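As a simple illustration of what "subpixel resampling and interpolation" means in this context, the sketch below shifts one band by a fractional pixel offset with bilinear interpolation so it registers to a reference band. This is a generic ground-style example, not the fully quantized, grid-driven onboard algorithm described in the abstract; the offsets are hypothetical.

```python
# Minimal subpixel band-to-band resampling sketch (assumed approach).
import numpy as np
from scipy.ndimage import shift

def register_band(band, dy, dx):
    # order=1 -> bilinear interpolation; edges padded with the nearest value.
    return shift(band.astype(float), (dy, dx), order=1, mode='nearest')

# Example: correct a 0.5-pixel offset of the kind that would otherwise
# ruin the multispectral compression gain mentioned above.
# registered = register_band(green_band, 0.5, -0.25)
```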
Multispectral image enhancement processing for microsat-borne imager
NASA Astrophysics Data System (ADS)
Sun, Jianying; Tan, Zheng; Lv, Qunbo; Pei, Linlin
2017-10-01
With the rapid development of remote sensing imaging technology, micro satellites, a class of small spacecraft, have appeared during the past few years. A good many studies have contributed to miniaturizing satellites for imaging purposes. Generally speaking, micro satellites weigh less than 100 kilograms, or even less than 50 kilograms, roughly the size of a common miniature refrigerator. However, the optical system design can hardly be ideal due to the satellite volume and weight limitations. In most cases, the unprocessed data captured by the imager on the microsatellite cannot meet application needs. Spatial resolution is the key problem: the higher the spatial resolution of the images, the wider the range of remote sensing applications. Consequently, how to utilize super-resolution (SR) and image fusion to enhance image quality deserves study. Our team, the Key Laboratory of Computational Optical Imaging Technology, Academy of Opto-Electronics, is devoted to designing high-performance microsat-borne imagers and high-efficiency image processing algorithms. This paper addresses a multispectral image enhancement framework for space-borne imagery, combining pan-sharpening and super-resolution techniques to deal with the limited spatial resolution of microsatellites. We test remote sensing images acquired by the CX6-02 satellite and report the SR performance. The experiments illustrate that the proposed approach provides high-quality images.
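For context on the pan-sharpening half of such a framework, a minimal Brovey-style sketch is given below. Brovey is one common generic method; the abstract does not state which fusion algorithm the authors use, and the array names here are assumptions.

```python
# Minimal Brovey-style pan-sharpening sketch (a generic method, illustrative only).
# `ms` is the multispectral cube already upsampled to the panchromatic grid,
# shape (n_bands, rows, cols); `pan` is the high-resolution panchromatic band.
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    intensity = ms.mean(axis=0)                   # crude intensity estimate
    gain = pan / (intensity + eps)                # per-pixel injection gain
    return ms * gain[np.newaxis, :, :]
```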
Principles of computer processing of Landsat data for geologic applications
Taranik, James V.
1978-01-01
The main objectives of computer processing of Landsat data for geologic applications are to improve the display of image data to the analyst or to facilitate evaluation of the multispectral characteristics of the data. Interpretations of the data are made from enhanced and classified data by an analyst trained in geology. Image enhancement involves adjustments of brightness values for individual picture elements. Image classification involves determination of the brightness values of picture elements for a particular cover type. Histograms are used to display the range and frequency of occurrence of brightness values. Landsat-1 and -2 data are preprocessed at Goddard Space Flight Center (GSFC) to adjust for the detector response of the multispectral scanner (MSS). Adjustments are applied to minimize the effects of striping and to compensate for bad data lines, line segments, and lost individual pixel data. Because illumination conditions and landscape characteristics vary considerably and detector response changes with time, the radiometric adjustments applied at GSFC are seldom perfect and some detector striping remains in Landsat data. Rotation of the Earth under the satellite and movements of the satellite platform introduce geometric distortions in the data that must also be compensated for if image data are to be correctly displayed to the data analyst. Adjustments to Landsat data are made to compensate for variable solar illumination and for atmospheric effects. Geometric registration of Landsat data involves determination of the spatial location of a pixel in the output image and the determination of a new value for the pixel. The general objective of image enhancement is to optimize the display of the data to the analyst. Contrast enhancements are employed to expand the range of brightness values in Landsat data so that the data can be efficiently recorded in a manner desired by the analyst. Spatial frequency enhancements are designed to enhance boundaries between features that have subtle differences in brightness values. Ratioing tends to reduce the effects of topography and to emphasize changes in brightness values between two Landsat bands. Simulated natural color is produced for geologists so that the colors of materials on images appear similar to the colors of the actual materials in the field. Image classification of Landsat data involves both machine-assisted delineation of multispectral patterns in four-dimensional spectral space and identification of machine-delineated multispectral patterns that represent particular cover conditions. The geological information derived from an analysis of a multispectral classification is usually related to lithology.
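Two of the enhancements named above, a linear contrast stretch and a band ratio, are simple enough to sketch directly. These are generic textbook operations (the band choices are arbitrary examples), not the specific GSFC or analyst workflows described in the abstract.

```python
# Minimal sketches of a percentile contrast stretch and a band ratio.
import numpy as np

def contrast_stretch(band, low_pct=2, high_pct=98):
    lo, hi = np.percentile(band, [low_pct, high_pct])
    stretched = (band.astype(float) - lo) / (hi - lo)
    return np.clip(stretched, 0.0, 1.0)

def band_ratio(band_a, band_b, eps=1e-6):
    # Ratioing suppresses topographic shading common to both bands and
    # emphasizes spectral brightness differences between them.
    return band_a.astype(float) / (band_b.astype(float) + eps)
```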
Preliminary Impact Crater Dimensions on 433 Eros from the NEAR Laser Rangefinder and Imager
NASA Technical Reports Server (NTRS)
Barnouin-Jha, O. S.; Garvin, J. B.; Cheng, A. F.; Zuber, M.; Smith, D.; Neumann, G.; Murchie, S.; Veverka, J.; Robinson, M.
2001-01-01
We report preliminary observations obtained from the NEAR Laser Rangefinder (NLR) and NEAR Multispectral Imager (MSI) for approx. 300 craters seen on 433 Eros to address Eros crater formation and degradation processes. Additional information is contained in the original extended abstract.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrie, G.M.; Perry, E.M.; Kirkham, R.R.
1997-09-01
This report describes the work performed at the Pacific Northwest National Laboratory (PNNL) for the U.S. Department of Energy's Office of Nonproliferation and National Security, Office of Research and Development (NN-20). The work supports the NN-20 Broad Area Search and Analysis, a program initiated by NN-20 to improve the detection and classification of undeclared weapons facilities. Ongoing PNNL research activities are described in three main components: image collection, information processing, and change analysis. The Multispectral Airborne Imaging System, which was developed to collect georeferenced imagery in the visible through infrared regions of the spectrum and flown on a light aircraft platform, will supply current land use conditions. The image information extraction software (dynamic clustering and end-member extraction) uses imagery, like the multispectral data collected by the PNNL multispectral system, to efficiently generate landcover information. The advanced change detection uses a priori (benchmark) information, current landcover conditions, and user-supplied rules to rank suspect areas by probable risk of undeclared facilities or proliferation activities. These components, both separately and combined, provide important tools for improving the detection of undeclared facilities.
High-quality infrared imaging with graphene photodetectors at room temperature.
Guo, Nan; Hu, Weida; Jiang, Tao; Gong, Fan; Luo, Wenjin; Qiu, Weicheng; Wang, Peng; Liu, Lu; Wu, Shiwei; Liao, Lei; Chen, Xiaoshuang; Lu, Wei
2016-09-21
Graphene, a two-dimensional material, is expected to enable broad-spectrum and high-speed photodetection because of its gapless band structure, ultrafast carrier dynamics and high mobility. We demonstrate multispectral active infrared imaging using a graphene photodetector based on hybrid response mechanisms at room temperature. High-quality images with optical resolutions of 418 nm, 657 nm and 877 nm and close-to-theoretical-limit Michelson contrasts of 0.997, 0.994, and 0.996 were acquired for imaging measurements at 565 nm, 1550 nm, and 1815 nm, respectively, using an unbiased graphene photodetector. Importantly, by carefully analyzing the results of Raman mapping and numerical simulations of the response process, the formation of hybrid photocurrents in graphene detectors is attributed to the synergistic action of photovoltaic and photo-thermoelectric effects. This initial application to infrared imaging will help promote the development of high-performance graphene-based infrared multispectral detectors.
Monitoring human melanocytic cell responses to piperine using multispectral imaging
NASA Astrophysics Data System (ADS)
Samatham, Ravikant; Phillips, Kevin G.; Sonka, Julia; Yelma, Aznegashe; Reddy, Neha; Vanka, Meenakshi; Thuillier, Philippe; Soumyanath, Amala; Jacques, Steven
2011-03-01
Vitiligo is a depigmentary disease characterized by melanocyte loss attributed most commonly to autoimmune mechanisms. Vitiligo currently has a high incidence (1% worldwide) but a poor set of treatment options. Piperine, a compound found in black pepper, is a potential treatment for vitiligo due to its ability to stimulate mouse epidermal melanocyte proliferation in vitro and in vivo. The present study investigates the use of multispectral imaging and an image processing technique based on local contrast to quantify the stimulatory effects of piperine on human melanocyte proliferation in reconstructed epidermis. We demonstrate the ability of the imaging method to quantify increased pigmentation in response to piperine treatment. The quantification of melanocyte stimulation by the proposed imaging technique illustrates the potential of this technology to quickly and non-invasively assess therapeutic responses of vitiligo tissue culture models to treatment.
An automated procedure for detection of IDP's dwellings using VHR satellite imagery
NASA Astrophysics Data System (ADS)
Jenerowicz, Malgorzata; Kemper, Thomas; Soille, Pierre
2011-11-01
This paper presents results for the estimation of dwelling structures in the Al Salam IDP Camp, Southern Darfur, based on Very High Resolution multispectral satellite images analyzed using Mathematical Morphology. A series of image processing procedures, feature extraction methods and textural analyses have been applied in order to provide reliable information about dwelling structures. One of the issues in this context is the similarity of the spectral response of thatched dwelling roofs and their surroundings in the IDP camps, where the exploitation of multispectral information is crucial. This study shows the advantage of an automatic extraction approach and highlights the importance of detailed spatial and spectral information analysis based on a multi-temporal dataset. The additional fusion of the high-resolution panchromatic band with the lower resolution multispectral bands of the WorldView-2 satellite has a positive influence on the results and can thereby be useful for humanitarian aid agencies, supporting decisions and population estimates, especially in situations where frequent revisits by spaceborne imaging systems are the only possibility for continued monitoring.
Fourier Spectral Filter Array for Optimal Multispectral Imaging.
Jia, Jie; Barnard, Kenneth J; Hirakawa, Keigo
2016-04-01
Limitations of existing multispectral imaging modalities include speed, cost, range, spatial resolution, and application-specific system designs that lack the versatility of hyperspectral imaging modalities. In this paper, we propose a novel general-purpose single-shot passive multispectral imaging modality. Central to this design is a new type of spectral filter array (SFA) based not on the notion of spatially multiplexing narrowband filters, but instead aimed at enabling single-shot Fourier transform spectroscopy. We refer to this new SFA pattern as the Fourier SFA, and we prove that this design solves the problem of optimally sampling the hyperspectral image data.
Multi-spectral endogenous fluorescence imaging for bacterial differentiation
NASA Astrophysics Data System (ADS)
Chernomyrdin, Nikita V.; Babayants, Margarita V.; Korotkov, Oleg V.; Kudrin, Konstantin G.; Rimskaya, Elena N.; Shikunova, Irina A.; Kurlov, Vladimir N.; Cherkasova, Olga P.; Komandin, Gennady A.; Reshetov, Igor V.; Zaytsev, Kirill I.
2017-07-01
In this paper, multi-spectral endogenous fluorescence imaging was implemented for bacterial differentiation. The fluorescence imaging was performed using a digital camera equipped with a set of visible bandpass filters. Narrowband 365 nm ultraviolet radiation passed through a beam homogenizer was used to excite the sample fluorescence. In order to increase the signal-to-noise ratio and suppress the non-fluorescence background in the images, the intensity of the UV excitation was modulated using a mechanical chopper. Principal component analysis was used to differentiate the bacterial samples based on the multi-spectral endogenous fluorescence images.
NASA Astrophysics Data System (ADS)
Taruttis, Adrian; Razansky, Daniel; Ntziachristos, Vasilis
2012-02-01
Optoacoustic imaging has enabled the visualization of optical contrast at high resolution in deep tissue. Our multispectral optoacoustic tomography (MSOT) imaging results reveal internal tissue heterogeneity, where the underlying distribution of specific endogenous and exogenous sources of absorption can be resolved in detail. Technical advances in cardiac imaging allow motion-resolved multispectral measurements of the heart, opening the way for studies of cardiovascular disease. We further demonstrate the fast characterization of the pharmacokinetic profiles of light-absorbing agents. Overall, our MSOT findings indicate new possibilities in high-resolution imaging of functional and molecular parameters.
Image processing methods for quantitatively detecting soybean rust from multispectral images
USDA-ARS?s Scientific Manuscript database
Soybean rust, caused by Phakopsora pachyrhizi, is one of the most destructive diseases for soybean production. It often causes significant yield loss and may rapidly spread from field to field through airborne urediniospores. In order to implement timely fungicide treatments for the most effective c...
USDA-ARS?s Scientific Manuscript database
The U. S. Department of Agriculture, Agricultural Research Service has been developing a method and system to detect fecal contamination on processed poultry carcasses with hyperspectral and multispectral imaging systems. The patented method utilizes a three step approach to contaminant detection. S...
NASA Astrophysics Data System (ADS)
Yong, Sang-Soon; Ra, Sung-Woong
2007-10-01
The Multi-Spectral Camera (MSC) is the main payload on the KOMPSAT-2 satellite for Earth remote sensing. The MSC instrument has one (1) channel for panchromatic imaging and four (4) channels for multi-spectral imaging, covering the spectral range from 450 nm to 900 nm using TDI CCD Focal Plane Arrays (FPAs). The instrument images the Earth using a push-broom motion with a swath width of 15 km and a ground sample distance (GSD) of 1 m over the entire field of view (FOV) at an altitude of 685 km. The instrument is designed for an on-orbit operation duty cycle of 20% over the mission lifetime of 3 years, with programmable gain/offset and on-board image data compression/storage. The compression method on the KOMPSAT-2 MSC was selected to match the EOS input rate and the PDTS output data rate in the MSC image data chain. At the same time, the MSC performance was carefully managed to minimize any degradation, and it was analyzed and restored at the KGS (KOMPSAT Ground Station) during the LEOP and Cal/Val (Calibration and Validation) phases. In this paper, the on-orbit image data chain of the MSC and the image data processing at the KGS, including a general MSC description, are briefly described. The influence of the on-board compression algorithms and of the ground-station restoration methods on image performance is analyzed, and the relationship between the two is discussed.
The Multispectral Imaging Science Working Group. Volume 3: Appendices
NASA Technical Reports Server (NTRS)
Cox, S. C. (Editor)
1982-01-01
The status and technology requirements for using multispectral sensor imagery in geographic, hydrologic, and geologic applications are examined. Critical issues in image and information science are identified.
Sandison, David R.; Platzbecker, Mark R.; Descour, Michael R.; Armour, David L.; Craig, Marcus J.; Richards-Kortum, Rebecca
1999-01-01
A multispectral imaging probe delivers a range of wavelengths of excitation light to a target and collects a range of expressed light wavelengths. The multispectral imaging probe is adapted for mobile use and use in confined spaces, and is sealed against the effects of hostile environments. The multispectral imaging probe comprises a housing that defines a sealed volume that is substantially sealed from the surrounding environment. A beam splitting device mounts within the sealed volume. Excitation light is directed to the beam splitting device, which directs the excitation light to a target. Expressed light from the target reaches the beam splitting device along a path coaxial with the path traveled by the excitation light from the beam splitting device to the target. The beam splitting device directs expressed light to a collection subsystem for delivery to a detector.
Sandison, D.R.; Platzbecker, M.R.; Descour, M.R.; Armour, D.L.; Craig, M.J.; Richards-Kortum, R.
1999-07-27
A multispectral imaging probe delivers a range of wavelengths of excitation light to a target and collects a range of expressed light wavelengths. The multispectral imaging probe is adapted for mobile use and use in confined spaces, and is sealed against the effects of hostile environments. The multispectral imaging probe comprises a housing that defines a sealed volume that is substantially sealed from the surrounding environment. A beam splitting device mounts within the sealed volume. Excitation light is directed to the beam splitting device, which directs the excitation light to a target. Expressed light from the target reaches the beam splitting device along a path coaxial with the path traveled by the excitation light from the beam splitting device to the target. The beam splitting device directs expressed light to a collection subsystem for delivery to a detector. 8 figs.
Bhateja, Vikrant; Moin, Aisha; Srivastava, Anuja; Bao, Le Nguyen; Lay-Ekuakille, Aimé; Le, Dac-Nhuong
2016-07-01
Computer based diagnosis of Alzheimer's disease can be performed by dint of the analysis of the functional and structural changes in the brain. Multispectral image fusion deliberates upon fusion of the complementary information while discarding the surplus information to achieve a solitary image which encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer's disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT followed by dimensionality reduction using modified Principal Component Analysis algorithm on the low frequency coefficients. Further, the high frequency coefficients are enhanced using non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: Phase congruency is applied to low frequency coefficients and a combination of directive contrast and normalized Shannon entropy is applied to high frequency coefficients. The superiority of the fusion response is depicted by the comparisons made with the other state-of-the-art fusion approaches (in terms of various fusion metrics).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhateja, Vikrant; Moin, Aisha; Srivastava, Anuja
Computer based diagnosis of Alzheimer’s disease can be performed by dint of the analysis of the functional and structural changes in the brain. Multispectral image fusion deliberates upon fusion of the complementary information while discarding the surplus information to achieve a solitary image which encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer’s disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT followed by dimensionality reduction using modified Principal Component Analysis algorithm on the low frequency coefficients. Further, the high frequency coefficients are enhanced using non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: Phase congruency is applied to low frequency coefficients and a combination of directive contrast and normalized Shannon entropy is applied to high frequency coefficients. The superiority of the fusion response is depicted by the comparisons made with the other state-of-the-art fusion approaches (in terms of various fusion metrics).
Multispectral image fusion for illumination-invariant palmprint recognition
Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng
2017-01-01
Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is completed based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is to construct the fusion coefficients at the decomposition level, making the images be separated correctly in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfying recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For the testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% when the lighting condition is unsatisfied. PMID:28558064
Multispectral image fusion for illumination-invariant palmprint recognition.
Lu, Longbin; Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng
2017-01-01
Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is completed based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is to construct the fusion coefficients at the decomposition level, making the images be separated correctly in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfying recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For the testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% when the lighting condition is unsatisfied.
Multispectral Wavefronts Retrieval in Digital Holographic Three-Dimensional Imaging Spectrometry
NASA Astrophysics Data System (ADS)
Yoshimori, Kyu
2010-04-01
This paper deals with a recently developed passive interferometric technique for retrieving a set of spectral components of wavefronts propagating from a spatially incoherent, polychromatic object. The technique is based on measurement of a 5-D spatial coherence function using a suitably designed interferometer. By applying signal processing, including aperture synthesis and spectral decomposition, one may obtain a set of wavefronts for different spectral bands. Since each wavefront is equivalent to the complex Fresnel hologram of the polychromatic object at a particular spectral component, application of the conventional Fresnel transform yields a 3-D image for each spectral component. Thus, this technique of multispectral wavefront retrieval provides a new type of 3-D imaging spectrometry based on fully passive interferometry. Experimental results are shown to demonstrate the validity of the method.
Computer generated maps from digital satellite data - A case study in Florida
NASA Technical Reports Server (NTRS)
Arvanitis, L. G.; Reich, R. M.; Newburne, R.
1981-01-01
Ground cover maps are important tools for a wide array of users. Over the past three decades, much progress has been made in supplementing planimetric and topographic maps with ground cover details obtained from aerial photographs. The present investigation evaluates the feasibility of producing computer-generated maps of ground cover from satellite input tapes. Attention is given to the selection of test sites, a satellite data processing system, a multispectral image analyzer, general-purpose computer-generated maps, the preliminary evaluation of computer maps, a test for areal correspondence, the preparation of overlays, and acreage estimation of land cover types on the Landsat computer maps. There is every indication that digital multispectral image processing systems based on Landsat input data will play an increasingly important role in pattern recognition and land cover mapping in the years to come.
Multispectral imaging method and apparatus
Sandison, D.R.; Platzbecker, M.R.; Vargo, T.D.; Lockhart, R.R.; Descour, M.R.; Richards-Kortum, R.
1999-07-06
A multispectral imaging method and apparatus are described which are adapted for use in determining material properties, especially properties characteristic of abnormal non-dermal cells. A target is illuminated with a narrow band light beam. The target expresses light in response to the excitation. The expressed light is collected and the target's response at specific response wavelengths to specific excitation wavelengths is measured. From the measured multispectral response the target's properties can be determined. A sealed, remote probe and robust components can be used for cervical imaging. 5 figs.
Multispectral imaging method and apparatus
Sandison, David R.; Platzbecker, Mark R.; Vargo, Timothy D.; Lockhart, Randal R.; Descour, Michael R.; Richards-Kortum, Rebecca
1999-01-01
A multispectral imaging method and apparatus adapted for use in determining material properties, especially properties characteristic of abnormal non-dermal cells. A target is illuminated with a narrow band light beam. The target expresses light in response to the excitation. The expressed light is collected and the target's response at specific response wavelengths to specific excitation wavelengths is measured. From the measured multispectral response the target's properties can be determined. A sealed, remote probe and robust components can be used for cervical imaging.
NASA Astrophysics Data System (ADS)
McMackin, Lenore; Herman, Matthew A.; Weston, Tyler
2016-02-01
We present the design of a multi-spectral imager built using the architecture of the single-pixel camera. The architecture is enabled by the novel sampling theory of compressive sensing implemented optically using the Texas Instruments DLP™ micro-mirror array. The array not only implements spatial modulation necessary for compressive imaging but also provides unique diffractive spectral features that result in a multi-spectral, high-spatial resolution imager design. The new camera design provides multi-spectral imagery in a wavelength range that extends from the visible to the shortwave infrared without reduction in spatial resolution. In addition to the compressive imaging spectrometer design, we present a diffractive model of the architecture that allows us to predict a variety of detailed functional spatial and spectral design features. We present modeling results, architectural design and experimental results that prove the concept.
Use of spectral imaging for documentation of skin parameters in face lift procedure
NASA Astrophysics Data System (ADS)
Ruvolo, Eduardo C., Jr.; Bargo, Paulo R.; Dietz, Tim; Scamuffa, Robin; Shoemaker, Kurt; DiBernardo, Barry; Kollias, Nikiforos
2010-02-01
In rhytidectomy, postoperative edema (swelling) and ecchymosis (bruising) can influence the cosmetic results. Evaluation of edema has typically been performed by visual inspection by a trained physician using a four-level or, more commonly, a two-level grading [1]. Few instruments exist that are capable of quantitatively assessing edema and ecchymosis in skin. Here we demonstrate that edema and ecchymosis can be objectively quantified in vivo by a multispectral clinical imaging system (MSCIS). After a feasibility study of induced stasis on the forearms of volunteers and a benchtop study of an edema model, five subjects undergoing rhytidectomy were recruited for a clinical study, and multispectral images were taken approximately at days 0, 1, 3, 6, 8, 10, 15, 22 and 29 (depending on the day of their visit). Apparent concentrations of oxy-hemoglobin, deoxy-hemoglobin (ecchymosis), melanin, scattering and water (edema) were calculated for each pixel of the spectral image stack. Bilirubin was extracted from the blue channel of the cross-polarized images. These chromophore maps are two-dimensional quantitative representations of the involved skin areas that capture the recovery process of the patient after the procedure. We conclude that multispectral imaging can be a valuable noninvasive tool in the study of edema and ecchymosis and can be used to document these chromophores in vivo and determine the efficacy of treatments in a clinical setting.
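One common way to turn spectral image stacks into chromophore maps is a per-pixel linear fit of known extinction spectra to apparent absorbance (a Beer-Lambert-style approximation). The sketch below illustrates that generic idea only; it is an assumption about the analysis, not the MSCIS algorithm, and the input names and units are hypothetical.

```python
# Minimal chromophore-mapping sketch by linear spectral fitting (assumed formulation).
# reflectance_stack: (rows, cols, n_wavelengths), values in (0, 1]
# extinction_matrix: (n_wavelengths, n_chromophores), e.g. HbO2, Hb, melanin, water
import numpy as np

def chromophore_maps(reflectance_stack, extinction_matrix, eps=1e-6):
    rows, cols, n_wl = reflectance_stack.shape
    absorbance = -np.log(np.clip(reflectance_stack, eps, 1.0))
    A = absorbance.reshape(-1, n_wl).T                    # (n_wl, n_pixels)
    weights, *_ = np.linalg.lstsq(extinction_matrix, A, rcond=None)
    return weights.T.reshape(rows, cols, extinction_matrix.shape[1])
```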
Intelligent multi-spectral IR image segmentation
NASA Astrophysics Data System (ADS)
Lu, Thomas; Luong, Andrew; Heim, Stephen; Patel, Maharshi; Chen, Kang; Chao, Tien-Hsin; Chow, Edward; Torres, Gilbert
2017-05-01
This article presents a neural network based multi-spectral image segmentation method. A neural network is trained on the selected features of both the objects and background in the longwave (LW) Infrared (IR) images. Multiple iterations of training are performed until the accuracy of the segmentation reaches satisfactory level. The segmentation boundary of the LW image is used to segment the midwave (MW) and shortwave (SW) IR images. A second neural network detects the local discontinuities and refines the accuracy of the local boundaries. This article compares the neural network based segmentation method to the Wavelet-threshold and Grab-Cut methods. Test results have shown increased accuracy and robustness of this segmentation scheme for multi-spectral IR images.
Spatial arrangement of color filter array for multispectral image acquisition
NASA Astrophysics Data System (ADS)
Shrestha, Raju; Hardeberg, Jon Y.; Khan, Rahat
2011-03-01
In the past few years there has been a significant volume of research carried out in the field of multispectral image acquisition. The focus of most of this work has been on multispectral image acquisition systems that usually require multiple subsequent shots (e.g. systems based on filter wheels, liquid crystal tunable filters, or active lighting). Recently, an alternative approach for one-shot multispectral image acquisition has been proposed, based on an extension of the color filter array (CFA) standard to produce more than three channels. We can thus introduce the concept of the multispectral color filter array (MCFA). But this field has not been much explored, and particularly little attention has been given to systems that focus on the reconstruction of scene spectral reflectance. In this paper, we have explored how the spatial arrangement of the multispectral color filter array affects acquisition accuracy, constructing MCFAs of different sizes. We have simulated acquisitions of several spectral scenes using different numbers of filters/channels, and compared the results with those obtained by the conventional regular MCFA arrangement, evaluating the precision of the reconstructed scene spectral reflectance in terms of spectral RMS error and colorimetric ΔE*ab color differences. It has been found that the precision and quality of the reconstructed images are significantly influenced by the spatial arrangement of the MCFA, and the effect becomes more and more prominent as the number of channels increases. We believe that MCFA-based systems can be a viable alternative for affordable acquisition of multispectral color images, in particular for applications where spatial resolution can be traded off for spectral resolution. We have shown that the spatial arrangement of the array is an important design issue.
Generalization of the Lyot filter and its application to snapshot spectral imaging.
Gorman, Alistair; Fletcher-Holmes, David William; Harvey, Andrew Robert
2010-03-15
A snapshot multi-spectral imaging technique is described which employs multiple cascaded birefringent interferometers to simultaneously spectrally filter and demultiplex multiple spectral images onto a single detector array. Spectral images are recorded directly without the need for inversion and without rejection of light and so the technique offers the potential for high signal-to-noise ratio. An example of an eight-band multi-spectral movie sequence is presented; we believe this is the first such demonstration of a technique able to record multi-spectral movie sequences without the need for computer reconstruction.
Cheng, Weiwei; Sun, Da-Wen; Pu, Hongbin; Wei, Qingyi
2017-04-15
The feasibility of hyperspectral imaging (HSI) (400-1000 nm) for tracing the chemical spoilage extent of the raw meat used for two kinds of processed meats was investigated. Calibration models established separately for salted and cooked meats using the full wavebands showed good results, with determination coefficients in prediction (R²P) of 0.887 and 0.832, respectively. To simplify the calibration models, two variable selection methods were used and compared. The results showed that genetic algorithm-partial least squares (GA-PLS), with as many continuous wavebands selected as possible, always performed better. The potential of HSI to develop one multispectral system for simultaneously tracing the chemical spoilage extent of the two kinds of processed meats was also studied. A good result with an R²P of 0.854 was obtained using GA-PLS as the dimension reduction method, which was then used to visualize total volatile base nitrogen (TVB-N) contents corresponding to each pixel of the image. Copyright © 2016 Elsevier Ltd. All rights reserved.
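The PLS calibration step underlying the models above can be sketched with a standard library. This is a generic illustration only (the paper's GA-based wavelength selection is not shown), and the variable names, component count, and data shapes are assumptions.

```python
# Minimal PLS calibration sketch between spectra and measured TVB-N values.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

def fit_tvbn_model(spectra, tvbn, n_components=8):
    # spectra: (n_samples, n_wavelengths); tvbn: (n_samples,) reference chemistry
    model = PLSRegression(n_components=n_components)
    model.fit(spectra, tvbn)
    predicted = model.predict(spectra).ravel()
    return model, r2_score(tvbn, predicted)       # calibration R^2 only
```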
Interactive digital image manipulation system
NASA Technical Reports Server (NTRS)
Henze, J.; Dezur, R.
1975-01-01
The system is designed for the manipulation, analysis, interpretation, and processing of a wide variety of image data. LANDSAT (ERTS) and other data in digital form can be input directly into the system. Photographic prints and transparencies are first converted to digital form with an on-line high-resolution microdensitometer. The system is implemented on a Hewlett-Packard 3000 computer with 128 K bytes of core memory and a 47.5-megabyte disk. It includes a true-color display monitor with processing memories, graphics overlays, and a movable cursor. Image data formats are flexible, so there is no restriction to a given set of remote sensors. Conversion between data types is available to provide a basis for comparison of the various data. Multispectral data are fully supported, and there is no restriction on the number of dimensions. In this way, multispectral data collected at more than one point in time may simply be treated as data collected with twice (or three times, etc.) the number of sensors. Various libraries of functions are available to the user: processing functions, display functions, system functions, and earth resources applications functions.
Integrative Multi-Spectral Sensor Device for Far-Infrared and Visible Light Fusion
NASA Astrophysics Data System (ADS)
Qiao, Tiezhu; Chen, Lulu; Pang, Yusong; Yan, Gaowei
2018-06-01
Infrared and visible light image fusion technology has been a hot spot in multi-sensor fusion research in recent years. Existing infrared and visible light fusion technologies require registration before fusion because they use two cameras, and the performance of registration in practice still needs improvement. Hence, a novel integrative multi-spectral sensor device is proposed for infrared and visible light fusion: using a beam-splitting prism, coaxial light entering through a single lens is projected onto an infrared charge-coupled device (CCD) and a visible-light CCD, respectively. In this paper, the imaging mechanism of the proposed sensor device is studied along with the signal acquisition and fusion process. A simulation experiment covering the entire process of the optical system, signal acquisition, and signal fusion is constructed based on an imaging-effect model. Additionally, quality evaluation indices are adopted to analyze the simulation result. The experimental results demonstrate that the proposed sensor device is effective and feasible.
NASA Astrophysics Data System (ADS)
Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke
2008-08-01
A novel compression algorithm for interferential multispectral images based on adaptive classification and curve fitting is proposed. The image is first partitioned adaptively into a major-interference region and a minor-interference region. Different approximating functions are then constructed for the two kinds of regions. For the major-interference region, some typical interferential curves are selected to predict the other curves; these typical curves are then processed by a curve-fitting method. For the minor-interference region, the data of each interferential curve are approximated independently. Finally, the approximation errors of the two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and greatly reduces the spectral distortion, especially at high bit rates for lossy compression.
Single sensor that outputs narrowband multispectral images
Kong, Linghua; Yi, Dingrong; Sprigle, Stephen; Wang, Fengtao; Wang, Chao; Liu, Fuhan; Adibi, Ali; Tummala, Rao
2010-01-01
We report the work of developing a hand-held (or miniaturized), low-cost, stand-alone, real-time-operation, narrow bandwidth multispectral imaging device for the detection of early stage pressure ulcers. PMID:20210418
Application of digital image processing techniques to astronomical imagery 1980
NASA Technical Reports Server (NTRS)
Lorre, J. J.
1981-01-01
Topics include: (1) polar coordinate transformations (M83); (2) multispectral ratios (M82); (3) maximum entropy restoration (M87); (4) automated computation of stellar magnitudes in nebulosity; (5) color and polarization; (6) aliasing.
Investigation related to multispectral imaging systems
NASA Technical Reports Server (NTRS)
Nalepka, R. F.; Erickson, J. D.
1974-01-01
A summary of technical progress made during a five year research program directed toward the development of operational information systems based on multispectral sensing and the use of these systems in earth-resource survey applications is presented. Efforts were undertaken during this program to: (1) improve the basic understanding of the many facets of multispectral remote sensing, (2) develop methods for improving the accuracy of information generated by remote sensing systems, (3) improve the efficiency of data processing and information extraction techniques to enhance the cost-effectiveness of remote sensing systems, (4) investigate additional problems having potential remote sensing solutions, and (5) apply the existing and developing technology for specific users and document and transfer that technology to the remote sensing community.
FRIT characterized hierarchical kernel memory arrangement for multiband palmprint recognition
NASA Astrophysics Data System (ADS)
Kisku, Dakshina R.; Gupta, Phalguni; Sing, Jamuna K.
2015-10-01
In this paper, we present a hierarchical kernel associative memory (H-KAM) based computational model with a Finite Ridgelet Transform (FRIT) representation for multispectral palmprint recognition. To characterize a multispectral palmprint image, the Finite Ridgelet Transform is used to achieve a very compact and distinctive representation of linear singularities while also capturing singularities along lines and edges. The proposed system uses the Finite Ridgelet Transform to represent the multispectral palmprint image, which is then modeled by Kernel Associative Memories. Finally, the recognition scheme is thoroughly tested on the benchmark CASIA multispectral palmprint database. For recognition, a Bayesian classifier is used. The experimental results demonstrate the robustness of the proposed system across palm images acquired at different wavelengths.
Schwartzkopf, Wade C; Bovik, Alan C; Evans, Brian L
2005-12-01
Traditional chromosome imaging has been limited to grayscale images, but recently a 5-fluorophore combinatorial labeling technique (M-FISH) was developed wherein each class of chromosomes binds with a different combination of fluorophores. This results in a multispectral image, where each class of chromosomes has distinct spectral components. In this paper, we develop new methods for automatic chromosome identification by exploiting the multispectral information in M-FISH chromosome images and by jointly performing chromosome segmentation and classification. We (1) develop a maximum-likelihood hypothesis test that uses multispectral information, together with conventional criteria, to select the best segmentation possibility; (2) use this likelihood function to combine chromosome segmentation and classification into a robust chromosome identification system; and (3) show that the proposed likelihood function can also be used as a reliable indicator of errors in segmentation, errors in classification, and chromosome anomalies, which can be indicators of radiation damage, cancer, and a wide variety of inherited diseases. We show that the proposed multispectral joint segmentation-classification method outperforms past grayscale segmentation methods when decomposing touching chromosomes. We also show that it outperforms past M-FISH classification techniques that do not use segmentation information.
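A much-simplified version of the maximum-likelihood idea above is a per-pixel Gaussian classifier over the M-FISH spectral channels. The sketch below is only that simplification (it omits the joint segmentation-classification and hypothesis testing described in the abstract), and the class statistics are assumed inputs estimated from training data.

```python
# Minimal per-pixel maximum-likelihood classification sketch (assumed simplification).
# Each chromosome class is modeled as a multivariate Gaussian over M-FISH channels.
import numpy as np
from scipy.stats import multivariate_normal

def ml_classify(pixels, class_means, class_covs):
    # pixels: (n_pixels, n_channels); class_means[k]: (n_channels,); class_covs[k]: (n_channels, n_channels)
    log_likes = np.stack([
        multivariate_normal(mean=m, cov=c, allow_singular=True).logpdf(pixels)
        for m, c in zip(class_means, class_covs)
    ], axis=1)
    return np.argmax(log_likes, axis=1)           # most likely class per pixel
```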
Lossless, Multi-Spectral Data Compressor for Improved Compression for Pushbroom-Type Instruments
NASA Technical Reports Server (NTRS)
Klimesh, Matthew
2008-01-01
A low-complexity lossless algorithm for compression of multispectral data has been developed that takes the properties of pushbroom-type multispectral imagers into account in order to make the compression more effective.
Moody, Daniela; Wohlberg, Brendt
2018-01-02
An approach for land cover classification, seasonal and yearly change detection and monitoring, and identification of changes in man-made features may use a clustering of sparse approximations (CoSA) on sparse representations in learned dictionaries. The learned dictionaries may be derived using efficient convolutional sparse coding to build multispectral or hyperspectral, multiresolution dictionaries that are adapted to regional satellite image data. Sparse image representations of images over the learned dictionaries may be used to perform unsupervised k-means clustering into land cover categories. The clustering process behaves as a classifier in detecting real variability. This approach may combine spectral and spatial textural characteristics to detect geologic, vegetative, hydrologic, and man-made features, as well as changes in these features over time.
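The final unsupervised k-means stage of such a pipeline can be sketched as follows. This is a generic illustration of clustering per-pixel feature vectors into land-cover categories, not the CoSA dictionary-learning or sparse-coding implementation itself; the feature layout and class count are assumptions.

```python
# Minimal k-means land-cover clustering sketch over per-pixel feature vectors.
import numpy as np
from sklearn.cluster import KMeans

def cluster_land_cover(features, n_classes=8, seed=0):
    # features: (rows, cols, n_features), e.g. sparse coefficients or band values
    rows, cols, n_feat = features.shape
    labels = KMeans(n_clusters=n_classes, random_state=seed, n_init=10)
    flat = labels.fit_predict(features.reshape(-1, n_feat))
    return flat.reshape(rows, cols)               # one land-cover label per pixel
```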
Multispectral Landsat images of Antarctica
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucchitta, B.K.; Bowell, J.A.; Edwards, K.L.
1988-01-01
The U.S. Geological Survey has a program to map Antarctica by using colored, digitally enhanced Landsat multispectral scanner images to increase existing map coverage and to improve upon previously published Landsat maps. This report is a compilation of images and image mosaics that cover four complete and two partial 1:250,000-scale quadrangles of the McMurdo Sound region.
USDA-ARS?s Scientific Manuscript database
This research developed a multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet/blue LED excitation for detection of fecal contamination on Golden Delicious apples. Using a hyperspectral line-scan imaging system consisting of an EMCCD camera, spectrograph, an...
Detection of sudden death syndrome using a multispectral imaging sensor
USDA-ARS?s Scientific Manuscript database
Sudden death syndrome (SDS), caused by the fungus Fusarium solani f. sp. glycines, is a widespread mid- to late-season disease with distinctive foliar symptoms. This paper reported the development of an image analysis based method to detect SDS using a multispectral image sensor. A hue, saturation a...
ERIC Educational Resources Information Center
Aviation/Space, 1982
1982-01-01
Spurred by National Aeronautics and Space Administration (NASA) technological advances, a budding industry is manufacturing equipment and providing services toward better management of earth's resources. Topics discussed include image processing, multispectral photography, ground use sensor, and weather data receiver. (Author/JN)
NASA Technical Reports Server (NTRS)
1978-01-01
NASA remote sensing technology is being employed in archeological studies of the Anasazi Indians, who lived in New Mexico one thousand years ago. Under contract with the National Park Service, NASA's Technology Applications Center at the University of New Mexico is interpreting multispectral scanner data and demonstrating how aerospace scanning techniques can uncover features of prehistoric ruins not visible in conventional aerial photographs. The Center's initial study focused on Chaco Canyon, a pre-Columbian Anasazi site in northeastern New Mexico. Chaco Canyon is a national monument and it has been well explored on the ground and by aerial photography. But the National Park Service was interested in the potential of multispectral scanning for producing evidence of prehistoric roads, field patterns and dwelling areas not discernible in aerial photographs. The multispectral scanner produces imaging data in the invisible as well as the visible portions of the spectrum. This data is converted to pictures which bring out features not visible to the naked eye or to cameras. The Technology Applications Center joined forces with Bendix Aerospace Systems Division, Ann Arbor, Michigan, which provided a scanner-equipped airplane for mapping the Chaco Canyon area. The NASA group processed the scanner images and employed computerized image enhancement techniques to bring out additional detail.
Radiometric sensitivity comparisons of multispectral imaging systems
NASA Technical Reports Server (NTRS)
Lu, Nadine C.; Slater, Philip N.
1989-01-01
Multispectral imaging systems provide much of the basic data used by the land and ocean civilian remote-sensing community. There are numerous multispectral imaging systems which have been and are being developed. A common way to compare the radiometric performance of these systems is to examine their noise-equivalent change in reflectance, NE Delta-rho. The NE Delta-rho of a system is the reflectance difference that produces a signal change equal to the noise in the recorded signal. A comparison is made of the noise-equivalent change in reflectance of seven different multispectral imaging systems (AVHRR, AVIRIS, ETM, HIRIS, MODIS-N, SPOT-1 HRV, and TM) for a set of three atmospheric conditions (continental aerosol with 23-km visibility, continental aerosol with 5-km visibility, and a Rayleigh atmosphere), five values of ground reflectance (0.01, 0.10, 0.25, 0.50, and 1.00), a nadir viewing angle, and a solar zenith angle of 45 deg.
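As a first-order illustration of the NE Delta-rho concept only: under a simplified top-of-atmosphere model in which at-sensor radiance is proportional to reflectance (the study itself uses a full atmospheric treatment), the conversion from radiance noise to noise-equivalent reflectance can be sketched as below; all symbols and numbers are placeholders, not values from the paper.

```python
import numpy as np

def ne_delta_rho(ne_delta_L, E0, theta_s_deg, tau_total):
    """Noise-equivalent reflectance difference under a simplified TOA model.

    Assumes L = rho * E0 * cos(theta_s) * tau / pi, so the reflectance change
    equivalent to the radiance noise NEdL is NEdrho = pi * NEdL / (E0 * cos(theta_s) * tau).
    """
    mu_s = np.cos(np.radians(theta_s_deg))
    return np.pi * ne_delta_L / (E0 * mu_s * tau_total)

# Example with placeholder numbers (not from the study):
# NEdL = 0.05 W m^-2 sr^-1 um^-1, E0 = 1550 W m^-2 um^-1, sun at 45 deg, tau = 0.8
print(ne_delta_rho(0.05, 1550.0, 45.0, 0.8))
```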
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Siu-Ngan Lam, Nina; Quattrochi, Dale A.
2004-01-01
The accuracy of traditional multispectral maximum-likelihood image classification is limited by the skewed statistical distributions of reflectances from the complex heterogeneous mixture of land cover types in urban areas. This work examines the utility of local variance, fractal dimension and Moran's I index of spatial autocorrelation in segmenting multispectral satellite imagery. Tools available in the Image Characterization and Modeling System (ICAMS) were used to analyze Landsat 7 imagery of Atlanta, Georgia. Although segmentation of panchromatic images is possible using indicators of spatial complexity, different land covers often yield similar values of these indices. Better results are obtained when a surface of local fractal dimension or spatial autocorrelation is combined as an additional layer in a supervised maximum-likelihood multispectral classification. The addition of fractal dimension measures is particularly effective at resolving land cover classes within urbanized areas, as compared to per-pixel spectral classification techniques.
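A sketch of the kind of spatial-autocorrelation layer described above: a windowed Moran's I surface with rook contiguity that can be stacked with the spectral bands before supervised classification. The window size and weighting scheme are assumptions, not the ICAMS implementation.

```python
import numpy as np

def morans_i(window):
    """Global Moran's I of a 2-D window with rook (4-neighbour) contiguity."""
    x = window - window.mean()
    num = 2.0 * ((x[:, :-1] * x[:, 1:]).sum() + (x[:-1, :] * x[1:, :]).sum())
    W = 2 * (x.shape[0] * (x.shape[1] - 1) + (x.shape[0] - 1) * x.shape[1])  # directed links
    denom = (x ** 2).sum()
    return (x.size / W) * (num / denom) if denom > 0 else 0.0

def autocorrelation_layer(band, win=9):
    """Slide a win x win window over one band and return a Moran's I surface."""
    h, w = band.shape
    out = np.zeros_like(band, dtype=float)
    r = win // 2
    for i in range(r, h - r):
        for j in range(r, w - r):
            out[i, j] = morans_i(band[i - r:i + r + 1, j - r:j + r + 1])
    return out
```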
Early Results from the Odyssey THEMIS Investigation
NASA Technical Reports Server (NTRS)
Christensen, Philip R.; Bandfield, Joshua L.; Bell, James F., III; Hamilton, Victoria E.; Ivanov, Anton; Jakosky, Bruce M.; Kieffer, Hugh H.; Lane, Melissa D.; Malin, Michael C.; McConnochie, Timothy
2003-01-01
The Thermal Emission Imaging System (THEMIS) began studying the surface and atmosphere of Mars in February 2002 using thermal infrared (IR) multi-spectral imaging between 6.5 and 15 μm, and visible/near-IR imaging from 450 to 850 nm. The infrared observations continue a long series of spacecraft observations of Mars, including the Mariner 6/7 Infrared Spectrometer, the Mariner 9 Infrared Interferometer Spectrometer (IRIS), the Viking Infrared Thermal Mapper (IRTM) investigations, the Phobos Termoskan, and the Mars Global Surveyor Thermal Emission Spectrometer (MGS TES). The THEMIS investigation's specific objectives are to: (1) determine the mineralogy of localized deposits associated with hydrothermal or sub-aqueous environments, and to identify future landing sites likely to represent these environments; (2) search for thermal anomalies associated with active sub-surface hydrothermal systems; (3) study small-scale geologic processes and landing site characteristics using morphologic and thermophysical properties; (4) investigate polar cap processes at all seasons; and (5) provide a high spatial resolution link to the global hyperspectral mineral mapping from the TES investigation. THEMIS provides substantially higher spatial resolution IR multi-spectral images to complement TES hyperspectral (143-band) global mapping, and regional visible imaging at scales intermediate between the Viking and MGS cameras.
Liu, Jinxia; Cao, Yue; Wang, Qiu; Pan, Wenjuan; Ma, Fei; Liu, Changhong; Chen, Wei; Yang, Jianbo; Zheng, Lei
2016-01-01
Water-injected beef has aroused public concern as a major food-safety issue in meat products. In this study, the potential of multispectral imaging analysis in the visible and near-infrared (405-970 nm) regions was evaluated for identifying water-injected beef. A multispectral vision system was used to acquire images of beef injected with up to 21% added water, and a partial least squares regression (PLSR) algorithm was employed to establish a prediction model, leading to quantitative estimations of the actual water increase with a correlation coefficient (r) of 0.923. Subsequently, an optimized model was achieved by integrating the spectral data with feature information extracted from ordinary RGB data, yielding better predictions (r = 0.946). Moreover, the prediction equation was transferred to each pixel within the images to visualize the distribution of actual water increase. These results demonstrate the capability of multispectral imaging technology as a rapid and non-destructive tool for the identification of water-injected beef. Copyright © 2015 Elsevier Ltd. All rights reserved.
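A minimal sketch of the PLSR step, assuming a matrix of per-sample spectra and measured water increases; the arrays below are randomly generated placeholders, not the study's data, and the component count is an assumption.

```python
# Illustrative PLSR sketch: predict water increase from 19-band mean spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

X = np.random.rand(120, 19)          # placeholder spectra (19 bands, 405-970 nm)
y = np.random.rand(120) * 21.0       # placeholder water increase (%)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=8).fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()
r = np.corrcoef(y_te, y_hat)[0, 1]   # correlation between measured and predicted
print(f"correlation coefficient r = {r:.3f}")
```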
Sousa, Daniel; Small, Christopher
2018-02-14
Planned hyperspectral satellite missions and the decreased revisit time of multispectral imaging offer the potential for data fusion to leverage both the spectral resolution of hyperspectral sensors and the temporal resolution of multispectral constellations. Hyperspectral imagery can also be used to better understand fundamental properties of multispectral data. In this analysis, we use five flight lines from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) archive with coincident Landsat 8 acquisitions over a spectrally diverse region of California to address the following questions: (1) How much of the spectral dimensionality of hyperspectral data is captured in multispectral data?; (2) Is the characteristic pyramidal structure of the multispectral feature space also present in the low order dimensions of the hyperspectral feature space at comparable spatial scales?; (3) How much variability in rock and soil substrate endmembers (EMs) present in hyperspectral data is captured by multispectral sensors? We find nearly identical partitions of variance, low-order feature space topologies, and EM spectra for hyperspectral and multispectral image composites. The resulting feature spaces and EMs are also very similar to those from previous global multispectral analyses, implying that the fundamental structure of the global feature space is present in our relatively small spatial subset of California. Finally, we find that the multispectral dataset well represents the substrate EM variability present in the study area - despite its inability to resolve narrow band absorptions. We observe a tentative but consistent physical relationship between the gradation of substrate reflectance in the feature space and the gradation of sand versus clay content in the soil classification system.
The fabrication of a multi-spectral lens array and its application in assisting color blindness
NASA Astrophysics Data System (ADS)
Di, Si; Jin, Jian; Tang, Guanrong; Chen, Xianshuai; Du, Ruxu
2016-01-01
This article presents a compact multi-spectral lens array and describes its application in assisting color blindness. The lens array consists of nine microlenses, each coated with a different color filter, so it can capture different light bands, including red, orange, yellow, green, cyan, blue, violet, near-infrared, and the entire visible band. First, the fabrication process is described in detail. Second, an imaging system is set up and a color blindness testing card is selected as the sample. With this system, the views perceived by people with normal vision and by people with color blindness can be captured simultaneously. Based on the imaging results, the device shows potential for helping people with color blindness to recover normal vision.
Image correlation and sampling study
NASA Technical Reports Server (NTRS)
Popp, D. J.; Mccormack, D. S.; Sedwick, J. L.
1972-01-01
The development of analytical approaches for solving image correlation and image sampling of multispectral data is discussed. Relevant multispectral image statistics which are applicable to image correlation and sampling are identified. The general image statistics include intensity mean, variance, amplitude histogram, power spectral density function, and autocorrelation function. The translation problem associated with digital image registration and the analytical means for comparing commonly used correlation techniques are considered. General expressions for determining the reconstruction error for specific image sampling strategies are developed.
NASA Technical Reports Server (NTRS)
Barrett, Eamon B. (Editor); Pearson, James J. (Editor)
1989-01-01
Image understanding concepts and models, image understanding systems and applications, advanced digital processors and software tools, and advanced man-machine interfaces are among the topics discussed. Particular papers are presented on such topics as neural networks for computer vision, object-based segmentation and color recognition in multispectral images, the application of image algebra to image measurement and feature extraction, and the integration of modeling and graphics to create an infrared signal processing test bed.
COMPARISON OF RETINAL PATHOLOGY VISUALIZATION IN MULTISPECTRAL SCANNING LASER IMAGING.
Meshi, Amit; Lin, Tiezhu; Dans, Kunny; Chen, Kevin C; Amador, Manuel; Hasenstab, Kyle; Muftuoglu, Ilkay Kilic; Nudleman, Eric; Chao, Daniel; Bartsch, Dirk-Uwe; Freeman, William R
2018-03-16
To compare retinal pathology visualization in multispectral scanning laser ophthalmoscope imaging between the Spectralis and Optos devices. This retrospective cross-sectional study included 42 eyes from 30 patients with age-related macular degeneration (19 eyes), diabetic retinopathy (10 eyes), and epiretinal membrane (13 eyes). All patients underwent retinal imaging with a color fundus camera (broad-spectrum white light), the Spectralis HRA-2 system (3-color monochromatic lasers), and the Optos P200 system (2-color monochromatic lasers). The Optos image was cropped to a similar size as the Spectralis image. Seven masked graders marked retinal pathologies in each image within a 5 × 5 grid that included the macula. The average area with detected retinal pathology in all eyes was larger in the Spectralis images compared with Optos images (32.4% larger, P < 0.0001), mainly because of better visualization of epiretinal membrane and retinal hemorrhage. The average detection rate of age-related macular degeneration and diabetic retinopathy pathologies was similar across the three modalities, whereas the epiretinal membrane detection rate was significantly higher in the Spectralis images. Spectralis tricolor multispectral scanning laser ophthalmoscope imaging had a higher rate of pathology detection, primarily because of better epiretinal membrane and retinal hemorrhage visualization, compared with Optos bicolor multispectral scanning laser ophthalmoscope imaging.
Development and bench testing of a multi-spectral imaging technology built on a smartphone platform
NASA Astrophysics Data System (ADS)
Bolton, Frank J.; Weiser, Reuven; Kass, Alex J.; Rose, Donny; Safir, Amit; Levitz, David
2016-03-01
Cervical cancer screening presents a great challenge for clinicians across the developing world. In many countries, cervical cancer screening is done by visualization with the naked eye. Simple brightfield white-light imaging with photo documentation has been shown to make a significant impact on cervical cancer care. Adoption of smartphone-based cervical imaging devices is increasing across Africa. However, advanced imaging technologies such as multispectral imaging systems are seldom deployed in low-resource settings, where they are needed most. To address this challenge, the optical system of a smartphone-based mobile colposcopy imaging system was refined, integrating the components required for low-cost, portable multi-spectral imaging of the cervix. This paper describes the refinement of the mobile colposcope to enable it to acquire images of the cervix at multiple illumination wavelengths, including modeling and laboratory testing. Wavelengths were selected to enable quantification of the main absorbers in tissue (oxy- and deoxy-hemoglobin, and water), as well as the scattering parameters that describe the size distribution of scatterers. The necessary hardware and software modifications are reviewed. Initial testing suggests the multi-spectral mobile device holds promise for use in low-resource settings.
Photogrammetric Processing Using ZY-3 Satellite Imagery
NASA Astrophysics Data System (ADS)
Kornus, W.; Magariños, A.; Pla, M.; Soler, E.; Perez, F.
2015-03-01
This paper evaluates the stereoscopic capacities of the Chinese sensor ZiYuan-3 (ZY-3) for the generation of photogrammetric products. The satellite was launched on January 9, 2012 and carries three high-resolution panchromatic cameras viewing in the forward (22°), nadir (0°) and backward (-22°) directions, and an infrared multi-spectral scanner (IRMSS), which looks slightly forward (6°). The ground sampling distance (GSD) is 2.1m for the nadir image, 3.5m for the two oblique stereo images and 5.8m for the multispectral image. The evaluated ZY-3 imagery consists of a full set of threefold-stereo and a multi-spectral image covering an area of ca. 50km x 50km north-west of Barcelona, Spain. The complete photogrammetric processing chain was executed, including image orientation, the generation of a digital surface model (DSM), radiometric image correction, pansharpening, orthoimage generation and digital stereo plotting. All 4 images are oriented by estimating affine transformation parameters between observed and nominal RPC (rational polynomial coefficients) image positions of 17 ground control points (GCP) and a subsequent calculation of refined RPC. From 10 independent check points, RMS errors of 2.2m, 2.0m and 2.7m in X, Y and H are obtained. Subsequently, a DSM with 5m grid spacing is generated fully automatically. A comparison with Lidar data results in an overall DSM accuracy of approximately 3m; in moderate and flat terrain, accuracies on the order of 2.5m and better are achieved. In a next step, orthoimages from the high-resolution nadir image and the multispectral image are generated using the refined RPC geometry and the DSM. After radiometric corrections, a fused high-resolution colour orthoimage with 2.1m pixel size is created using an adaptive HSL method. The pansharpening process is performed after the individual geocorrection because of the different viewing angles of the two images. In a detailed analysis of the colour orthoimage, artifacts are detected covering an area of 4691ha, corresponding to less than 2% of the imaged area. Most of the artifacts are caused by clouds (4614ha); a minor part (77ha) is affected by colour patches, striping or blooming effects. For the final qualitative analysis of the usability of ZY-3 imagery for stereo plotting, stereo combinations of the nadir and an oblique image are discarded, mainly because of the different pixel sizes, which cause difficulties in stereoscopic vision and poor accuracy in positioning and measuring. With the two oblique images, a level of detail equivalent to 1:25,000 scale is achieved for the transport network, hydrography, vegetation and elements that model the terrain, such as break lines. For settlement, including buildings and other constructions, a lower level of detail equivalent to 1:50,000 scale is achieved.
A Multispectral Micro-Imager for Lunar Field Geology
NASA Technical Reports Server (NTRS)
Nunez, Jorge; Farmer, Jack; Sellar, Glenn; Allen, Carlton
2009-01-01
Field geologists routinely assign rocks to one of three basic petrogenetic categories (igneous, sedimentary or metamorphic) based on microtextural and mineralogical information acquired with a simple magnifying lens. Indeed, such observations often comprise the core of interpretations of geological processes and history. The Multispectral Microscopic Imager (MMI) uses multi-wavelength light-emitting diodes (LEDs) and a substrate-removed InGaAs focal-plane array to create multispectral, microscale reflectance images of geological samples (FOV 32 x 40 mm). Each pixel (62.5 microns) of an image is comprised of 21 spectral bands that extend from 470 to 1750 nm, enabling the discrimination of a wide variety of rock-forming minerals, especially Fe-bearing phases. MMI images provide crucial context information for in situ robotic analyses using other onboard analytical instruments (e.g. XRD), or for the selection of return samples for analysis in terrestrial labs. To further assess the value of the MMI as a tool for lunar exploration, we used a field-portable, tripod-mounted version of the MMI to image a variety of Apollo samples housed at the Lunar Experiment Laboratory, NASA's Johnson Space Center. MMI images faithfully resolved the microtextural features of the samples, while the application of ENVI-based spectral end-member mapping methods revealed the distribution of Fe-bearing mineral phases (olivine, pyroxene and magnetite), along with plagioclase feldspars, within the samples. The samples included a broad range of lithologies and grain sizes. Our MMI-based petrogenetic interpretations compared favorably with thin section-based descriptions published in the Lunar Sample Compendium, revealing the value of MMI images for astronaut- and rover-mediated lunar exploration.
Robust and adaptive band-to-band image transform of UAS miniature multi-lens multispectral camera
NASA Astrophysics Data System (ADS)
Jhan, Jyun-Ping; Rau, Jiann-Yeou; Haala, Norbert
2018-03-01
Utilizing miniature multispectral (MS) or hyperspectral (HS) cameras by mounting them on an Unmanned Aerial System (UAS) has the benefits of convenience and flexibility for collecting remote sensing imagery for precision agriculture, vegetation monitoring, and environmental investigation applications. Most miniature MS cameras adopt a multi-lens structure to record discrete MS bands of visible and invisible information. The differences in lens distortion, mounting positions, and viewing angles among lenses mean that the acquired original MS images have significant band misregistration errors. We have developed a Robust and Adaptive Band-to-Band Image Transform (RABBIT) method for the band co-registration of various types of miniature multi-lens multispectral cameras (Mini-MSCs), to obtain band co-registered MS imagery for remote sensing applications. RABBIT utilizes a modified projective transformation (MPT) to transfer the multiple image geometries of a multi-lens imaging system to one sensor geometry, and combines this with a robust and adaptive correction (RAC) procedure to correct several systematic errors and obtain sub-pixel accuracy. This study applies three state-of-the-art Mini-MSCs to evaluate the RABBIT method's performance: the Tetracam Miniature Multiple Camera Array (MiniMCA), the Micasense RedEdge, and the Parrot Sequoia. Six MS datasets acquired at different target distances, dates, and locations are also used to prove its reliability and applicability. The results prove that RABBIT is feasible for different types of Mini-MSCs, with accurate, robust, and rapid image processing.
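The core band-to-band idea, warping one band into a master band's geometry with a projective transform, can be sketched with OpenCV as below. This omits RABBIT's modified projective transformation and robust adaptive correction, and the feature detector choice and parameters are assumptions.

```python
# Sketch of projective band co-registration (not the RABBIT implementation).
import cv2
import numpy as np

def coregister_band(slave_band, master_band):
    """Estimate a projective transform from matched features and warp
    `slave_band` into the geometry of `master_band` (both 8-bit single bands)."""
    detector = cv2.ORB_create(4000)
    k1, d1 = detector.detectAndCompute(master_band, None)
    k2, d2 = detector.detectAndCompute(slave_band, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k2[m.trainIdx].pt for m in matches])   # points in the slave band
    dst = np.float32([k1[m.queryIdx].pt for m in matches])   # corresponding master points
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)     # robust projective estimate
    h, w = master_band.shape
    return cv2.warpPerspective(slave_band, H, (w, h))
```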
USDA-ARS?s Scientific Manuscript database
Structured-illumination reflectance imaging (SIRI) is a new, promising imaging modality for enhancing quality detection of food. A liquid-crystal tunable filter (LCTF)-based multispectral SIRI system was developed and used for selecting optimal wavebands to detect bruising in apples. Immediately aft...
Cloud-based processing of multi-spectral imaging data
NASA Astrophysics Data System (ADS)
Bernat, Amir S.; Bolton, Frank J.; Weiser, Reuven; Levitz, David
2017-03-01
Multispectral imaging holds great promise as a non-contact tool for the assessment of tissue composition. Performing multi-spectral imaging on a handheld mobile device would allow this technology, and the knowledge it provides, to be brought to low-resource settings, providing state-of-the-art classification of tissue health. This modality, however, produces considerably larger data sets than white-light imaging and requires preliminary image analysis before it can be used. The data then need to be analyzed and logged without demanding too much of the end-point device's system resources, computation time or battery. Cloud environments were designed for exactly this kind of problem, allowing end-point devices (smartphones) to offload computationally hard tasks. To this end, we present a method in which a handheld device built around a smartphone captures a multi-spectral dataset in a movie file format (mp4), and we compare this format to other image formats in terms of size, noise and correctness. We also present the cloud configuration used for segmenting the recording into frames that can later be used for further analysis.
Lattice algebra approach to multispectral analysis of ancient documents.
Valdiviezo-N, Juan C; Urcid, Gonzalo
2013-02-01
This paper introduces a lattice algebra procedure that can be used for the multispectral analysis of historical documents and artworks. Assuming the presence of linearly mixed spectral pixels captured in a multispectral scene, the proposed method computes the scaled min- and max-lattice associative memories to determine the purest pixels that best represent the spectra of single pigments. The estimation of fractional proportions of pure spectra at each image pixel is used to build pigment abundance maps that can be used for subsequent restoration of damaged parts. Application examples include multispectral images acquired from the Archimedes Palimpsest and a Mexican pre-Hispanic codex.
The NEAR Multispectral Imager.
NASA Astrophysics Data System (ADS)
Hawkins, S. E., III
1998-06-01
Multispectral Imager, one of the primary instruments on the Near Earth Asteroid Rendezvous (NEAR) spacecraft, uses a five-element refractive optics telescope, an eight-position filter wheel, and a charge-coupled device detector to acquire images over its sensitive wavelength range of ≈400-1100 nm. The primary science objectives of the Multispectral Imager are to determine the morphology and composition of the surface of asteroid 433 Eros. The camera will have a critical role in navigating to the asteroid. Seven narrowband spectral filters have been selected to provide multicolor imaging for comparative studies with previous observations of asteroids in the same class as Eros. The eighth filter is broadband and will be used for optical navigation. An overview of the instrument is presented, and design parameters and tradeoffs are discussed.
An improved feature extraction algorithm based on KAZE for multi-spectral image
NASA Astrophysics Data System (ADS)
Yang, Jianping; Li, Jun
2018-02-01
Multi-spectral images contain abundant spectral information and are widely used in fields such as resource exploration, meteorological observation and the military. Image preprocessing, such as feature extraction and matching, is indispensable when dealing with multi-spectral remote sensing images. Although linear-scale feature matching algorithms such as SIFT and SURF are robust, their local accuracy cannot be guaranteed. Therefore, this paper proposes an improved KAZE algorithm, based on a nonlinear scale space, to increase the number of features and to enhance the matching rate by using an adjusted-cosine vector. The experimental results show that the number of features and the matching rate of the improved KAZE are remarkably higher than those of the original KAZE algorithm.
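For reference, a minimal baseline KAZE matching pipeline in OpenCV might look like the following; the adjusted-cosine matching of the improved algorithm is not reproduced here, and the file names and ratio threshold are hypothetical.

```python
# Baseline KAZE feature detection and matching between two band images.
import cv2

img1 = cv2.imread("band_a.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
img2 = cv2.imread("band_b.png", cv2.IMREAD_GRAYSCALE)

kaze = cv2.KAZE_create()                                  # nonlinear-scale-space detector
kp1, des1 = kaze.detectAndCompute(img1, None)
kp2, des2 = kaze.detectAndCompute(img2, None)

# KAZE descriptors are floating point, so L2 distance is appropriate.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)

good = []
for pair in matches:                                      # Lowe-style ratio test
    if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
        good.append(pair[0])
print(f"{len(good)} putative matches out of {len(matches)}")
```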
A New Pansharpening Method Based on Spatial and Spectral Sparsity Priors.
He, Xiyan; Condat, Laurent; Bioucas-Dias, Jose; Chanussot, Jocelyn; Xia, Junshi
2014-06-27
The development of multisensor systems in recent years has led to a great increase in the amount of available remote sensing data. Image fusion techniques aim at inferring high-quality images of a given area from degraded versions of the same area obtained by multiple sensors. This paper focuses on pansharpening, which is the inference of a high spatial resolution multispectral image from two degraded versions with complementary spectral and spatial resolution characteristics: (a) a low spatial resolution multispectral image; and (b) a high spatial resolution panchromatic image. We introduce a new variational model based on spatial and spectral sparsity priors for the fusion. In the spectral domain we encourage a low-rank structure, whereas in the spatial domain we promote sparsity on the local differences. Given that both panchromatic and multispectral images are integrations of the underlying continuous spectra using different channel responses, we propose to exploit appropriate regularizations based on both spatial and spectral links between the panchromatic and the fused multispectral images. A weighted version of the vector Total Variation (TV) norm of the data matrix is employed to align the spatial information of the fused image with that of the panchromatic image. With regard to spectral information, two different types of regularization are proposed to promote a soft constraint on the linear dependence between the panchromatic and the fused multispectral images. The first estimates the linear coefficients directly from the observed panchromatic and low resolution multispectral images by Linear Regression (LR), while the second employs Principal Component Pursuit (PCP) to obtain a robust recovery of the underlying low-rank structure. We also show that the two regularizers are strongly related. The basic idea of both is that the fused image should have low rank and preserve edge locations. We use a variation of the recently proposed Split Augmented Lagrangian Shrinkage (SALSA) algorithm to effectively solve the proposed variational formulations. Experimental results on simulated and real remote sensing images show the effectiveness of the proposed pansharpening method compared to the state of the art.
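For context, a much simpler classical baseline than the proposed variational method is Brovey-style component substitution. The sketch below assumes an already upsampled, co-registered multispectral cube `ms` and panchromatic band `pan`, and is not the paper's algorithm.

```python
# Classical Brovey-style pansharpening baseline (for comparison only).
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """ms: (H, W, B) upsampled multispectral cube; pan: (H, W) panchromatic band."""
    intensity = ms.mean(axis=2)              # crude intensity estimate from the MS bands
    gain = pan / (intensity + eps)           # per-pixel injection gain
    return ms * gain[..., None]              # rescale every band by the gain
```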
Multispectral image dissector camera flight test
NASA Technical Reports Server (NTRS)
Johnson, B. L.
1973-01-01
It was demonstrated that the multispectral image dissector camera is able to provide composite pictures of the earth surface from high altitude overflights. An electronic deflection feature was used to inject the gyro error signal into the camera for correction of aircraft motion.
NASA Astrophysics Data System (ADS)
Romano, Renan A.; Pratavieira, Sebastião.; da Silva, Ana P.; Kurachi, Cristina; Guimarães, Francisco E. G.
2017-07-01
This study demonstrates that multispectral confocal microscopy images analyzed by artificial neural networks provide a powerful tool for real-time monitoring of photosensitizer uptake, as well as of the photochemical transformations that occur.
Diagnosing hypoxia in murine models of rheumatoid arthritis from reflectance multispectral images
NASA Astrophysics Data System (ADS)
Glinton, Sophie; Naylor, Amy J.; Claridge, Ela
2017-07-01
Spectra computed from multispectral images of murine models of rheumatoid arthritis show a characteristic decrease in reflectance within the 600-800 nm region, which is indicative of a reduction in blood oxygenation and is consistent with hypoxia.
The application of UV multispectral technology in extracting trace evidence
NASA Astrophysics Data System (ADS)
Guo, Jingjing; Xu, Xiaojing; Li, Zhihui; Xu, Lei; Xie, Lanchi
2015-11-01
Multispectral imaging is becoming more and more important in the examination of material evidence, especially ultraviolet spectral imaging. Fingerprint development, questioned document detection and trace evidence examination can all make use of it. This paper introduces a UV multispectral instrument developed by BITU & IFSC that can extract trace evidence such as fingerprints. The results showed that this technology can develop latent sweat-sebum mixed fingerprints on photos and ID cards, and blood fingerprints on steel. We used the UV spectral data analysis system to make the UV spectral images clearer to identify and analyse.
Liu, Changhong; Liu, Wei; Lu, Xuzhong; Ma, Fei; Chen, Wei; Yang, Jianbo; Zheng, Lei
2014-01-01
Multispectral imaging with 19 wavelengths in the range of 405-970 nm has been evaluated for nondestructive determination of firmness, total soluble solids (TSS) content and ripeness stage in strawberry fruit. Several analysis approaches, including partial least squares (PLS), support vector machine (SVM) and back propagation neural network (BPNN), were applied to develop models for predicting the firmness and TSS of intact strawberry fruit. Compared with PLS and SVM, BPNN considerably improved the performance of multispectral imaging for predicting firmness and total soluble solids content, with correlation coefficients (r) of 0.94 and 0.83, SEP of 0.375 and 0.573, and bias of 0.035 and 0.056, respectively. Subsequently, the ability of multispectral imaging technology to classify fruit by ripeness stage was tested using SVM and principal component analysis-back propagation neural network (PCA-BPNN) models; the higher classification accuracy of 100% was achieved using the SVM model. Moreover, the results of all these models demonstrated that the visible parts of the spectra were the main contributors to firmness determination, TSS content estimation and ripeness-stage classification in strawberry fruit. These results suggest that multispectral imaging, together with a suitable analysis model, is a promising technology for rapid estimation of quality attributes and classification of ripeness stage in strawberry fruit.
Multispectral photography for earth resources
NASA Technical Reports Server (NTRS)
Wenderoth, S.; Yost, E.; Kalia, R.; Anderson, R.
1972-01-01
A guide for producing accurate multispectral results for earth resource applications is presented along with theoretical and analytical concepts of color and multispectral photography. Topics discussed include: capabilities and limitations of color and color infrared films; image color measurements; methods of relating ground phenomena to film density and color measurement; sensitometry; considerations in the selection of multispectral cameras and components; and mission planning.
Dabo-Niang, S; Zoueu, J T
2012-09-01
In this communication, we demonstrate how kriging, combined with multispectral and multimodal microscopy, can enhance the resolution of images of malaria-infected cells and provide more detail on their composition for analysis and diagnosis. The results of this interpolation, applied to the two principal components of the multispectral and multimodal images, show that examination of the content of Plasmodium falciparum-infected human erythrocytes is improved. © 2012 The Authors Journal of Microscopy © 2012 Royal Microscopical Society.
NASA Technical Reports Server (NTRS)
Albrizzio, C.
1974-01-01
A methodology was developed to evaluate multispectral analysis of orbital imagery for the interpretation of geology, coastal geomorphology and sedimentary processes. The images analyzed were obtained during the pass of the ERTS satellite over the central region of Venezuela on October 19, 1972. ERTS-1 multispectral images in black and white paper copies and transparencies of the 4 bands, and false color composites at scales of 1:1,000,000 and 1:500,000, were interpreted. Lithology and outcrop patterns of the following geological formations have been interpreted: the igneous and metamorphic basement of Cocodite and Santa Ana, the Jurassic-Cretaceous metamorphics of Pueblo Nuevo, the Cantaure Miocene-Pliocene sediments, and Quaternary alluvium, dunes, beach ridges, bars and reefs. A prominent and extensive Paraguana tonal anomaly shaped like an 8 has been discovered in the NW of the Peninsula. Its erosional origin has exposed light-toned lower beds at the center, with additional evidence of topographic depression and development of underground drainage of karst origin. Coastal geomorphology, its processes and energy have been interpreted with the help of wind direction analysis (ENE-WSW) at sea level through the orientation of transported materials (water vapor, water and sediments) shown by clouds, waves, sea currents, plumes of suspended sediments associated with river outlets, dunes, sediment sources and shoreline orientation.
NASA Technical Reports Server (NTRS)
Oswald, Hayden; Molthan, Andrew L.
2011-01-01
Satellite remote sensing has gained widespread use in the field of operational meteorology. Although raw satellite imagery is useful, several techniques exist which can convey multiple types of data in a more efficient way. One of these techniques is multispectral compositing. The NASA Short-term Prediction Research and Transition (SPoRT) Center has developed two multispectral satellite imagery products which utilize data from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard NASA's Terra and Aqua satellites, based upon products currently generated and used by the European Organization for the Exploitation of Meteorological Satellites (EUMETSAT). The nighttime microphysics product allows users to identify clouds occurring at different altitudes, but emphasizes fog and low cloud detection. This product improves upon current spectral difference and single channel infrared techniques. Each of the current products has its own set of advantages for nocturnal fog detection, but each also has limiting drawbacks which can hamper the analysis process. The multispectral product combines each current product with a third channel difference. Since the final image is enhanced with color, it simplifies the fog identification process. Analysis has shown that the nighttime microphysics imagery product represents a substantial improvement to conventional fog detection techniques, as well as provides a preview of future satellite capabilities to forecasters.
NASA Technical Reports Server (NTRS)
Harston, Craig; Schumacher, Chris
1992-01-01
Automated schemes are needed to classify multispectral remotely sensed data. Human intelligence is often required to correctly interpret images from satellites and aircraft. Humans succeed because they use various types of cues about a scene to accurately define the contents of the image. Consequently, it follows that computer techniques that integrate and use different types of information would perform better than single-source approaches. This research illustrated that multispectral signatures and topographical information could be used in concert. Significantly, this dual-source tactic classified a remotely sensed image better than the multispectral classification alone. These classifications were accomplished by fusing spectral signatures with topographical information using neural network technology. A neural network was trained to classify Landsat multispectral signatures, with a file of georeferenced ground truth classifications used as the training criterion. The network was trained to classify urban, agriculture, range, and forest with an accuracy of 65.7 percent. Another neural network was programmed and trained to fuse these multispectral signature results with a file of georeferenced altitude data containing 10 elevation levels. When this nonspectral elevation information was fused with the spectral signatures, the classifications improved to 73.7 and 75.7 percent.
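A hedged, modern re-creation of this fusion idea, stacking spectral signatures with a quantized elevation layer and feeding them to one neural classifier, could look like the following; the layer sizes, class count and synthetic data are invented for illustration, not taken from the 1992 study.

```python
# Sketch: fuse spectral signatures with an elevation layer in one classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

n_pixels, n_bands = 5000, 6
spectra = np.random.rand(n_pixels, n_bands)               # placeholder Landsat signatures
elevation = np.random.randint(0, 10, size=(n_pixels, 1))  # 10 quantized elevation levels
labels = np.random.randint(0, 4, size=n_pixels)           # urban / agriculture / range / forest

X = np.hstack([spectra, elevation / 9.0])                 # fused spectral + topographic input
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```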
Target Detection over the Diurnal Cycle Using a Multispectral Infrared Sensor.
Zhao, Huijie; Ji, Zheng; Li, Na; Gu, Jianrong; Li, Yansong
2016-12-29
When detecting a target over the diurnal cycle, a conventional infrared thermal sensor might lose the target because of thermal crossover, which can happen at any time of day when temperature variation makes the infrared image contrast between target and background indistinguishable. In this paper, the benefits of using a multispectral infrared sensor over the diurnal cycle are shown. First, a brief theoretical analysis is presented of how thermal crossover affects a conventional thermal sensor, of the conditions under which it occurs, and of why mid-infrared (3~5 μm) multispectral technology is effective against it. The prototype design and the way multispectral technology is employed to address the thermal crossover detection problem are then described. Finally, several targets were set up outdoors and imaged in a field experiment over a 24-h period. The experimental results show that the multispectral infrared imaging system can enhance the contrast of the detected images and effectively avoid the failure of the conventional infrared sensor during the diurnal cycle, which is of great significance for infrared surveillance applications.
The precision-processing subsystem for the Earth Resources Technology Satellite.
NASA Technical Reports Server (NTRS)
Chapelle, W. E.; Bybee, J. E.; Bedross, G. M.
1972-01-01
Description of the precision processor, a subsystem in the image-processing system for the Earth Resources Technology Satellite (ERTS). This processor is a special-purpose image-measurement and printing system, designed to process user-selected bulk images to produce 1:1,000,000-scale film outputs and digital image data, presented in a Universal-Transverse-Mercator (UTM) projection. The system will remove geometric and radiometric errors introduced by the ERTS multispectral sensors and by the bulk-processor electron-beam recorder. The geometric transformations required for each input scene are determined by resection computations based on reseau measurements and image comparisons with a special ground-control base contained within the system; the images are then printed and digitized by electronic image-transfer techniques.
Multi-spectral imaging with infrared sensitive organic light emitting diode
Kim, Do Young; Lai, Tzung-Han; Lee, Jae Woong; Manders, Jesse R.; So, Franky
2014-01-01
Commercially available near-infrared (IR) imagers are fabricated by integrating expensive epitaxially grown III-V compound semiconductor sensors with Si-based readout integrated circuits (ROIC) by indium bump bonding, which significantly increases the fabrication costs of these image sensors. Furthermore, these typical III-V compound semiconductors are not sensitive to the visible region and thus cannot be used for multi-spectral (visible to near-IR) sensing. Here, a low-cost infrared (IR) imaging camera is demonstrated with a commercially available digital single-lens reflex (DSLR) camera and an IR-sensitive organic light emitting diode (IR-OLED). With an IR-OLED, IR images at a wavelength of 1.2 µm are directly converted to visible images, which are then recorded by a Si-CMOS DSLR camera. This multi-spectral imaging system is capable of capturing images at wavelengths in the near-infrared as well as visible regions. PMID:25091589
Multispectral imaging of plant stress for detection of CO2 leaking from underground
NASA Astrophysics Data System (ADS)
Rouse, J.; Shaw, J. A.; Repasky, K. S.; Lawrence, R. L.
2008-12-01
Multispectral imaging of plant stress is a potentially useful method of detecting CO2 leaking from underground. During the summers of 2007 and 2008, we deployed a multispectral imager for vegetation sensing as part of an underground CO2 release experiment conducted at the Zero Emission Research and Technology (ZERT) field site near the Montana State University campus in Bozeman, Montana. The imager was mounted on a low tower and observed the vegetation in a region near an underground pipe during a multi-week CO2 release. The imager was calibrated to measure absolute reflectance, from which vegetation indices were calculated as a measure of vegetation health. The temporal evolution of these indices over the course of the experiment shows that the vegetation nearest the pipe exhibited more stress than the vegetation located further from the pipe. The imager observed notably increased stress in vegetation at locations exhibiting particularly high flux of CO2 from the ground into the atmosphere. These data from the 2007 and 2008 experiments will be used to demonstrate the utility of a tower-mounted multispectral imaging system for detecting CO2 leakage from below ground, with the ability to operate continuously during clear and cloudy conditions.
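As an illustration of the vegetation-index step, NDVI computed from calibrated NIR and red reflectance bands is the standard choice, although the abstract does not state which indices were used at the ZERT site; the sketch below makes that assumption explicit.

```python
# NDVI from calibrated reflectance bands, plus a simple ROI time series.
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

def roi_time_series(daily_stack, roi_slice):
    """daily_stack: list of (nir, red) image pairs (hypothetical input);
    returns the mean NDVI inside the region of interest for each day."""
    return [ndvi(nir, red)[roi_slice].mean() for nir, red in daily_stack]
```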
Geometric Calibration and Radiometric Correction of the Maia Multispectral Camera
NASA Astrophysics Data System (ADS)
Nocerino, E.; Dubbini, M.; Menna, F.; Remondino, F.; Gattelli, M.; Covi, D.
2017-10-01
Multispectral imaging is a widely used remote sensing technique whose applications range from agriculture to environmental monitoring, and from food quality checks to cultural heritage diagnostics. A variety of multispectral imaging sensors are available on the market, many of them designed to be mounted on different platforms, especially small drones. This work focuses on the geometric and radiometric characterization of a brand-new, lightweight, low-cost multispectral camera, called MAIA. The MAIA camera is equipped with nine sensors, allowing for the acquisition of images in the visible and near-infrared parts of the electromagnetic spectrum. Two versions are available, characterised by different sets of band-pass filters inspired by the sensors mounted on the WorldView-2 and Sentinel-2 satellites, respectively. The camera details and the developed procedures for geometric calibration and radiometric correction are presented in the paper.
Tissues segmentation based on multi spectral medical images
NASA Astrophysics Data System (ADS)
Li, Ya; Wang, Ying
2017-11-01
In multispectral medical images, each band image shows most clearly the tissue whose optical characteristics are strongest in that specific band. In this paper, tissues were segmented using the spectral information in each band of the multispectral medical images. Four Local Binary Pattern descriptors were constructed to extract blood vessels based on the gray-level difference between the blood vessels and their neighbors. The segmented tissues from each band image were then merged into a single clear image.
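A minimal sketch of one such texture descriptor, using scikit-image's local binary pattern; the four task-specific LBP variants designed in the paper are not reproduced, and the parameters and input array are placeholders.

```python
# Generic local binary pattern map for one spectral band.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_map(band_image, n_points=8, radius=1):
    """Encode each pixel by the signs of grey-level differences to its neighbours."""
    return local_binary_pattern(band_image, n_points, radius, method="uniform")

# Usage on a hypothetical band image:
band = np.random.rand(256, 256)
texture = lbp_map(band)
```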
Resolution Enhancement of Hyperion Hyperspectral Data using Ikonos Multispectral Data
2007-09-01
spatial-resolution hyperspectral image to produce a sharpened product. The result is a product that has the spectral properties of the ... multispectral sensors. In this work, we examine the benefits of combining data from high-spatial-resolution, low-spectral-resolution spectral imaging ... sensors with data obtained from high-spectral-resolution, low-spatial-resolution spectral imaging sensors.
NASA Technical Reports Server (NTRS)
1998-01-01
Under a Jet Propulsion Laboratory SBIR (Small Business Innovative Research), Cambridge Research and Instrumentation Inc. developed a new class of filters for the construction of small, low-cost multispectral imagers. The VariSpec liquid crystal filter enables users to obtain multi-spectral, ultra-high resolution images using a monochrome CCD (charge coupled device) camera. Application areas include biomedical imaging, remote sensing, and machine vision.
Blind source separation of ex-vivo aorta tissue multispectral images
Galeano, July; Perez, Sandra; Montoya, Yonatan; Botina, Deivid; Garzón, Johnson
2015-01-01
Blind Source Separation (BSS) methods aim to decompose a given signal into its main components or source signals. These techniques have been widely used in the literature for the analysis of biomedical images, in order to extract the main components of an organ or tissue under study; the analysis of skin images for the extraction of melanin and hemoglobin is one example. This paper presents a proof of concept of the use of source separation on ex-vivo aorta tissue multispectral images. The images are acquired with an interference filter-based imaging system and processed by means of two algorithms: Independent Component Analysis and Non-negative Matrix Factorization. In both cases, it is possible to obtain maps that quantify the concentration of the main chromophores present in aortic tissue. The algorithms also recover the spectral absorbance of the main tissue components; these spectral signatures were compared against the theoretical ones using correlation coefficients, which reach values close to 0.9, a good indicator of the method's performance. The correlation coefficients also allow each concentration map to be identified with the corresponding chromophore. The results suggest that multi/hyper-spectral systems together with image processing techniques are a potential tool for the analysis of cardiovascular tissue. PMID:26137366
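The NMF branch of such an unmixing analysis can be sketched as below, decomposing the pixel-by-band matrix into non-negative spectral signatures and abundance-like maps; the component count and the `cube` input are assumptions, and the ICA branch and acquisition system are not shown.

```python
# Non-negative matrix factorization of a multispectral cube into sources.
import numpy as np
from sklearn.decomposition import NMF

def unmix_nmf(cube, n_sources=3):
    """cube: (H, W, B) multispectral absorbance cube (assumed non-negative)."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b)                      # one row per pixel
    model = NMF(n_components=n_sources, init="nndsvda", max_iter=500)
    abundances = model.fit_transform(X)          # (pixels, sources) concentration-like values
    spectra = model.components_                  # (sources, bands) spectral signatures
    return abundances.reshape(h, w, n_sources), spectra
```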
NASA Technical Reports Server (NTRS)
Ebert, D. H.; Eppes, T. A.; Thomas, D. J.
1973-01-01
The impact of a conical-scan versus a linear-scan multispectral scanner (MSS) instrument was studied in terms of: (1) design modifications required in framing and continuous image recording devices; and (2) changes in configurations of an all-digital precision image processor. A baseline system was defined to provide the framework for comparison, and included pertinent spacecraft parameters, a conical MSS, a linear MSS, an image recording system, and an all-digital precision processor. Lateral offset pointing of the sensors over a range of plus or minus 20 deg was considered. The study addressed the conical-scan impact on geometric, radiometric, and aperture correction of MSS data in terms of hardware and software considerations, system complexity, quality of corrections, throughput, and cost of implementation. It was concluded that: (1) if the MSS data are to be only film recorded, then there is only a nominal conical-scan impact on the ground data processing system; and (2) if digital data are to be provided to users on computer compatible tapes in rectilinear format, then there is a significant conical-scan impact on the ground data processing system.
Color standardization and optimization in whole slide imaging.
Yagi, Yukako
2011-03-30
Standardization and validation of the color displayed by digital slides is an important aspect of digital pathology implementation. While the most common reason for color variation is variance in the protocols and practices of the histology lab, the color displayed can also be affected by variation in capture parameters (for example, illumination and filters), image processing and display factors in the digital systems themselves. We have been developing techniques for color validation and optimization along two paths. The first is based on two standard slides that are scanned and displayed by the imaging system in question. In this approach, one slide is embedded with nine filters with colors selected especially for H&E stained slides (resembling a tiny Macbeth color chart); the specific colors of the nine filters were determined in our previous study and modified for whole slide imaging (WSI). The other slide is an H&E stained mouse embryo. Both of these slides were scanned and the displayed images were compared to a standard. The second approach is based on our previous multispectral imaging research. As a first step, the two-slide method (above) was used to identify inaccurate display of color and its cause, and to understand the importance of accurate color in digital pathology. We have also improved the multispectral-based algorithm for more consistent results in stain standardization. In the near future, the results of the two-slide and multispectral techniques can be combined and will be widely available. We have been conducting a series of research and development projects to improve image quality and establish image quality standardization. This paper discusses one of the most important aspects of image quality: color.
Zhao, Yong-guang; Ma, Ling-ling; Li, Chuan-rong; Zhu, Xiao-hua; Tang, Ling-li
2015-07-01
Due to the lack of enough spectral bands in a multi-spectral sensor, it is difficult to reconstruct a surface reflectance spectrum from the finite spectral information acquired by a multi-spectral instrument. Here, taking into full account the heterogeneity of pixels in remote sensing images, a method is proposed to simulate hyperspectral data from multispectral data based on a canopy radiation transfer model. The method first assumes that mixed pixels contain two types of land cover, i.e., vegetation and soil. The sensitive parameters of the Soil-Leaf-Canopy (SLC) model and a soil ratio factor were retrieved from multi-spectral data based on Look-Up Table (LUT) technology. Then, together with the soil ratio factor, all the parameters were input into the SLC model to simulate the surface reflectance spectrum from 400 to 2400 nm. Taking a Landsat Enhanced Thematic Mapper Plus (ETM+) image as the reference image, the surface reflectance spectrum was simulated. The simulated reflectance spectra revealed different feature information for different surface types. To test the performance of the method, the simulated reflectance spectra were convolved with the Landsat ETM+ spectral response curves and the Moderate Resolution Imaging Spectroradiometer (MODIS) spectral response curves to obtain simulated Landsat ETM+ and MODIS images. Finally, the simulated Landsat ETM+ and MODIS images were compared with the observed Landsat ETM+ and MODIS images. The results generally showed high correlation coefficients (Landsat: 0.90-0.99, MODIS: 0.74-0.85) between most simulated and observed bands, indicating that the simulated reflectance spectrum was reliable.
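The band-synthesis step, weighting a simulated reflectance spectrum by a band's relative spectral response, can be illustrated as follows; the Gaussian response and toy spectrum are placeholders rather than actual ETM+ or MODIS response curves, and the SLC/LUT inversion is not shown.

```python
# Band-equivalent reflectance from a spectrum and a relative spectral response (SRF).
import numpy as np

def band_equivalent_reflectance(reflectance, srf):
    """Both arrays sampled on the same uniform wavelength grid."""
    return (reflectance * srf).sum() / srf.sum()

wl = np.arange(400.0, 2401.0, 1.0)                               # 400-2400 nm grid
spectrum = 0.05 + 0.40 / (1.0 + np.exp(-(wl - 700.0) / 20.0))    # crude red-edge-like spectrum
srf = np.exp(-0.5 * ((wl - 660.0) / 30.0) ** 2)                  # toy band response at 660 nm
print(band_equivalent_reflectance(spectrum, srf))
```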
NASA Astrophysics Data System (ADS)
Renaud, Rémi; Bendahmane, Mounir; Chery, Romain; Martin, Claire; Gurden, Hirac; Pain, Frederic
2012-06-01
Wide field multispectral imaging of light backscattered by brain tissues provides maps of hemodynamic changes (total blood volume and oxygenation) following activation. This technique relies on fitting the reflectance images obtained at two or more wavelengths using a modified Beer-Lambert law [1,2]. It has been successfully applied to study the activation of several sensory cortices in the anesthetized rodent using visible light [1-5]. We have recently carried out the first multispectral imaging in the olfactory bulb (OB) of anesthetized rats [6]. However, the optimization of wavelength choice has not been discussed in terms of cross talk and uniqueness of the estimated parameters (blood volume and saturation maps), although this point was shown to be crucial for similar studies in diffuse optical imaging in humans [7-10]. We have studied theoretically and experimentally the optimal sets of wavelengths for multispectral imaging of rodent brain activation in the visible. Sets of optimal wavelengths have been identified and validated in vivo for multispectral imaging of the OB of rats following odor stimulation. We studied the influence of the wavelength sets on the magnitude and time courses of the oxy- and deoxyhemoglobin concentration variations, as well as on the spatial extent of activated brain areas following stimulation. Beyond the estimation of hemodynamic parameters from multispectral reflectance data, we repeatedly observed, at all wavelengths, a decrease in light reflectance. For wavelengths longer than 590 nm, these observations differ from those reported in the somatosensory and barrel cortices and question the basis of the reflectance changes during activation in the OB. To address this issue, Monte Carlo simulations (MCS) have been carried out to assess the relative contribution of absorption, scattering and anisotropy changes to the intrinsic optical imaging signals in a somatosensory cortex (SsC) and OB model.
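A minimal sketch of the modified Beer-Lambert inversion described above is given below: from reflectance changes at several wavelengths, per-pixel changes in oxy- and deoxyhemoglobin concentration are estimated by least squares. The extinction coefficients and pathlengths are placeholder values, not the tabulated ones used in the cited studies.

```python
# Sketch of modified Beer-Lambert fitting for multispectral intrinsic imaging.
# Extinction coefficients and pathlengths are placeholders (illustrative only).
import numpy as np

wavelengths = np.array([530.0, 565.0, 590.0, 610.0])        # nm
eps = np.array([[39036.0, 39310.0],                          # [HbO2, HbR] per wavelength
                [53236.0, 55540.0],
                [14400.0, 41000.0],
                [ 1506.0, 10000.0]])
pathlength = np.array([0.04, 0.04, 0.05, 0.06])              # cm, placeholder values

def fit_hemodynamics(R, R0):
    """R, R0: (n_wavelengths, H, W) reflectance during activation / at baseline."""
    dOD = -np.log(R / R0)                                     # optical density changes
    A = eps * pathlength[:, None]                             # (n_wavelengths, 2)
    n_wl, h, w = dOD.shape
    dC, *_ = np.linalg.lstsq(A, dOD.reshape(n_wl, -1), rcond=None)
    return dC.reshape(2, h, w)                                # [dHbO2, dHbR] maps
```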
Multi-spectral confocal microendoscope for in-vivo imaging
NASA Astrophysics Data System (ADS)
Rouse, Andrew Robert
The concept of in-vivo multi-spectral confocal microscopy is introduced. A slit-scanning multi-spectral confocal microendoscope (MCME) was built to demonstrate the technique. The MCME employs a flexible fiber-optic catheter coupled to a custom built slit-scan confocal microscope fitted with a custom built imaging spectrometer. The catheter consists of a fiber-optic imaging bundle linked to a miniature objective and focus assembly. The design and performance of the miniature objective and focus assembly are discussed. The 3 mm diameter catheter may be used on its own or routed through the instrument channel of a commercial endoscope. The confocal nature of the system provides optical sectioning with 3 μm lateral resolution and 30 μm axial resolution. The prism-based multi-spectral detection assembly is typically configured to collect 30 spectral samples over the visible chromatic range. The spectral sampling rate varies from 4 nm/pixel at 490 nm to 8 nm/pixel at 660 nm, and the minimum resolvable wavelength difference varies from 7 nm to 18 nm over the same spectral range. Each of these characteristics is primarily dictated by the dispersive power of the prism. The MCME is designed to examine cellular structures during optical biopsy and to exploit the diagnostic information contained within the spectral domain. The primary applications for the system include diagnosis of disease in the gastro-intestinal tract and female reproductive system. Recent data from the grayscale imaging mode are presented. Preliminary multi-spectral results from phantoms, cell cultures, and excised human tissue are presented to demonstrate the potential of in-vivo multi-spectral imaging.
NASA Astrophysics Data System (ADS)
Yu, Shanshan; Murakami, Yuri; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki
2006-09-01
The article proposes a multispectral image compression scheme using a nonlinear spectral transform for better colorimetric and spectral reproducibility. In the method, we show the reduction of colorimetric error under a defined viewing illuminant, and also that spectral accuracy can be improved simultaneously, using a nonlinear spectral transform called Labplus, which takes into account the nonlinearity of human color vision. Moreover, we show that the addition of diagonal matrices to Labplus can further preserve the spectral accuracy and has a generalized effect of improving the colorimetric accuracy under viewing illuminants other than the defined one. Finally, we discuss the use of a first-order Markov model to form the analysis vectors for the higher-order channels in Labplus to reduce the computational complexity. We implement a multispectral image compression system that integrates Labplus with JPEG2000 for high colorimetric and spectral reproducibility. Experimental results for a 16-band multispectral image show the effectiveness of the proposed scheme.
Design and development of an airborne multispectral imaging system
NASA Astrophysics Data System (ADS)
Kulkarni, Rahul R.; Bachnak, Rafic; Lyle, Stacey; Steidley, Carl W.
2002-08-01
Advances in imaging technology and sensors have made airborne remote sensing systems viable for many applications that require reasonably good resolution at low cost. Digital cameras are making their mark on the market by providing high resolution at very high rates. This paper describes an aircraft-mounted imaging system (AMIS) that is being designed and developed at Texas A&M University-Corpus Christi (A&M-CC) with the support of a grant from NASA. The approach is to first develop and test a one-camera system that will be upgraded into a five-camera system offering multi-spectral capabilities. AMIS will be low cost, rugged, and portable, and will have its own battery power source. Its immediate use will be to acquire images of the coastal area of the Gulf of Mexico for a variety of studies covering a broad spectral range from the near-ultraviolet to the near-infrared. This paper describes AMIS and its characteristics, discusses the process for selecting the major components, and presents the progress.
Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.; ...
2014-12-09
We present results from an ongoing effort to extend neuromimetic machine vision algorithms to multispectral data using adaptive signal processing combined with compressive sensing and machine learning techniques. Our goal is to develop a robust classification methodology that will allow for automated discretization of the landscape into distinct units based on attributes such as vegetation, surface hydrological properties, and topographic/geomorphic characteristics. We use a Hebbian learning rule to build spectral-textural dictionaries that are tailored for classification. We learn our dictionaries from millions of overlapping multispectral image patches and then use a pursuit search to generate classification features. Land cover labels are automatically generated using unsupervised clustering of sparse approximations (CoSA). We demonstrate our method on multispectral WorldView-2 data from a coastal plain ecosystem in Barrow, Alaska. We explore learning from both raw multispectral imagery and normalized band difference indices. We explore a quantitative metric to evaluate the spectral properties of the clusters in order to potentially aid in assigning land cover categories to the cluster labels. In this study, our results suggest CoSA is a promising approach to unsupervised land cover classification in high-resolution satellite imagery.
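A hedged sketch of a CoSA-style pipeline follows: learn a dictionary from multispectral image patches, encode each patch sparsely, then cluster the sparse codes with k-means into unsupervised land cover labels. Standard online dictionary learning stands in here for the Hebbian rule used by the authors; patch size, dictionary size and cluster count are illustrative choices.

```python
# Sketch of clustering of sparse approximations (CoSA) on multispectral patches.
# scikit-learn's dictionary learner replaces the authors' Hebbian learning rule.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.cluster import KMeans

def cosa_labels(image, patch=5, n_atoms=64, n_clusters=8):
    """image: (H, W, B) multispectral array; returns a per-patch label map."""
    H, W, B = image.shape
    # extract overlapping patches, flattened to (n_patches, patch*patch*B)
    patches = [image[i:i+patch, j:j+patch, :].ravel()
               for i in range(H - patch + 1) for j in range(W - patch + 1)]
    X = np.asarray(patches, dtype=float)
    X -= X.mean(axis=0)

    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=5,
                                       random_state=0)
    codes = dico.fit(X).transform(X)              # sparse approximation features
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(codes)
    return labels.reshape(H - patch + 1, W - patch + 1)
```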
NASA Astrophysics Data System (ADS)
Mehl, Patrick M.; Chao, Kevin; Kim, Moon S.; Chen, Yud-Ren
2001-03-01
The presence of natural or exogenous contamination on apple cultivars is a food safety and quality concern for the general public and strongly affects this commodity market. Accumulations of human pathogens are usually observed on surface lesions of commodities. Detection of either the lesions or the pathogens themselves is essential for assuring the quality and safety of commodities. We present the application of hyperspectral image analysis towards the development of multispectral techniques for the detection of defects on selected apple cultivars, such as Golden Delicious, Red Delicious, and Gala apples. Different apple cultivars possess different spectral characteristics, leading to different analysis approaches. General preprocessing with morphological treatments is followed by different image treatments and condition analysis for highlighting lesions and contaminations on the apple cultivars. Good isolation of scab, fungal and soil contamination, and bruises is observed with hyperspectral imaging processing, either using principal component analysis or utilizing the chlorophyll absorption peak. Application of the hyperspectral results to multispectral detection is limited by the spectral capabilities of our RGB camera, used either with specific bandpass filters or with neutral filters. Good separation of defects is obtained for Golden Delicious apples; it is, however, limited for the other cultivars. An extra near-infrared channel would increase the detection level by utilizing the chlorophyll absorption band, as demonstrated by the present hyperspectral imaging analysis.
Pansharpening Techniques to Detect Mass Monument Damaging in Iraq
NASA Astrophysics Data System (ADS)
Baiocchi, V.; Bianchi, A.; Maddaluno, C.; Vidale, M.
2017-05-01
The recent mass destructions of monuments in Iraq cannot be monitored with terrestrial survey methodologies, for obvious reasons of safety. For the same reasons, the use of classical aerial photogrammetry is not advisable either, so the natural choice is multispectral Very High Resolution (VHR) satellite imagery. Nowadays VHR satellite image resolutions are very close to those of airborne photogrammetric images, and the images are usually acquired in multispectral mode. The combination of the various bands of the images is called pan-sharpening, and it can be carried out using different algorithms and strategies. The correct pansharpening methodology for a specific image must be chosen considering the specific multispectral characteristics of the satellite used and the particular application. In this paper, a first set of guidelines for the use of VHR multispectral imagery to detect monument destruction in unsafe areas is reported. The proposed methodology, agreed with UNESCO and soon to be used in Libya for the coastal area, has produced a first report delivered to the Iraqi authorities. Some of the most evident examples are reported to show the possible capabilities of identification of damages using VHR images.
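For illustration only, the sketch below shows one of the simplest pan-sharpening strategies (a Brovey-style ratio injection); it is not necessarily the algorithm adopted in the reported workflow. The inputs are assumed co-registered, with the multispectral cube already resampled to the panchromatic grid.

```python
# Illustrative Brovey-style pansharpening of a co-registered MS cube and Pan band.
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """ms: (H, W, B) multispectral, pan: (H, W) panchromatic; returns (H, W, B)."""
    intensity = ms.mean(axis=2)                  # crude intensity estimate from MS bands
    ratio = pan / (intensity + eps)              # spatial detail injection factor
    return ms * ratio[:, :, None]
```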
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, Daniela Irina
An approach for land cover classification, seasonal and yearly change detection and monitoring, and identification of changes in man-made features may use a clustering of sparse approximations (CoSA) on sparse representations in learned dictionaries. A Hebbian learning rule may be used to build multispectral or hyperspectral, multiresolution dictionaries that are adapted to regional satellite image data. Sparse image representations of pixel patches over the learned dictionaries may be used to perform unsupervised k-means clustering into land cover categories. The clustering process behaves as a classifier in detecting real variability. This approach may combine spectral and spatial textural characteristics to detect geologic, vegetative, hydrologic, and man-made features, as well as changes in these features over time.
Optical mesoscopy without the scatter: broadband multispectral optoacoustic mesoscopy
Chekkoury, Andrei; Gateau, Jérôme; Driessen, Wouter; Symvoulidis, Panagiotis; Bézière, Nicolas; Feuchtinger, Annette; Walch, Axel; Ntziachristos, Vasilis
2015-01-01
Optical mesoscopy extends the capabilities of biological visualization beyond the limited penetration depth achieved by microscopy. However, imaging of opaque organisms or tissues larger than a few hundred micrometers requires invasive tissue sectioning or chemical clearing of the specimen to reduce photon scattering, an invasive process that is in any case limited in depth. We developed previously unreported broadband optoacoustic mesoscopy as a tomographic modality to enable imaging of optical contrast through several millimeters of tissue, without the need for chemical treatment of tissues. We show that the combination of three-dimensional projections over a broad 500 kHz–40 MHz frequency range with multi-wavelength illumination is necessary to render broadband multispectral optoacoustic mesoscopy (2B-MSOM) superior to previous optical or optoacoustic mesoscopy implementations. PMID:26417486
Mitigating fluorescence spectral overlap in wide-field endoscopic imaging
Hou, Vivian; Nelson, Leonard Y.; Seibel, Eric J.
2013-01-01
The number of molecular species suitable for multispectral fluorescence imaging is limited due to the overlap of the emission spectra of indicator fluorophores, e.g., dyes and nanoparticles. To remove fluorophore emission cross-talk in wide-field multispectral fluorescence molecular imaging, we evaluate three different solutions: (1) image stitching, (2) concurrent imaging with cross-talk ratio subtraction algorithm, and (3) frame-sequential imaging. A phantom with fluorophore emission cross-talk is fabricated, and a 1.2-mm ultrathin scanning fiber endoscope (SFE) is used to test and compare these approaches. Results show that fluorophore emission cross-talk could be successfully avoided or significantly reduced. Near term, the concurrent imaging method of wide-field multispectral fluorescence SFE is viable for early stage cancer detection and localization in vivo. Furthermore, a means to enhance exogenous fluorescence target-to-background ratio by the reduction of tissue autofluorescence background is demonstrated. PMID:23966226
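A minimal sketch of the concurrent-imaging correction described as option (2) above: fluorophore A's bleed-through into channel B is removed by subtracting a fixed cross-talk ratio measured beforehand on single-fluorophore targets. The ratio value is a placeholder, not one from the paper.

```python
# Sketch of cross-talk ratio subtraction between two fluorescence channels.
import numpy as np

def remove_crosstalk(channel_b, channel_a, crosstalk_ratio=0.18):
    """channel_a/b: grayscale frames; ratio = (A's signal seen in B) / (A's signal in A)."""
    corrected = channel_b - crosstalk_ratio * channel_a
    return np.clip(corrected, 0, None)           # negative residuals clipped to zero
```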
NASA Technical Reports Server (NTRS)
1982-01-01
The state-of-the-art of multispectral sensing is reviewed and recommendations for future research and development are proposed. Specifically, two generic sensor concepts are discussed. One is the multispectral pushbroom sensor utilizing linear array technology, which operates in six spectral bands, including two in the SWIR region, and incorporates capabilities for stereo and crosstrack pointing. The second concept is the imaging spectrometer (IS), which incorporates a dispersive element and area arrays to provide both spectral and spatial information simultaneously. Other key technology areas include very large scale integration and the computer-aided design of these devices.
NASA Technical Reports Server (NTRS)
Bell, J. F., III; Arneson, H. M.; Farrand, W. H.; Goetz, W.; Hayes, A. G.; Herkenhoff, K.; Johnson, M. J.; Johnson, J. R.; Joseph, J.; Kinch, K.
2005-01-01
Introduction. The panoramic camera (Pancam) multispectral, stereoscopic imaging systems on the Mars Exploration Rovers Spirit and Opportunity [1] have acquired and downlinked more than 45,000 images (35 Gbits of data) over more than 700 combined sols of operation on Mars as of early January 2005. A large subset of these images was acquired as part of 26 large multispectral and/or broadband "albedo" panoramas (15 on Spirit, 11 on Opportunity) covering large ranges of azimuth (12 spanning 360°) and designed to characterize major regional color and albedo characteristics of the landing sites and various points along both rover traverses.
Developing Matlab scripts for image analysis and quality assessment
NASA Astrophysics Data System (ADS)
Vaiopoulos, A. D.
2011-11-01
Image processing is a very helpful tool in many fields of modern science that involve digital imaging examination and interpretation. Processed images, however, often need to be correlated with the original image in order to ensure that the resulting image fulfills its purpose. Aside from visual examination, which is mandatory, image quality indices (such as the correlation coefficient, entropy and others) are very useful when deciding which processed image is the most satisfactory. For this reason, a single program (script) was written in the Matlab language, which automatically calculates eight indices by utilizing eight respective functions (independent function scripts). The program was tested on both fused hyperspectral (Hyperion-ALI) and multispectral (ALI, Landsat) imagery and proved to be efficient. The indices were found to be in agreement with visual examination and statistical observations.
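Two of the indices mentioned above are sketched below. The original scripts were written in MATLAB; this is an equivalent NumPy sketch, not the author's code.

```python
# Sketch of two image-quality indices: Pearson correlation and histogram entropy.
import numpy as np

def correlation_coefficient(img_a, img_b):
    """Pearson correlation between a processed image and its reference."""
    a = img_a.astype(float).ravel() - img_a.mean()
    b = img_b.astype(float).ravel() - img_b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

def entropy(img, bins=256):
    """Shannon entropy (bits) of an image's gray-level histogram."""
    hist, _ = np.histogram(img.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```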
Optimal optical filters of fluorescence excitation and emission for poultry fecal detection
USDA-ARS?s Scientific Manuscript database
Purpose: An analytic method to design excitation and emission filters of a multispectral fluorescence imaging system is proposed and was demonstrated in an application to poultry fecal inspection. Methods: A mathematical model of a multispectral imaging system is proposed and its system parameters, ...
Leica ADS40 Sensor for Coastal Multispectral Imaging
NASA Technical Reports Server (NTRS)
Craig, John C.
2007-01-01
The Leica ADS40 Sensor as it is used for coastal multispectral imaging is presented. The contents include: 1) Project Area Overview; 2) Leica ADS40 Sensor; 3) Focal Plate Arrangements; 4) Trichroid Filter; 5) Gradient Correction; 6) Image Acquisition; 7) Remote Sensing and ADS40; 8) Band comparisons of Satellite and Airborne Sensors; 9) Impervious Surface Extraction; and 10) Impervious Surface Details.
Multiplex Quantitative Histologic Analysis of Human Breast Cancer Cell Signaling and Cell Fate
2010-05-01
Keywords: breast cancer, cell signaling, cell proliferation, histology, image analysis. ... revealed by individual stains in multiplex combinations; and (3) software (FARSIGHT) for automated multispectral image analysis that (i) segments ... Task 3. Develop computational algorithms for multispectral immunohistological image analysis: FARSIGHT software was developed to quantify intrinsic ...
Multispectral LiDAR Data for Land Cover Classification of Urban Areas
Morsy, Salem; Shaker, Ahmed; El-Rabbany, Ahmed
2017-01-01
Airborne Light Detection And Ranging (LiDAR) systems usually operate at a monochromatic wavelength measuring the range and the strength of the reflected energy (intensity) from objects. Recently, multispectral LiDAR sensors, which acquire data at different wavelengths, have emerged. This allows for recording of a diversity of spectral reflectance from objects. In this context, we aim to investigate the use of multispectral LiDAR data in land cover classification using two different techniques. The first is image-based classification, where intensity and height images are created from LiDAR points and then a maximum likelihood classifier is applied. The second is point-based classification, where ground filtering and Normalized Difference Vegetation Indices (NDVIs) computation are conducted. A dataset of an urban area located in Oshawa, Ontario, Canada, is classified into four classes: buildings, trees, roads and grass. An overall accuracy of up to 89.9% and 92.7% is achieved from image classification and 3D point classification, respectively. A radiometric correction model is also applied to the intensity data in order to remove the attenuation due to the system distortion and terrain height variation. The classification process is then repeated, and the results demonstrate that there are no significant improvements achieved in the overall accuracy. PMID:28445432
Multispectral LiDAR Data for Land Cover Classification of Urban Areas.
Morsy, Salem; Shaker, Ahmed; El-Rabbany, Ahmed
2017-04-26
Airborne Light Detection And Ranging (LiDAR) systems usually operate at a monochromatic wavelength measuring the range and the strength of the reflected energy (intensity) from objects. Recently, multispectral LiDAR sensors, which acquire data at different wavelengths, have emerged. This allows for recording of a diversity of spectral reflectance from objects. In this context, we aim to investigate the use of multispectral LiDAR data in land cover classification using two different techniques. The first is image-based classification, where intensity and height images are created from LiDAR points and then a maximum likelihood classifier is applied. The second is point-based classification, where ground filtering and Normalized Difference Vegetation Indices (NDVIs) computation are conducted. A dataset of an urban area located in Oshawa, Ontario, Canada, is classified into four classes: buildings, trees, roads and grass. An overall accuracy of up to 89.9% and 92.7% is achieved from image classification and 3D point classification, respectively. A radiometric correction model is also applied to the intensity data in order to remove the attenuation due to the system distortion and terrain height variation. The classification process is then repeated, and the results demonstrate that there are no significant improvements achieved in the overall accuracy.
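The point-based NDVI computation referred to in the two records above can be sketched as follows, assuming each LiDAR point carries calibrated intensities at a NIR and a red (or green) channel; the actual channel wavelengths depend on the sensor (some multispectral LiDAR systems use 1550/1064/532 nm), so the names here are placeholders.

```python
# Sketch: per-point NDVI-style index from two multispectral LiDAR intensity channels.
import numpy as np

def ndvi(nir_intensity, red_intensity, eps=1e-9):
    """Normalized difference of two intensity channels for each LiDAR point."""
    nir = np.asarray(nir_intensity, dtype=float)
    red = np.asarray(red_intensity, dtype=float)
    return (nir - red) / (nir + red + eps)

# After ground filtering, non-ground points could be split by a threshold, e.g.:
# vegetation_mask = ndvi(nir, red) > 0.3
```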
Status of the Landsat thematic mapper and multispectral scanner archive conversion system
Werner, Darla J.
1993-01-01
The U.S. Geological Survey's EROS Data Center (EDC) manages the National Satellite Land Remote Sensing Data Archive. This archive includes Landsat thematic mapper (TM) and multispectral scanner (MSS) data acquired since 1972. The Landsat archive is an important resource for global change research. To ensure long-term availability of Landsat data from the archive, the EDC specified requirements for a Thematic Mapper and Multispectral Scanner Archive Conversion System (TMACS) that would preserve the data by transcribing it to a more durable medium. The conversion hardware and software were installed at EDC in July 1992. In December 1992, the EDC began converting Landsat MSS data from high-density, open reel instrumentation tapes to digital cassette tapes. Almost 320,000 MSS images acquired since 1979 and more than 200,000 TM images acquired since 1982 will be converted to the new medium during the next 3 years. During the media conversion process, several high-density tapes have exhibited severe binder degradation. Even though these tapes have been stored in environmentally controlled conditions, hydrolysis has occurred, resulting in "sticky oxide shed". Using a thermostatically controlled oven built at EDC, tape "baking" has been 100 percent successful and actually improves the quality of some images.
A Comparative Study of Land Cover Classification by Using Multispectral and Texture Data
Qadri, Salman; Khan, Dost Muhammad; Ahmad, Farooq; Qadri, Syed Furqan; Babar, Masroor Ellahi; Shahid, Muhammad; Ul-Rehman, Muzammil; Razzaq, Abdul; Shah Muhammad, Syed; Fahad, Muhammad; Ahmad, Sarfraz; Pervez, Muhammad Tariq; Naveed, Nasir; Aslam, Naeem; Jamil, Mutiullah; Rehmani, Ejaz Ahmad; Ahmad, Nazir; Akhtar Khan, Naeem
2016-01-01
The main objective of this study is to assess the value of a machine vision approach for the classification of five land cover types: bare land, desert rangeland, green pasture, fertile cultivated land, and Sutlej river land. A novel spectral-statistical framework is designed to classify these land cover types accurately. Multispectral data for these land covers were acquired with a handheld multispectral radiometer in five spectral bands (blue, green, red, near infrared, and shortwave infrared), while texture data were acquired with a digital camera by transforming the acquired images into 229 texture features per image. The 30 most discriminant features of each image were obtained by integrating three statistical feature selection techniques: Fisher, Probability of Error plus Average Correlation, and Mutual Information (F + PA + MI). Clustering of the selected texture data was verified by nonlinear discriminant analysis, while a linear discriminant analysis approach was applied to the multispectral data. For classification, the texture and multispectral data were fed to an artificial neural network (ANN: n-class). Using an 80-20 cross-validation scheme, we obtained accuracies of 91.332% for texture data and 96.40% for multispectral data, respectively. PMID:27376088
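A hedged sketch of the classification stage described above follows: select the most discriminant features, then train a neural network with an 80/20 split. Mutual-information ranking stands in for the combined Fisher + POE+ACC + MI selection, and scikit-learn's MLPClassifier stands in for the ANN used by the authors.

```python
# Sketch: feature selection followed by ANN classification with an 80/20 split.
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def classify_land_cover(X, y, n_features=30):
    """X: (n_samples, n_texture_or_spectral_features), y: land cover labels."""
    X_sel = SelectKBest(mutual_info_classif, k=n_features).fit_transform(X, y)
    X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.2,
                                              stratify=y, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000,
                        random_state=0).fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))
```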
NASA Technical Reports Server (NTRS)
Trumbull, J. V. A. (Principal Investigator)
1975-01-01
The author has identified the following significant results. Three Skylab earth resources passes over Puerto Rico and St. Croix on 6 June and 30 November 1973 and 18 January 1974 resulted in color photography, multispectral photography, and scanner imagery. Bathymetric and turbid water features are differentiable by use of the multispectral data. The photography allows mapping of coral reefs, offshore sand deposits, areas of coastal erosion, and patterns of sediment transport. Bottom sediment types could not be differentiated. Patterns of bottom-dwelling biologic communities are well portrayed but are difficult to differentiate from bathymetric detail. Effluent discharges and oil slicks are readily detected and are differentiated from other phenomena by the persistence of their images into the longer wavelength multispectral bands.
Multispectral imaging reveals biblical-period inscription unnoticed for half a century
Cordonsky, Michael; Levin, David; Moinester, Murray; Sass, Benjamin; Turkel, Eli; Piasetzky, Eli; Finkelstein, Israel
2017-01-01
Most surviving biblical period Hebrew inscriptions are ostraca—ink-on-clay texts. They are poorly preserved and once unearthed, fade rapidly. Therefore, proper and timely documentation of ostraca is essential. Here we show a striking example of a hitherto invisible text on the back side of an ostracon revealed via multispectral imaging. This ostracon, found at the desert fortress of Arad and dated to ca. 600 BCE (the eve of Judah’s destruction by Nebuchadnezzar), has been on display for half a century. Its front side has been thoroughly studied, while its back side was considered blank. Our research revealed three lines of text on the supposedly blank side and four "new" lines on the front side. Our results demonstrate the need for multispectral image acquisition for both sides of all ancient ink ostraca. Moreover, in certain cases we recommend employing multispectral techniques for screening newly unearthed ceramic potsherds prior to disposal. PMID:28614416
Multispectral imaging reveals biblical-period inscription unnoticed for half a century.
Faigenbaum-Golovin, Shira; Mendel-Geberovich, Anat; Shaus, Arie; Sober, Barak; Cordonsky, Michael; Levin, David; Moinester, Murray; Sass, Benjamin; Turkel, Eli; Piasetzky, Eli; Finkelstein, Israel
2017-01-01
Most surviving biblical period Hebrew inscriptions are ostraca-ink-on-clay texts. They are poorly preserved and once unearthed, fade rapidly. Therefore, proper and timely documentation of ostraca is essential. Here we show a striking example of a hitherto invisible text on the back side of an ostracon revealed via multispectral imaging. This ostracon, found at the desert fortress of Arad and dated to ca. 600 BCE (the eve of Judah's destruction by Nebuchadnezzar), has been on display for half a century. Its front side has been thoroughly studied, while its back side was considered blank. Our research revealed three lines of text on the supposedly blank side and four "new" lines on the front side. Our results demonstrate the need for multispectral image acquisition for both sides of all ancient ink ostraca. Moreover, in certain cases we recommend employing multispectral techniques for screening newly unearthed ceramic potsherds prior to disposal.
Gaddis, L.R.; Kirk, R.L.; Johnson, J. R.; Soderblom, L.A.; Ward, A.W.; Barrett, J.; Becker, K.; Decker, T.; Blue, J.; Cook, D.; Eliason, E.; Hare, T.; Howington-Kraus, E.; Isbell, C.; Lee, E.M.; Redding, B.; Sucharski, R.; Sucharski, T.; Smith, P.H.; Britt, D.T.
1999-01-01
The Imager for Mars Pathfinder (IMP) acquired more than 16,000 images and provided panoramic views of the surface of Mars at the Mars Pathfinder landing site in Ares Vallis. This paper describes the stereoscopic, multispectral IMP imaging sequences and focuses on their use for digital mapping of the landing site and for deriving cartographic products to support science applications of these data. Two-dimensional cartographic processing of IMP data, as performed via techniques and specialized software developed for ISIS (the U.S. Geological Survey image processing software package), is emphasized. Cartographic processing of IMP data includes ingestion, radiometric correction, establishment of geometric control, coregistration of multiple bands, reprojection, and mosaicking. Photogrammetric processing, an integral part of this cartographic work which utilizes the three-dimensional character of the IMP data, supplements standard processing with geometric control and topographic information [Kirk et al., this issue]. Both cartographic and photogrammetric processing are required for producing seamless image mosaics and for coregistering the multispectral IMP data. Final, controlled IMP cartographic products include spectral cubes, panoramic (360° azimuthal coverage) and planimetric (top view) maps, and topographic data, to be archived on four CD-ROM volumes. Uncontrolled and semicontrolled versions of these products were used to support geologic characterization of the landing site during the nominal and extended missions. Controlled products have allowed determination of the topography of the landing site and environs out to ~60 m, and these data have been used to unravel the history of large- and small-scale geologic processes which shaped the observed landing site. We conclude by summarizing several lessons learned from cartographic processing of IMP data. Copyright 1999 by the American Geophysical Union.
Liu, Changhong; Liu, Wei; Lu, Xuzhong; Ma, Fei; Chen, Wei; Yang, Jianbo; Zheng, Lei
2014-01-01
Multispectral imaging with 19 wavelengths in the range of 405–970 nm has been evaluated for nondestructive determination of firmness, total soluble solids (TSS) content and ripeness stage in strawberry fruit. Several analysis approaches, including partial least squares (PLS), support vector machine (SVM) and back propagation neural network (BPNN), were applied to develop calibration models for predicting the firmness and TSS of intact strawberry fruit. Compared with PLS and SVM, BPNN considerably improved the performance of multispectral imaging for predicting firmness and total soluble solids content, with correlation coefficients (r) of 0.94 and 0.83, SEPs of 0.375 and 0.573, and biases of 0.035 and 0.056, respectively. Subsequently, the ability of multispectral imaging technology to classify fruit by ripeness stage was tested using SVM and principal component analysis-back propagation neural network (PCA-BPNN) models. A classification accuracy of 100% was achieved using the SVM model. Moreover, the results of all these models demonstrated that the VIS parts of the spectra were the main contributors to firmness determination, TSS content estimation and ripeness stage classification in strawberry fruit. These results suggest that multispectral imaging, together with a suitable analysis model, is a promising technology for rapid estimation of quality attributes and classification of ripeness stage in strawberry fruit. PMID:24505317
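The simplest of the three calibration models compared above (PLS) can be sketched as follows; the band count and component number are illustrative, and the BPNN that performed best in the study is not reproduced here.

```python
# Sketch: PLS calibration of band-averaged multispectral intensities against firmness.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

def calibrate_pls(spectra, firmness, n_components=8):
    """spectra: (n_fruit, 19) band means per fruit; firmness: reference values."""
    X_tr, X_te, y_tr, y_te = train_test_split(spectra, firmness,
                                              test_size=0.25, random_state=0)
    pls = PLSRegression(n_components=n_components).fit(X_tr, y_tr)
    y_hat = pls.predict(X_te).ravel()
    r = np.corrcoef(y_te, y_hat)[0, 1]           # correlation coefficient
    sep = np.std(y_te - y_hat, ddof=1)           # standard error of prediction
    return pls, r, sep
```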
NASA Astrophysics Data System (ADS)
Kim, Manjae; Kim, Sewoong; Hwang, Minjoo; Kim, Jihun; Je, Minkyu; Jang, Jae Eun; Lee, Dong Hun; Hwang, Jae Youn
2017-02-01
To date, the incidence rates of various skin diseases have increased due to hereditary and environmental factors including stress, irregular diet, pollution, etc. Among these skin diseases, seborrheic dermatitis and psoriasis are chronic, relapsing conditions involving infection and temporary alopecia. However, they typically exhibit similar symptoms, making them difficult to discriminate. To prevent complications and to select appropriate treatments, it is crucial to discriminate between seborrheic dermatitis and psoriasis with high specificity and sensitivity, and further to monitor the skin lesions continuously and quantitatively during treatment at locations other than a hospital. Thus, we here demonstrate a mobile multispectral imaging system connected to a smartphone for self-diagnosis of seborrheic dermatitis and further discrimination between seborrheic dermatitis and psoriasis on the scalp, which is the more challenging case. Using the system developed, multispectral imaging and analysis of seborrheic dermatitis and psoriasis on the scalp was carried out. It was found that the spectral signatures of seborrheic dermatitis and psoriasis were discernible, so that seborrheic dermatitis on the scalp could be distinguished from psoriasis using the system. In particular, the smartphone-based multispectral imaging and analysis offered better discrimination between seborrheic dermatitis and psoriasis than RGB imaging and analysis. These results suggest that the multispectral imaging system based on a smartphone has the potential for self-diagnosis of seborrheic dermatitis with high portability and specificity.
Application and evaluation of ISVR method in QuickBird image fusion
NASA Astrophysics Data System (ADS)
Cheng, Bo; Song, Xiaolu
2014-05-01
QuickBird satellite images are widely used in many fields, and applications place high demands on the integration of the imagery's spatial and spectral information. A fusion method for high resolution remote sensing images based on ISVR is evaluated in this study. The core principle of ISVR is to exploit the radiance transformation to remove the effects of different gains and errors of the satellite's sensors. After transformation from DN to radiance, the multi-spectral image's energy is used to simulate the panchromatic band. Linear regression analysis is carried out in the simulation process to obtain a new synthetic panchromatic image that is highly linearly correlated with the original panchromatic image. In order to evaluate, test and compare the algorithm results, this paper used ISVR and two other fusion methods in a comparative study of spatial and spectral information, taking the average gradient and the correlation coefficient as indicators. Experiments showed that this method can significantly improve the quality of the fused image, especially in preserving the spectral information of the original multispectral images, while maintaining abundant spatial information.
NASA Astrophysics Data System (ADS)
Hinnrichs, Michele
2012-06-01
Diffractive micro-lenses configured in an array and placed in close proximity to the focal plane array enable a small, compact, simultaneous multispectral imaging camera. This approach can be applied to spectral regions from the ultraviolet (UV) to the long-wave infrared (LWIR). The number of simultaneously imaged spectral bands is determined by the number of individually configured diffractive optical micro-lenses (lenslets) in the array. Each lenslet images at a different wavelength determined by the blaze and set at the time of manufacturing based on the application. In addition, modulation of the focal length of the lenslet array with piezoelectric or electro-static actuation will enable spectral band fill-in, allowing hyperspectral imaging. Using the lenslet array with dual-band detectors will increase the number of simultaneous spectral images by a factor of two when utilizing multiple diffraction orders. Configurations and concept designs will be presented for detection applications for biological/chemical agents, buried IEDs and reconnaissance. The simultaneous detection of multiple spectral images in a single frame of data enhances the image processing capability by eliminating temporal differences between colors and enabling a handheld instrument that is insensitive to motion.
Omucheni, Dickson L; Kaduki, Kenneth A; Bulimo, Wallace D; Angeyo, Hudson K
2014-12-11
Multispectral imaging microscopy is a novel microscopic technique that integrates spectroscopy with optical imaging to record both spectral and spatial information of a specimen. This enables acquisition of a larger and more informative dataset than is achievable in conventional optical microscopy. However, such data are characterized by high signal correlation and are difficult to interpret using univariate data analysis techniques. In this work, the development and application of a novel method which uses principal component analysis (PCA) in the processing of spectral images obtained from a simple multispectral-multimodal imaging microscope to detect Plasmodium parasites in unstained thin blood smears for malaria diagnostics is reported. The optical microscope used in this work has been modified by replacing the broadband light source (tungsten halogen lamp) with a set of light emitting diodes (LEDs) emitting thirteen different wavelengths of monochromatic light in the UV-vis-NIR range. The LEDs are activated sequentially to illuminate the same spot of the unstained thin blood smears on glass slides, and grey level images are recorded at each wavelength. PCA was used to perform data dimensionality reduction and to enhance score images for visualization as well as for feature extraction through clusters in score space. Using this approach, haemozoin was uniquely distinguished from haemoglobin in unstained thin blood smears on glass slides, and the 590-700 nm spectral range was identified as an important band for optical imaging of haemozoin as a biomarker for malaria diagnosis. This work is of great significance in reducing the time spent on staining malaria specimens and thus drastically reducing diagnosis time.
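A minimal sketch of the PCA step described above: the grey-level images acquired at the thirteen LED wavelengths are stacked into a pixel-by-band matrix and projected onto principal components, giving score images in which haemozoin and haemoglobin separate. The band count follows the text; the data handling details are assumptions.

```python
# Sketch: PCA score images from a multispectral microscope image stack.
import numpy as np

def pca_score_images(stack, n_components=3):
    """stack: (n_bands, H, W) grey-level images; returns (n_components, H, W)."""
    n_bands, H, W = stack.shape
    X = stack.reshape(n_bands, -1).T.astype(float)        # pixels x bands
    X -= X.mean(axis=0)
    cov = np.cov(X, rowvar=False)                         # band-to-band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]      # leading components
    scores = X @ eigvecs[:, order]                        # pixels x components
    return scores.T.reshape(n_components, H, W)
```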
Multispectral and colour analysis for ubiquinone solutions and biological samples
NASA Astrophysics Data System (ADS)
Timofeeva, Elvira O.; Gorbunova, Elena V.; Chertov, Aleksandr N.
2017-02-01
Oxidative damage in cell structures underlies most mechanisms that lead to disease and senescence of the human body. The presence of antioxidant issues, such as redox potential imbalance, in the human body is a very important question for modern clinical diagnostics. Applying multispectral and colour analysis of human skin to the optical diagnostics of ubiquinone, an antioxidant widely distributed in the human body, can be one step towards developing a device for clinical diagnostics of redox potential or for quality control of cosmetics. In the current research, multispectral images of the hand skin were recorded with a monochromatic camera and a set of coloured filters. The recorded multispectral data were processed using principal component analysis. Colour characteristics of the skin before and after treatment with a facial mask containing ubiquinone were also calculated. The results of the mask treatment were compared with treatment using an oily ubiquinone solution. Although the results did not give a clear answer as to whether the skin was healthy or stressed by reactive oxygen species, the methods described in this research can identify how the skin surface changes after antioxidant treatment. In the future it will be important to carry out biomedical tests alongside the optical tests of the human skin.
Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.
2002-01-01
Enhanced false color images from mid-IR, near-IR (NIR), and visible bands of the Landsat thematic mapper (TM) are commonly used for visually interpreting land cover type. Described here is a technique for sharpening or fusion of NIR with higher resolution panchromatic (Pan) that uses a shift-invariant implementation of the discrete wavelet transform (SIDWT) and a reported pixel-based selection rule to combine coefficients. There can be contrast reversals (e.g., at soil-vegetation boundaries between NIR and visible band images) and consequently degraded sharpening and edge artifacts. To improve performance for these conditions, I used a local area-based correlation technique originally reported for comparing image-pyramid-derived edges for the adaptive processing of wavelet-derived edge data. Also, using the redundant data of the SIDWT improves edge data generation. There is additional improvement because sharpened subband imagery is used with the edge-correlation process. A reported technique for sharpening three-band spectral imagery used forward and inverse intensity, hue, and saturation transforms and wavelet-based sharpening of intensity. This technique had limitations with opposite contrast data, and in this study sharpening was applied to single-band multispectral-Pan image pairs. Sharpening used simulated 30-m NIR imagery produced by degrading the spatial resolution of a higher resolution reference. Performance, evaluated by comparison between sharpened and reference image, was improved when sharpened subband data were used with the edge correlation.
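A hedged sketch of shift-invariant wavelet fusion in the spirit of the method above: decompose the (upsampled) NIR band and the Pan band with the stationary wavelet transform, keep the NIR approximation, take the larger-magnitude detail coefficient at each position, and invert. This reproduces only the basic pixel-based selection rule; the local-correlation edge weighting described in the abstract is not included.

```python
# Sketch: stationary (shift-invariant) wavelet fusion with max-absolute detail selection.
import numpy as np
import pywt

def swt_fuse(nir, pan, wavelet="db2", level=2):
    """nir, pan: float arrays of equal shape, dimensions divisible by 2**level."""
    c_nir = pywt.swt2(nir, wavelet, level=level)
    c_pan = pywt.swt2(pan, wavelet, level=level)
    fused = []
    for (a_n, d_n), (a_p, d_p) in zip(c_nir, c_pan):
        details = tuple(np.where(np.abs(dn) >= np.abs(dp), dn, dp)
                        for dn, dp in zip(d_n, d_p))
        fused.append((a_n, details))             # keep the NIR approximation band
    return pywt.iswt2(fused, wavelet)
```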
Optimal wavelength band clustering for multispectral iris recognition.
Gong, Yazhuo; Zhang, David; Shi, Pengfei; Yan, Jingqi
2012-07-01
This work explores the possibility of clustering spectral wavelengths based on the maximum dissimilarity of iris textures. The eventual goal is to determine how many bands of spectral wavelengths will be enough for iris multispectral fusion and to find the bands that will provide higher performance of iris multispectral recognition. A multispectral acquisition system was first designed for imaging the iris at narrow spectral bands in the range of 420 to 940 nm. Next, a set of 60 human iris images corresponding to the right and left eyes of 30 different subjects was acquired for analysis. Finally, we determined that 3 clusters were enough to represent the 10 feature bands of spectral wavelengths, using agglomerative clustering based on two-dimensional principal component analysis. The experimental results suggest (1) the number, center, and composition of the clusters of spectral wavelengths and (2) the higher performance of iris multispectral recognition based on fusion of three wavelength bands.
Feng, Lei; Fang, Hui; Zhou, Wei-Jun; Huang, Min; He, Yong
2006-09-01
Site-specific variable nitrogen application is one of the major precision crop production management operations. Obtaining sufficient crop nitrogen stress information is essential for achieving effective site-specific nitrogen applications. The present paper describes the development of a multi-spectral nitrogen deficiency sensor, which uses three channels (green, red, near-infrared) of crop images to determine the nitrogen level of canola. The sensor assesses nitrogen stress by estimating the SPAD value of the canola from canopy reflectance sensed in the three channels (green, red, near-infrared) of the multi-spectral camera. The core of this investigation is the calibration between the multi-spectral measurements and the crop nitrogen levels measured with a SPAD 502 chlorophyll meter. Based on the results obtained from this study, it can be concluded that a multi-spectral CCD camera can provide sufficient information to perform reasonable SPAD value estimation during field operations.
Effects of spatial resolution ratio in image fusion
Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M.
2008-01-01
In image fusion, the spatial resolution ratio can be defined as the ratio between the spatial resolution of the high-resolution panchromatic image and that of the low-resolution multispectral image. This paper attempts to assess the effects of the spatial resolution ratio of the input images on the quality of the fused image. Experimental results indicate that a spatial resolution ratio of 1:10 or higher is desired for optimal multisensor image fusion provided the input panchromatic image is not downsampled to a coarser resolution. Due to the synthetic pixels generated from resampling, the quality of the fused image decreases as the spatial resolution ratio decreases (e.g. from 1:10 to 1:30). However, even with a spatial resolution ratio as small as 1:30, the quality of the fused image is still better than the original multispectral image alone for feature interpretation. In cases where the spatial resolution ratio is too small (e.g. 1:30), to obtain better spectral integrity of the fused image, one may downsample the input high-resolution panchromatic image to a slightly lower resolution before fusing it with the multispectral image.
NASA Astrophysics Data System (ADS)
Krauß, T.
2014-11-01
The focal plane assembly of most pushbroom scanner satellites is built in such a way that the different multispectral, or multispectral and panchromatic, bands are not all acquired at exactly the same time. This is due to offsets of a few millimeters between the CCD lines in the focal plane. Exploiting this special configuration allows the detection of objects moving during this small time span. In this paper we present a method for automatic detection and extraction of moving objects - mainly traffic - from single very high resolution optical satellite images of different sensors. The sensors investigated are WorldView-2, RapidEye, Pléiades and also the new SkyBox satellites. Different sensors require different approaches for detecting moving objects. Since the objects are mapped to different positions only in different spectral bands, the change of spectral properties also has to be taken into account. Where the main offset in the focal plane lies between the multispectral and the panchromatic CCD lines, as for Pléiades, an approach using weighted integration to obtain nearly identical images is investigated. Other approaches for RapidEye and WorldView-2 are also shown. From these intermediate bands, difference images are calculated, and a method for detecting the moving objects from these difference images is proposed. Based on the presented methods, images from different sensors are processed and the results are assessed for detection quality (how many moving objects are detected and how many are missed) and accuracy (how accurately the speed and size of the objects are derived). Finally the results are discussed and an outlook on possible improvements towards operational processing is presented.
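A simplified sketch of the detection idea described above: after resampling two spectral channels acquired a fraction of a second apart onto the same grid, a normalized difference highlights objects that moved during the acquisition gap. Thresholds and preprocessing are illustrative; the sensor-specific weighted band integration discussed in the paper is not reproduced.

```python
# Sketch: moving-object candidates from the difference of two time-offset bands.
import numpy as np

def moving_object_mask(band_early, band_late, threshold=0.15, eps=1e-6):
    """band_early/late: co-registered, radiometrically comparable float images."""
    a = band_early / (band_early.mean() + eps)   # crude radiometric normalization
    b = band_late / (band_late.mean() + eps)
    diff = np.abs(a - b) / (a + b + eps)
    return diff > threshold                      # candidate moving-object pixels
```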
Distant Determination of Bilirubin Distribution in Skin by Multi-Spectral Imaging
NASA Astrophysics Data System (ADS)
Saknite, I.; Jakovels, D.; Spigulis, J.
2011-01-01
For mapping the bilirubin distribution in bruised skin the multi-spectral imaging technique was employed, which made it possible to observe temporal changes of the bilirubin content in skin photo-types II and III. The obtained results confirm the clinical potential of this technique for skin bilirubin diagnostics.
Multispectral fluorescence image algorithms for detection of frass on mature tomatoes
USDA-ARS?s Scientific Manuscript database
A multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet LED excitation was developed for the detection of frass contamination on mature tomatoes. The algorithm utilized the fluorescence intensities at five wavebands, 515 nm, 640 nm, 664 nm, 690 nm, and 724 nm...
Analysis of variograms with various sample sizes from a multispectral image
USDA-ARS?s Scientific Manuscript database
Variograms play a crucial role in remote sensing application and geostatistics. In this study, the analysis of variograms with various sample sizes of remotely sensed data was conducted. A 100 X 100 pixel subset was chosen from an aerial multispectral image which contained three wavebands, green, ...
NASA Astrophysics Data System (ADS)
Miecznik, Grzegorz; Shafer, Jeff; Baugh, William M.; Bader, Brett; Karspeck, Milan; Pacifici, Fabio
2017-05-01
WorldView-3 (WV-3) is a DigitalGlobe commercial, high resolution, push-broom imaging satellite with three instruments: visible and near-infrared VNIR consisting of panchromatic (0.3m nadir GSD) plus multi-spectral (1.2m), short-wave infrared SWIR (3.7m), and multi-spectral CAVIS (30m). Nine VNIR bands, which are on one instrument, are nearly perfectly registered to each other, whereas eight SWIR bands, belonging to the second instrument, are misaligned with respect to VNIR and to each other. Geometric calibration and ortho-rectification result in a VNIR/SWIR alignment which is accurate to approximately 0.75 SWIR pixel at 3.7m GSD, whereas inter-SWIR band-to-band registration is 0.3 SWIR pixel. Numerous high resolution spectral applications, such as object classification and material identification, require more accurate registration, which can be achieved by utilizing image processing algorithms, for example Mutual Information (MI). Although MI-based co-registration algorithms are highly accurate, implementation details for automated processing can be challenging. One particular challenge is how to compute the bin widths of the intensity histograms, which are fundamental building blocks of MI. We solve this problem by making the bin widths proportional to instrument shot noise. Next, we show how to take advantage of multiple VNIR bands and improve registration sensitivity to image alignment. To meet this goal, we employ Canonical Correlation Analysis, which maximizes VNIR/SWIR correlation through an optimal linear combination of VNIR bands. Finally we explore how to register images corresponding to different spatial resolutions. We show that MI computed at a low-resolution grid is more sensitive to alignment parameters than MI computed at a high-resolution grid. The proposed modifications allow us to improve VNIR/SWIR registration to better than ¼ of a SWIR pixel, as long as terrain elevation is properly accounted for, and clouds and water are masked out.
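The mutual-information similarity measure discussed above can be sketched as follows, with the histogram bin width tied to an assumed shot-noise level as the text suggests; the constant of proportionality is an illustrative choice, not the one used by the authors.

```python
# Sketch: mutual information of two images with noise-scaled histogram bins.
import numpy as np

def mutual_information(img_a, img_b, noise_sigma_a, noise_sigma_b, k=2.0):
    a, b = img_a.ravel().astype(float), img_b.ravel().astype(float)
    bins_a = max(8, int(np.ptp(a) / (k * noise_sigma_a)))   # bin width ~ k * sigma
    bins_b = max(8, int(np.ptp(b) / (k * noise_sigma_b)))
    joint, _, _ = np.histogram2d(a, b, bins=(bins_a, bins_b))
    p = joint / joint.sum()
    pa, pb = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (pa @ pb)[nz])))
```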
Wachman, Elliot S; Geyer, Stanley J; Recht, Joel M; Ward, Jon; Zhang, Bill; Reed, Murray; Pannell, Chris
2014-05-01
An acousto-optic tunable filter (AOTF)-based multispectral imaging microscope system allows the combination of cellular morphology and multiple biomarker stainings on a single microscope slide. We describe advances in AOTF technology that have greatly improved spectral purity, field uniformity, and image quality. A multispectral imaging bright field microscope using these advances demonstrates pathology results that have great potential for clinical use.
Automated simultaneous multiple feature classification of MTI data
NASA Astrophysics Data System (ADS)
Harvey, Neal R.; Theiler, James P.; Balick, Lee K.; Pope, Paul A.; Szymanski, John J.; Perkins, Simon J.; Porter, Reid B.; Brumby, Steven P.; Bloch, Jeffrey J.; David, Nancy A.; Galassi, Mark C.
2002-08-01
Los Alamos National Laboratory has developed and demonstrated a highly capable system, GENIE, for the two-class problem of detecting a single feature against a background of non-feature. In addition to the two-class case, however, a commonly encountered remote sensing task is the segmentation of multispectral image data into a larger number of distinct feature classes or land cover types. To this end we have extended our existing system to allow the simultaneous classification of multiple features/classes from multispectral data. The technique builds on previous work and its core continues to utilize a hybrid evolutionary-algorithm-based system capable of searching for image processing pipelines optimized for specific image feature extraction tasks. We describe the improvements made to the GENIE software to allow multiple-feature classification and describe the application of this system to the automatic simultaneous classification of multiple features from MTI image data. We show the application of the multiple-feature classification technique to the problem of classifying lava flows on Mauna Loa volcano, Hawaii, using MTI image data and compare the classification results with standard supervised multiple-feature classification techniques.
Geometric calibration of lens and filter distortions for multispectral filter-wheel cameras.
Brauers, Johannes; Aach, Til
2011-02-01
High-fidelity color image acquisition with a multispectral camera utilizes optical filters to separate the visible electromagnetic spectrum into several passbands. This is often realized with a computer-controlled filter wheel, where each position is equipped with an optical bandpass filter. For each filter wheel position, a grayscale image is acquired and the passbands are finally combined to a multispectral image. However, the different optical properties and non-coplanar alignment of the filters cause image aberrations since the optical path is slightly different for each filter wheel position. As in a normal camera system, the lens causes additional wavelength-dependent image distortions called chromatic aberrations. When transforming the multispectral image with these aberrations into an RGB image, color fringes appear, and the image exhibits a pincushion or barrel distortion. In this paper, we address both the distortions caused by the lens and by the filters. Based on a physical model of the bandpass filters, we show that the aberrations caused by the filters can be modeled by displaced image planes. The lens distortions are modeled by an extended pinhole camera model, which results in a remaining mean calibration error of only 0.07 pixels. Using an absolute calibration target, we then geometrically calibrate each passband and compensate for both lens and filter distortions simultaneously. We show that both types of aberrations can be compensated and present detailed results on the remaining calibration errors.
Image processing using Gallium Arsenide (GaAs) technology
NASA Technical Reports Server (NTRS)
Miller, Warner H.
1989-01-01
The demand for increased information return from space-borne imaging systems has grown in the past decade. The use of multi-spectral data has resulted in the need for finer spatial resolution and greater spectral coverage. Onboard signal processing will be necessary in order to utilize the available Tracking and Data Relay Satellite System (TDRSS) communication channel at high efficiency. A generally recognized approach to increasing the efficiency of channel usage is data compression. The compression technique implemented is a differential pulse code modulation (DPCM) scheme with a non-uniform quantizer. The need to advance the state-of-the-art of onboard processing was recognized and a GaAs integrated circuit technology was chosen. An Adaptive Programmable Processor (APP) chip set was developed which is based on an 8-bit slice general processor. The reason for choosing the compression technique for the Multi-spectral Linear Array (MLA) instrument is described. Also a description is given of the GaAs integrated circuit chip set, which will demonstrate that data compression can be performed onboard in real time at data rates on the order of 500 Mb/s.
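A minimal sketch of predictive coding in the spirit described above follows: previous-pixel DPCM along each image line with a small non-uniform quantizer. The quantizer levels are illustrative; the flight implementation and its GaAs hardware mapping are of course not represented here.

```python
# Sketch: previous-pixel DPCM with a non-uniform quantizer (coarser far from zero).
import numpy as np

LEVELS = np.array([-48, -24, -10, -3, 0, 3, 10, 24, 48], dtype=float)

def dpcm_encode_decode(line):
    """line: 1-D array of pixel values; returns (quantizer indices, reconstruction)."""
    recon = np.empty(len(line), dtype=float)
    idx = np.empty(len(line), dtype=int)
    prev = 0.0
    for i, x in enumerate(line.astype(float)):
        err = x - prev                                  # prediction error
        idx[i] = int(np.argmin(np.abs(LEVELS - err)))   # nearest quantizer level
        recon[i] = prev + LEVELS[idx[i]]                # decoder reconstruction
        prev = recon[i]                                 # predictor tracks the decoder
    return idx, recon
```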
NASA Technical Reports Server (NTRS)
1977-01-01
Application and processing of remotely sensed data are discussed. Areas of application include: pollution monitoring, water quality, land use, marine resources, ocean surface properties, and agriculture. Image processing and scene analysis are described along with automated photointerpretation and classification techniques. Data from infrared and multispectral band scanners onboard LANDSAT satellites are emphasized.
Spectrum slicer for snapshot spectral imaging
NASA Astrophysics Data System (ADS)
Tamamitsu, Miu; Kitagawa, Yutaro; Nakagawa, Keiichi; Horisaki, Ryoichi; Oishi, Yu; Morita, Shin-ya; Yamagata, Yutaka; Motohara, Kentaro; Goda, Keisuke
2015-12-01
We propose and demonstrate an optical component that overcomes critical limitations of our previously demonstrated high-speed multispectral videography, a method in which an array of periscopes placed in a prism-based spectral shaper is used to achieve snapshot multispectral imaging with the frame rate limited only by that of the image-recording sensor. The demonstrated optical component consists of a slicing mirror incorporated into a 4f-relaying lens system that we refer to as a spectrum slicer (SS). With its simple design, we can easily increase the number of spectral channels without adding fabrication complexity while preserving the capability of high-speed multispectral videography. We present a theoretical framework for the SS and demonstrate its experimental utility for spectral imaging by showing real-time monitoring of a dynamic colorful event through five different visible windows.
CRISM's Global Mapping of Mars, Part 1
NASA Technical Reports Server (NTRS)
2007-01-01
After a year in Mars orbit, CRISM has taken enough images to allow the team to release the first parts of a global spectral map of Mars to the Planetary Data System (PDS), NASA's digital library of planetary data. CRISM's global mapping is called the 'multispectral survey.' The team uses the word 'survey' because a reason for gathering this data set is to search for new sites for targeted observations, high-resolution views of the surface at 18 meters per pixel in 544 colors. Another reason for the multispectral survey is to provide contextual information. Targeted observations have such a large data volume (about 200 megabytes apiece) that only about 1% of Mars can be imaged at CRISM's highest resolution. The multispectral survey is a lower data volume type of observation that fills in the gaps between targeted observations, allowing scientists to better understand their geologic context. The global map is built from tens of thousands of image strips each about 10 kilometers (6.2 miles) wide and thousands of kilometers long. During the multispectral survey, CRISM returns data from only 72 carefully selected wavelengths that cover absorptions indicative of the mineral groups that CRISM is looking for on Mars. Data volume is further decreased by binning image pixels inside the instrument to a scale of about 200 meters (660 feet) per pixel. The total reduction in data volume per square kilometer is a factor of 700, making the multispectral survey manageable to acquire and transmit to Earth. Once on the ground, the strips of data are mosaicked into maps. The multispectral survey is too large to show the whole planet in a single map, so the map is divided into 1,964 'tiles,' each about 300 kilometers (186 miles) across. There are three versions of each tile, processed to progressively greater levels to strip away the obscuring effects of the dusty atmosphere and to highlight mineral variations in surface materials. This is the first version of tile 750, one of 209 tiles just delivered to the PDS. It shows a part of the planet called Tyrrhena Terra in the ancient, heavily cratered highlands. The colored strips are CRISM multispectral survey data acquired over several months, in which each pixel has a calibrated 72-color spectrum of Mars. The three wavelengths shown are 2.53, 1.50, and 1.08 micrometers in the red, green, and blue image planes respectively. At these wavelengths, rocky areas appear brown, dusty areas appear tan, and regions with hazy atmosphere appear bluish. Note that there is a large difference in brightness between strips, because there is no correction for the lighting conditions at the time of each observation. The gray areas between the strips are from an earlier mosaic of the planet taken by the Thermal Emission Imaging System (THEMIS) instrument on Mars Odyssey, and are included only for context. Ultimately the multispectral survey will cover nearly all of this area. CRISM is one of six science instruments on NASA's Mars Reconnaissance Orbiter. Led by The Johns Hopkins University Applied Physics Laboratory, Laurel, Md., the CRISM team includes expertise from universities, government agencies and small businesses in the United States and abroad. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter and the Mars Science Laboratory for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, built the orbiter.
Nam, Hyeong Soo; Kang, Woo Jae; Lee, Min Woo; Song, Joon Woo; Kim, Jin Won; Oh, Wang-Yuhl; Yoo, Hongki
2018-01-01
The pathophysiological progression of chronic diseases, including atherosclerosis and cancer, is closely related to compositional changes in biological tissues containing endogenous fluorophores such as collagen, elastin, and NADH, which exhibit strong autofluorescence under ultraviolet excitation. Fluorescence lifetime imaging (FLIm) provides robust detection of the compositional changes by measuring fluorescence lifetime, which is an inherent property of a fluorophore. In this paper, we present a dual-modality system combining a multispectral analog-mean-delay (AMD) FLIm and a high-speed swept-source optical coherence tomography (OCT) to simultaneously visualize the cross-sectional morphology and biochemical compositional information of a biological tissue. Experiments using standard fluorescent solutions showed that the fluorescence lifetime could be measured with a precision of less than 40 psec using the multispectral AMD-FLIm without averaging. In addition, we performed ex vivo imaging on rabbit iliac normal-looking and atherosclerotic specimens to demonstrate the feasibility of the combined FLIm-OCT system for atherosclerosis imaging. We expect that the combined FLIm-OCT will be a promising next-generation imaging technique for diagnosing atherosclerosis and cancer due to the advantages of the proposed label-free high-precision multispectral lifetime measurement. PMID:29675330
USDA-ARS?s Scientific Manuscript database
The amount of visible and near infrared light reflected by plants varies depending on their health. In this study, multispectral images were acquired by quadcopter for detecting tomato spot wilt virus amongst twenty genetic varieties of peanuts. The plants were visually assessed to acquire ground ...
USDA-ARS?s Scientific Manuscript database
This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One came...
NASA Astrophysics Data System (ADS)
Setiyoko, A.; Dharma, I. G. W. S.; Haryanto, T.
2017-01-01
Multispectral and hyperspectral data acquired from satellite sensors have the ability to detect various objects on the Earth, supporting modeling from low to high scales. These data are increasingly being used to produce geospatial information for rapid analysis by running feature extraction or classification processes. Applying the most suitable model for this data mining is still challenging because of issues regarding accuracy and computational cost. The aim of this research is to develop a better understanding of object feature extraction and classification applied to satellite imagery by systematically reviewing related recent research projects. The method used in this research is based on the PRISMA statement. After deriving the important points from trusted sources, pixel-based and texture-based feature extraction techniques emerge as the most promising techniques for further analysis in recent developments of feature extraction and classification.
NASA Technical Reports Server (NTRS)
Sabine, Charles; Realmuto, Vincent J.; Taranik, James V.
1994-01-01
We have produced images that quantitatively depict modal and chemical parameters of granitoids using an image processing algorithm called MINMAP that fits Gaussian curves to normalized emittance spectra recovered from thermal infrared multispectral scanner (TIMS) radiance data. We applied the algorithm to TIMS data from the Desolation Wilderness, an extensively glaciated area near the northern end of the Sierra Nevada batholith that is underlain by Jurassic and Cretaceous plutons that range from diorite and anorthosite to leucogranite. The wavelength corresponding to the calculated emittance minimum, λ_min, varies linearly with quartz content, SiO2, and other modal and chemical parameters. Thematic maps of quartz and silica content derived from λ_min values distinguish bodies of diorite from surrounding granite, identify outcrops of anorthosite, and separate felsic, intermediate, and mafic rocks.
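A minimal sketch, not the MINMAP code, of fitting an inverted Gaussian to a six-band normalized emittance spectrum to locate λ_min; the band centres are approximate TIMS values and the initial guesses are arbitrary.

```python
# Illustrative only: fit an inverted Gaussian to a six-band emittance
# spectrum and report the fitted wavelength of the emittance minimum.
import numpy as np
from scipy.optimize import curve_fit

TIMS_WAVELENGTHS = np.array([8.3, 8.7, 9.1, 9.8, 10.7, 11.7])  # micrometres, approximate band centres

def inverted_gaussian(lam, depth, lam_min, width, baseline):
    return baseline - depth * np.exp(-((lam - lam_min) ** 2) / (2 * width ** 2))

def fit_lambda_min(emittance):
    """Return lambda_min (micrometres) from a Gaussian fit to one spectrum."""
    p0 = [0.05, TIMS_WAVELENGTHS[np.argmin(emittance)], 0.5, emittance.max()]
    popt, _ = curve_fit(inverted_gaussian, TIMS_WAVELENGTHS, emittance, p0=p0)
    return popt[1]
```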
Polarimetric Multispectral Imaging Technology
NASA Technical Reports Server (NTRS)
Cheng, L.-J.; Chao, T.-H.; Dowdy, M.; Mahoney, C.; Reyes, G.
1993-01-01
The Jet Propulsion Laboratory is developing a remote sensing technology on which a new generation of compact, lightweight, high-resolution, low-power, reliable, versatile, programmable scientific polarimetric multispectral imaging instruments can be built to meet the challenge of future planetary exploration missions. The instrument is based on the fast programmable acousto-optic tunable filter (AOTF) of tellurium dioxide (TeO2) that operates in the wavelength range of 0.4-5 microns. Basically, the AOTF multispectral imaging instrument measures incoming light intensity as a function of spatial coordinates, wavelength, and polarization. It can operate in sequential, random-access, or multiwavelength mode as required. This provides observation flexibility, allowing real-time alternation among desired observations, collecting only the data needed, minimizing data transmission, and permitting implementation of new experiments. These capabilities allow mission performance to be optimized with minimal resources. Recently we completed a polarimetric multispectral imaging prototype instrument and performed outdoor field experiments for evaluating application potentials of the technology. We also investigated potential improvements in AOTF performance to strengthen technology readiness for applications. This paper gives a status report on the technology and prospects for future planetary exploration.
Fluorescence multispectral imaging-based diagnostic system for atherosclerosis.
Ho, Cassandra Su Lyn; Horiuchi, Toshikatsu; Taniguchi, Hiroaki; Umetsu, Araya; Hagisawa, Kohsuke; Iwaya, Keiichi; Nakai, Kanji; Azmi, Amalina; Zulaziz, Natasha; Azhim, Azran; Shinomiya, Nariyoshi; Morimoto, Yuji
2016-08-20
The composition of atherosclerotic arterial walls is rich in lipids such as cholesterol, unlike that of normal arterial walls. In this study, we aimed to utilize this difference to diagnose atherosclerosis via multispectral fluorescence imaging, which allows identification of fluorescence originating from substances in the arterial wall. The inner surface of extracted arteries (rabbit abdominal aorta, human coronary artery) was illuminated by 405 nm excitation light and multispectral fluorescence images were obtained. Pathological examination of the human coronary artery samples was carried out, and arterial thickness was calculated as the combined media and intima thickness. The fluorescence spectra in atherosclerotic sites were different from those in normal sites. Multiple regions of interest (ROI) were selected within each sample and a ratio between two fluorescence intensity differences (where each intensity difference is calculated between an identifier wavelength and a base wavelength) from each ROI was determined, allowing discrimination of atherosclerotic sites. Fluorescence intensity and arterial thickness were found to be significantly correlated. These results indicate that multispectral fluorescence imaging provides qualitative and quantitative evaluations of atherosclerosis and is therefore a viable method of diagnosing the disease.
NASA Astrophysics Data System (ADS)
Bernat, Amir S.; Bar-Am, Kfir; Cataldo, Leigh; Bolton, Frank J.; Kahn, Bruce S.; Levitz, David
2018-02-01
Cervical cancer is a leading cause of death for women in low-resource settings. In order to better detect cervical dysplasia, a low-cost multi-spectral colposcope was developed utilizing low-cost LEDs and an area-scan camera. The device is capable of both traditional colposcopic imaging and multi-spectral image capture. Following initial bench testing, the device was deployed to a gynecology clinic where it was used to image patients in a colposcopy setting. Both traditional colposcopic images and spectral data from patients were uploaded to a cloud server for remote analysis. Multi-spectral imaging (a 30-second capture) took place before any clinical procedure; the standard of care was followed thereafter. If acetic acid was used in the standard of care, a post-acetowhitening colposcopic image was also captured. In analyzing the data, normal and abnormal regions were identified in the colposcopic images by an expert clinician. Spectral data were fit to a theoretical model based on diffusion theory, yielding information on scattering and absorption parameters. Data were grouped according to clinician labeling of the tissue, as well as any additional clinical test results available (Pap, HPV, biopsy). Altogether, N=20 patients were imaged in this study, 9 of them abnormal. In comparing normal and abnormal regions of interest from patients, substantial differences were measured in blood content, while differences in oxygen saturation parameters were more subtle. These results suggest that optical measurements made using low-cost spectral imaging systems can distinguish between normal and pathological tissues.
Pre-Flight Radiometric Model of Linear Imager on LAPAN-IPB Satellite
NASA Astrophysics Data System (ADS)
Hadi Syafrudin, A.; Salaswati, Sartika; Hasbi, Wahyudi
2018-05-01
The LAPAN-IPB satellite is a microsatellite-class spacecraft with a remote sensing experiment mission. The satellite carries a multispectral line imager to capture radiometric reflectance values of the Earth from space. The radiometric quality of the imagery is an important factor for object classification in remote sensing. Before launch (pre-flight), the line imager was tested with a monochromator and an integrating sphere to characterize its spectral response and the radiometric response of every pixel. Pre-flight test data acquired under a variety of imager settings were used to examine the correlation between input radiance and the digital numbers of the output images. This input-output correlation is described by a radiance conversion model that incorporates the imager settings and radiometric characteristics. The modelling process, from the hardware level up to the normalized radiance formula, is presented and discussed in this paper.
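A minimal sketch of the kind of linear radiance-conversion model such a pre-flight characterization typically yields; the per-pixel gain/offset arrays and the exposure-setting normalization are placeholders, not the LAPAN-IPB coefficients.

```python
import numpy as np

def dn_to_radiance(dn, gain, offset, exposure_scale=1.0):
    """Convert raw digital numbers to at-sensor radiance with a per-detector
    linear model L = gain * DN + offset, normalized by an imager setting.

    dn: (rows, cols) raw image; gain, offset: (cols,) coefficients derived
    from integrating-sphere measurements (illustrative shapes only).
    """
    return (gain * dn.astype(np.float64) + offset) / exposure_scale
```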
Atmospheric correction for remote sensing image based on multi-spectral information
NASA Astrophysics Data System (ADS)
Wang, Yu; He, Hongyan; Tan, Wei; Qi, Wenwen
2018-03-01
The light collected by remote sensors in space must transit through the Earth's atmosphere. All satellite images are affected at some level by scattering and absorption from aerosols, water vapor and particulates in the atmosphere. For generating high-quality scientific data, atmospheric correction is required to remove atmospheric effects and to convert digital number (DN) values to surface reflectance (SR). Every optical satellite in orbit observes the Earth through the same atmosphere, but each satellite image is impacted differently because atmospheric conditions are constantly changing. A detailed physics-based radiative transfer model such as 6SV requires substantial ancillary information about the atmospheric conditions at acquisition time. This paper investigates the simultaneous retrieval of atmospheric radiation parameters from the multi-spectral information itself, in order to improve estimates of surface reflectance through physics-based atmospheric correction. Ancillary information on the aerosol optical depth (AOD) and total water vapor (TWV), derived from the multi-spectral information based on specific spectral properties, was used for the 6SV model. The experimentation was carried out on images from Sentinel-2, which carries a Multispectral Instrument (MSI) recording in 13 spectral bands covering a wide range of wavelengths from 440 up to 2200 nm. The results suggest that per-pixel atmospheric correction through the 6SV model, integrating AOD and TWV derived from the multispectral information, is better suited for accurate analysis of satellite images and quantitative remote sensing applications.
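For illustration only, the sketch below applies 6S/6SV-style per-band correction coefficients (xa, xb, xc) to at-sensor radiance; deriving those coefficients from the AOD and TWV retrieved from the multispectral bands is the substance of the paper and is not reproduced here.

```python
import numpy as np

def surface_reflectance(radiance, xa, xb, xc):
    """Apply 6S/6SV-style correction coefficients to at-sensor radiance.

    xa, xb, xc would come from a radiative transfer run driven by the
    retrieved AOD and TWV; with per-pixel retrievals they can be arrays
    broadcast over the image.
    """
    y = xa * radiance - xb
    return y / (1.0 + xc * y)
```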
NASA Astrophysics Data System (ADS)
Deán-Ben, X. L.; Bay, Erwin; Razansky, Daniel
2015-03-01
Three-dimensional hand-held optoacoustic imaging comes with important advantages that prompt the clinical translation of this modality, with applications envisioned in cardiovascular and peripheral vascular disease, disorders of the lymphatic system, breast cancer, arthritis or inflammation. Of particular importance is the multispectral acquisition of data by exciting the tissue at several wavelengths, which enables functional imaging applications. However, multispectral imaging of entire three-dimensional regions is significantly challenged by motion artefacts in concurrent acquisitions at different wavelengths. A method based on acquisition of volumetric datasets having a microsecond-level delay between pulses at different wavelengths is described in this work. This method can avoid image artefacts imposed by scanning velocities greater than 2 m/s; it thus not only facilitates imaging in the presence of respiratory, cardiac or other fast intrinsic movements in living tissues, but can also achieve artifact-free imaging under more significant motion, e.g. abrupt displacements during handheld-mode operation in a clinical environment.
Acousto-optic tunable filter chromatic aberration analysis and reduction with auto-focus system
NASA Astrophysics Data System (ADS)
Wang, Yaoli; Chen, Yuanyuan
2018-07-01
An acousto-optic tunable filter (AOTF) displays optical band broadening and sidelobes as a result of the coupling between the acoustic wave and optical waves of different wavelengths. These features were analysed by wave-vector phase matching between the optical and acoustic waves. A crossed-line test board was imaged by an AOTF multi-spectral imaging system, showing image blurring in the diffraction direction and image sharpness in the orthogonal direction, the blurring being produced by the greater bandwidth and sidelobes in the former direction. Applying the secondary-imaging principle and considering the wavelength-dependent refractive index, the focal length varies over the broad wavelength range. An automatic focusing method is therefore proposed for use in AOTF multi-spectral imaging systems. A new image-sharpness evaluation method, based on an improved Structural Similarity Index Measure (SSIM), is also proposed, tailored to the characteristics of the AOTF imaging system. Compared with the traditional gradient operator, the new evaluation function discriminates image quality equally well and thus enables automatic focusing for different multispectral images.
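The sketch below is a generic focus-sweep scorer, assuming that a sharp image loses more structure under blurring than a defocused one; it stands in for, but is not, the improved-SSIM evaluation function proposed in the paper.

```python
# Illustrative autofocus sketch: score each candidate focus position by how
# much the image differs from a blurred copy of itself, then pick the best.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import structural_similarity as ssim

def sharpness_score(img):
    img = img.astype(np.float64)
    blurred = gaussian_filter(img, sigma=2.0)
    return 1.0 - ssim(img, blurred, data_range=img.max() - img.min())

def autofocus(images_at_positions):
    """Return the index of the focus position with the highest sharpness."""
    scores = [sharpness_score(im) for im in images_at_positions]
    return int(np.argmax(scores))
```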
Núñez, Jorge I; Farmer, Jack D; Sellar, R Glenn; Swayze, Gregg A; Blaney, Diana L
2014-02-01
Future astrobiological missions to Mars are likely to emphasize the use of rovers with in situ petrologic capabilities for selecting the best samples at a site for in situ analysis with onboard lab instruments or for caching for potential return to Earth. Such observations are central to an understanding of the potential for past habitable conditions at a site and for identifying samples most likely to harbor fossil biosignatures. The Multispectral Microscopic Imager (MMI) provides multispectral reflectance images of geological samples at the microscale, where each image pixel is composed of a visible/shortwave infrared spectrum ranging from 0.46 to 1.73 μm. This spectral range enables the discrimination of a wide variety of rock-forming minerals, especially Fe-bearing phases, and the detection of hydrated minerals. The MMI advances beyond the capabilities of current microimagers on Mars by extending the spectral range into the infrared and increasing the number of spectral bands. The design employs multispectral light-emitting diodes and an uncooled indium gallium arsenide focal plane array to achieve a very low mass and high reliability. To better understand and demonstrate the capabilities of the MMI for future surface missions to Mars, we analyzed samples from Mars-relevant analog environments with the MMI. Results indicate that the MMI images faithfully resolve the fine-scale microtextural features of samples and provide important information to help constrain mineral composition. The use of spectral endmember mapping reveals the distribution of Fe-bearing minerals (including silicates and oxides) with high fidelity, along with the presence of hydrated minerals. MMI-based petrogenetic interpretations compare favorably with laboratory-based analyses, revealing the value of the MMI for future in situ rover-mediated astrobiological exploration of Mars. Key Words: Mars; Microscopic imager; Multispectral imaging; Spectroscopy; Habitability; Arm instrument.
NASA Astrophysics Data System (ADS)
Zhao, Shaoshuai; Ni, Chen; Cao, Jing; Li, Zhengqiang; Chen, Xingfeng; Ma, Yan; Yang, Leiku; Hou, Weizhen; Qie, Lili; Ge, Bangyu; Liu, Li; Xing, Jin
2018-03-01
Remote sensing images are usually contaminated by atmospheric components, especially aerosol particles. For quantitative remote sensing applications, radiative-transfer-model-based atmospheric correction is used to obtain reflectance by decoupling the atmosphere and surface, at the cost of long computation times. Parallel computing is one solution for accelerating this step. A parallel strategy in which multiple CPUs work simultaneously is designed to perform atmospheric correction of a multispectral remote sensing image. The flow of the parallel framework and the main parallel body of the atmospheric correction are described. A multispectral image from the Chinese Gaofen-2 satellite is then used to test the acceleration efficiency. As the number of CPUs increases from 1 to 8, the computational speed increases accordingly. The maximum speedup is 6.5. With 8 CPUs, atmospheric correction of the whole image takes 4 minutes.
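A minimal sketch of the tile-parallel strategy: split the scene into strips and let a pool of worker processes correct them concurrently. The per-pixel correction here is only a placeholder for the radiative-transfer-based routine, and the worker count is an assumption.

```python
import numpy as np
from multiprocessing import Pool

def correct_tile(tile):
    # Placeholder correction; a real run would call the radiative transfer
    # based atmospheric correction for this strip of the image.
    return tile.astype(np.float32) * 0.0001

def parallel_correction(image, n_workers=8):
    """Split the scene into row strips and correct them in parallel."""
    tiles = np.array_split(image, n_workers, axis=0)
    with Pool(n_workers) as pool:
        corrected = pool.map(correct_tile, tiles)
    return np.concatenate(corrected, axis=0)

if __name__ == "__main__":
    scene = np.random.randint(0, 4096, (4000, 4000), dtype=np.uint16)
    reflectance = parallel_correction(scene)
```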
A novel method to detect shadows on multispectral images
NASA Astrophysics Data System (ADS)
Dağlayan Sevim, Hazan; Yardımcı Çetin, Yasemin; Özışık Başkurt, Didem
2016-10-01
Shadowing occurs when the direct light from a light source is obstructed by tall man-made structures, mountains or clouds. Since shadow regions are illuminated only by scattered light, the true spectral properties of objects are not observed in such regions. Therefore, many object classification and change detection problems utilize shadow detection as a preprocessing step. In addition, shadows are useful for obtaining 3D information about objects, such as estimating the height of buildings. With the growing pervasiveness of remote sensing imagery, shadow detection is ever more important. This study aims to develop a shadow detection method for multispectral images based on the transformation of the C1C2C3 space and the contribution of NIR bands. The proposed method is tested on WorldView-2 images covering Ankara, Turkey at different times. The new index is used on these 8-band multispectral images with two NIR bands. The method is compared with methods in the literature.
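As background, the sketch below computes the standard C1C2C3 invariant colour transform and a naive shadow mask using an NIR band; the thresholds are illustrative assumptions and the paper's actual index is not reproduced.

```python
import numpy as np

def c1c2c3(red, green, blue):
    """Standard C1C2C3 invariant colour transform (per-pixel arctangents)."""
    eps = 1e-6
    c1 = np.arctan(red / (np.maximum(green, blue) + eps))
    c2 = np.arctan(green / (np.maximum(red, blue) + eps))
    c3 = np.arctan(blue / (np.maximum(red, green) + eps))
    return c1, c2, c3

def candidate_shadow_mask(red, green, blue, nir, c3_thresh=0.9, nir_thresh=0.2):
    """Flag pixels with high C3 and low NIR response (illustrative thresholds)."""
    _, _, c3 = c1c2c3(red.astype(float), green.astype(float), blue.astype(float))
    return (c3 > c3_thresh) & (nir < nir_thresh)
```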
Wetland Vegetation Integrity Assessment with Low Altitude Multispectral UAV Imagery
NASA Astrophysics Data System (ADS)
Boon, M. A.; Tesfamichael, S.
2017-08-01
The multispectral sensors available for Unmanned Aerial Vehicles (UAVs) were until recently too heavy and bulky, but this has changed and they are now commercially available. Their usage has mostly been directed towards the agricultural sector, where the focus is on precision farming. Applications of these sensors for mapping of wetland ecosystems are rare. Here, we evaluate the performance of low-altitude multispectral UAV imagery to determine the state of wetland vegetation in a localised spatial area. Specifically, NDVI derived from multispectral UAV imagery was used to inform the determination of the integrity of the wetland vegetation. Furthermore, we tested different software applications for the processing of the imagery. The advantages and disadvantages we experienced with these applications are also briefly presented in this paper. A JAG-M fixed-wing imaging system equipped with a MicaSense RedEdge multispectral camera was utilised for the survey. A single surveying campaign was undertaken in early autumn over a 17 ha study area at the Kameelzynkraal farm, Gauteng Province, South Africa. Structure-from-motion photogrammetry software was used to reconstruct the camera positions and terrain features to derive a high-resolution orthorectified mosaic. The MicaSense Atlas cloud-based data platform, Pix4D and PhotoScan were utilised for the processing. The WET-Health level one methodology was followed for the vegetation assessment, where wetland health is a measure of the deviation of a wetland's structure and function from its natural reference condition. An on-site evaluation of the vegetation integrity was first completed. Disturbance classes were then mapped using the high-resolution multispectral orthoimages and NDVI. The WET-Health vegetation module, completed with the aid of the multispectral UAV products, indicated that the vegetation of the wetland is largely modified ("D" PES Category) and that the condition is expected to deteriorate (change score) in the future. However, a lower impact score was determined utilising the multispectral UAV imagery and NDVI, resulting in a more accurate estimation of the impacts in the wetland.
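A minimal NDVI sketch of the kind used to inform the vegetation integrity scoring, assuming the red and near-infrared bands of the orthomosaic are already available as arrays.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + 1e-6)  # small epsilon avoids divide-by-zero
```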
A mixture neural net for multispectral imaging spectrometer processing
NASA Technical Reports Server (NTRS)
Casasent, David; Slagle, Timothy
1990-01-01
Each spatial region viewed by an imaging spectrometer contains various elements in a mixture. The elements present and the amount of each are to be determined. A neural net solution is considered. Initial optical neural net hardware is described. The first simulations on the component requirements of a neural net are considered. The pseudoinverse solution is shown to not suffice, i.e. a neural net solution is required.
A PC-based multispectral scanner data evaluation workstation: Application to Daedalus scanners
NASA Technical Reports Server (NTRS)
Jedlovec, Gary J.; James, Mark W.; Smith, Matthew R.; Atkinson, Robert J.
1991-01-01
In late 1989, a personal computer (PC)-based data evaluation workstation was developed to support post-flight processing of Multispectral Atmospheric Mapping Sensor (MAMS) data. The MAMS Quick View System (QVS) is an image analysis and display system designed to provide the capability to evaluate Daedalus scanner data immediately after an aircraft flight. Even in its original form, the QVS offered the portability of a personal computer with the advanced analysis and display features of a mainframe image analysis system. It was recognized, however, that the original QVS had its limitations, both in speed and in processing of MAMS data. Recent efforts are presented that focus on overcoming earlier limitations and adapting the system to a new data tape structure. In doing so, the enhanced Quick View System (QVS2) will accommodate data from any of the four spectrometers used with the Daedalus scanner on the NASA ER-2 platform. The QVS2 is designed around an AST 486/33 MHz CPU personal computer and comes with 10 EISA expansion slots, a keyboard, and 4.0 Mbytes of memory. Specialized PC-McIDAS software provides the main image analysis and display capability for the system. Image analysis and display of the digital scanner data is accomplished with PC-McIDAS software.
Multispectral embedding-based deep neural network for three-dimensional human pose recovery
NASA Astrophysics Data System (ADS)
Yu, Jialin; Sun, Jifeng
2018-01-01
Monocular image-based three-dimensional (3-D) human pose recovery aims to retrieve 3-D poses using the corresponding two-dimensional image features. Therefore, the pose recovery performance highly depends on the image representations. We propose a multispectral embedding-based deep neural network (MSEDNN) to automatically obtain the most discriminative features from multiple deep convolutional neural networks and then embed their penultimate fully connected layers into a low-dimensional manifold. This compact manifold can explore not only the optimum output from multiple deep networks but also their complementary properties. Furthermore, the distribution of each hierarchy's discriminative manifold is sufficiently smooth that the training process of our MSEDNN can be effectively implemented using only a small amount of labeled data. Our proposed network contains a body joint detector and a human pose regressor that are jointly trained. Extensive experiments conducted on four databases show that our proposed MSEDNN can achieve the best recovery performance compared with the state-of-the-art methods.
High-density Schottky barrier IRCCD sensors for remote sensing applications
NASA Astrophysics Data System (ADS)
Elabd, H.; Tower, J. R.; McCarthy, B. M.
1983-01-01
It is pointed out that the ambitious goals envisaged for the next generation of space-borne sensors challenge the state-of-the-art in solid-state imaging technology. Studies are being conducted with the aim to provide focal plane array technology suitable for use in future Multispectral Linear Array (MLA) earth resource instruments. An important new technology for IR-image sensors involves the use of monolithic Schottky barrier infrared charge-coupled device arrays. This technology is suitable for earth sensing applications in which moderate quantum efficiency and intermediate operating temperatures are required. This IR sensor can be fabricated by using standard integrated circuit (IC) processing techniques, and it is possible to employ commercial IC grade silicon. For this reason, it is feasible to construct Schottky barrier area and line arrays with large numbers of elements and high-density designs. A Pd2Si Schottky barrier sensor for multispectral imaging in the 1 to 3.5 micron band is under development.
NASA Astrophysics Data System (ADS)
Kannadorai, Ravi Kumar; Udumala, Sunil Kumar; Sidney, Yu Wing Kwong
2016-12-01
Noninvasive and nonradioactive imaging modalities to track and image apoptosis during chemotherapy of triple-negative breast cancer are much needed for an effective treatment plan. Phosphatidylserine (PS) is a biomarker transiently exposed on the outer surface of cells during apoptosis. Its externalization occurs within a few hours of an apoptotic stimulus by a chemotherapy drug and leads to presentation of millions of phospholipid molecules per apoptotic cell on the cell surface. This makes PS an abundant and accessible target for apoptosis imaging. In the current work, we show that a PS monoclonal antibody tagged with indocyanine green (ICG) can help to track and image apoptosis using multispectral optoacoustic tomography in vivo. When compared to the saline control, the doxorubicin-treated group showed a significant increase in uptake of the ICG-PS monoclonal antibody in triple-negative breast tumors xenografted in NCr nude female mice. Day 5 post-treatment had the highest optoacoustic signal in the tumor region, indicating maximum apoptosis; the tumor subsequently shrank. Since multispectral optoacoustic imaging does not involve the use of radioactivity, the longer circulatory time of the PS antibody can be exploited to monitor apoptosis over a period of time without multiple injections of commonly used imaging probes such as Tc-99m Annexin V or F-18 ML10. The proposed apoptosis imaging technique involving multispectral optoacoustic tomography, a monoclonal antibody, and a near-infrared absorbing fluorescent marker can be an effective tool for imaging apoptosis and treatment planning.
NASA Astrophysics Data System (ADS)
Xia, Wenfeng; Nikitichev, Daniil I.; Mari, Jean Martial; West, Simeon J.; Ourselin, Sebastien; Beard, Paul C.; Desjardins, Adrien E.
2015-07-01
Precise and efficient guidance of medical devices is of paramount importance for many minimally invasive procedures. These procedures include fetal interventions, tumor biopsies and treatments, central venous catheterisations and peripheral nerve blocks. Ultrasound imaging is commonly used for guidance, but it often provides insufficient contrast with which to identify soft tissue structures such as vessels, tumors, and nerves. In this study, a hybrid interventional imaging system that combines ultrasound imaging and multispectral photoacoustic imaging for guiding minimally invasive procedures was developed and characterized. The system provides both structural information from ultrasound imaging and molecular information from multispectral photoacoustic imaging. It uses a commercial linear-array ultrasound imaging probe as the ultrasound receiver, with a multimode optical fiber embedded in a needle to deliver pulsed excitation light to tissue. Co-registration of ultrasound and photoacoustic images is achieved with the use of the same ultrasound receiver for both modalities. Using tissue ex vivo, the system successfully discriminated deeply located fat tissue from the surrounding muscle tissue. The measured photoacoustic spectrum of the fat tissue had good agreement with the lipid spectrum in the literature.
Classification of human carcinoma cells using multispectral imagery
NASA Astrophysics Data System (ADS)
Çinar, Umut; Y. Çetin, Yasemin; Çetin-Atalay, Rengul; Çetin, Enis
2016-03-01
In this paper, we present a technique for automatically classifying human carcinoma cell images using textural features. An image dataset containing microscopy biopsy images from different patients for 14 distinct cancer cell line types is studied. The images are captured using an RGB camera attached to an inverted microscopy device. Texture-based Gabor features are extracted from the multispectral input images. An SVM classifier is used to generate a descriptive model for the purpose of cell line classification. The experimental results show satisfactory performance, and the proposed method is versatile for various microscopy magnification options.
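An illustrative sketch in the spirit of the described pipeline: Gabor filter responses pooled into per-image features and fed to an SVM. The filter bank, pooling statistics and classifier settings are assumptions rather than the authors' choices.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_features(gray, frequencies=(0.1, 0.2, 0.3),
                   thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Pool Gabor magnitude responses into a fixed-length feature vector."""
    feats = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(gray, frequency=f, theta=t)
            mag = np.hypot(real, imag)
            feats.extend([mag.mean(), mag.std()])
    return np.array(feats)

# Usage sketch (arrays and labels are assumed to exist):
# X = np.stack([gabor_features(img) for img in training_images])
# clf = SVC(kernel="rbf").fit(X, labels)
# prediction = clf.predict(gabor_features(test_image)[None, :])
```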
NASA Technical Reports Server (NTRS)
Shimabukuro, Yosio Edemir; Smith, James A.
1991-01-01
Constrained-least-squares and weighted-least-squares mixing models for generating fraction images derived from remote sensing multispectral data are presented. An experiment was performed considering three components within the pixels: eucalyptus, soil (understory), and shade. The generated fraction images for shade (shade image) derived from these two methods were compared by considering the performance and computer time. The derived shade images are related to the observed variation in forest structure, i.e., the fraction of inferred shade in the pixel is related to different eucalyptus ages.
Remote sensing. [land use mapping
NASA Technical Reports Server (NTRS)
Jinich, A.
1979-01-01
Various imaging techniques are outlined for use in mapping, land use, and land management in Mexico. Among the techniques discussed are pattern recognition and photographic processing. The utilization of information from remote sensing devices on satellites is studied. Multispectral band scanners are examined, and software, hardware, and other program requirements are surveyed.
NASA Technical Reports Server (NTRS)
Edgett, Kenneth S.; Anderson, Donald L.
1995-01-01
This paper describes an empirical method to correct TIMS (Thermal Infrared Multispectral Scanner) data for atmospheric effects by transferring calibration from a laboratory thermal emission spectrometer to the TIMS multispectral image. The method does so by comparing the laboratory spectra of samples gathered in the field with TIMS 6-point spectra for pixels at the location of field sampling sites. The transference of calibration also makes it possible to use spectra from the laboratory as endmembers in unmixing studies of TIMS data.
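A rough sketch of an empirical calibration transfer of this kind: fit a per-band linear relation between image spectra at the field-site pixels and the corresponding laboratory spectra, then apply it band by band to the scene. The linear fit is an assumption standing in for whatever regression the authors used.

```python
import numpy as np

def fit_calibration(image_spectra, lab_spectra):
    """image_spectra, lab_spectra: (n_sites, n_bands). Return per-band gain, offset."""
    gains, offsets = [], []
    for b in range(image_spectra.shape[1]):
        g, o = np.polyfit(image_spectra[:, b], lab_spectra[:, b], 1)
        gains.append(g)
        offsets.append(o)
    return np.array(gains), np.array(offsets)

def apply_calibration(cube, gains, offsets):
    """cube: (rows, cols, n_bands) image; broadcasts the per-band fit."""
    return cube * gains + offsets
```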
HERCULES/MSI: a multispectral imager with geolocation for STS-70
NASA Astrophysics Data System (ADS)
Simi, Christopher G.; Kindsfather, Randy; Pickard, Henry; Howard, William, III; Norton, Mark C.; Dixon, Roberta
1995-11-01
A multispectral intensified CCD imager combined with a ring laser gyroscope based inertial measurement unit was flown on the Space Shuttle Discovery from July 13-22, 1995 (Space Transport System Flight No. 70, STS-70). The camera includes a six position filter wheel, a third generation image intensifier, and a CCD camera. The camera is integrated with a laser gyroscope system that determines the ground position of the imagery to an accuracy of better than three nautical miles. The camera has two modes of operation; a panchromatic mode for high-magnification imaging [ground sample distance (GSD) of 4 m], or a multispectral mode consisting of six different user-selectable spectral ranges at reduced magnification (12 m GSD). This paper discusses the system hardware and technical trade-offs involved with camera optimization, and presents imagery observed during the shuttle mission.
Bandwidth compression of multispectral satellite imagery
NASA Technical Reports Server (NTRS)
Habibi, A.
1978-01-01
The results of two studies aimed at developing efficient adaptive and nonadaptive techniques for compressing the bandwidth of multispectral images are summarized. These techniques are evaluated and compared using various optimality criteria including MSE, SNR, and recognition accuracy of the bandwidth compressed images. As an example of future requirements, the bandwidth requirements for the proposed Landsat-D Thematic Mapper are considered.
Demosaicking for full motion video 9-band SWIR sensor
NASA Astrophysics Data System (ADS)
Kanaev, Andrey V.; Rawhouser, Marjorie; Kutteruf, Mary R.; Yetzbacher, Michael K.; DePrenger, Michael J.; Novak, Kyle M.; Miller, Corey A.; Miller, Christopher W.
2014-05-01
Short wave infrared (SWIR) spectral imaging systems are vital for Intelligence, Surveillance, and Reconnaissance (ISR) applications because of their abilities to autonomously detect targets and classify materials. Typically, spectral imagers are incapable of providing Full Motion Video (FMV) because of their reliance on line scanning. We enable FMV capability for a SWIR multi-spectral camera by creating a repeating pattern of 3x3 spectral filters on a staring focal plane array (FPA). In this paper we present the imagery from an FMV SWIR camera with nine discrete bands and discuss image processing algorithms necessary for its operation. The main task of image processing in this case is demosaicking of the spectral bands, i.e. reconstructing full spectral images at the original FPA resolution from the spatially subsampled and incomplete spectral data acquired with the chosen filter array pattern. To the best of the authors' knowledge, demosaicking algorithms for nine or more equally sampled bands have not been reported before. Moreover, all existing algorithms developed for demosaicking visible color filter arrays with fewer than nine colors assume either certain relationships between the visible colors, which are not valid for SWIR imaging, or the presence of one color band with a higher sampling rate than the rest of the bands, which does not conform to our spectral filter pattern. We will discuss and present results for two novel approaches to demosaicking: interpolation using multi-band edge information, and application of multi-frame super-resolution to single-frame resolution enhancement of multi-spectral spatially multiplexed images.
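As a baseline for comparison, the sketch below demosaicks a repeating 3x3 spectral filter pattern by simple per-band interpolation of the sparse samples; the edge-guided and super-resolution approaches described in the paper are more sophisticated than this, and the band-to-position mapping is assumed.

```python
import numpy as np
from scipy.interpolate import griddata

def demosaic_3x3(raw):
    """raw: (rows, cols) mosaicked frame. Return a (rows, cols, 9) cube."""
    rows, cols = raw.shape
    yy, xx = np.mgrid[0:rows, 0:cols]
    cube = np.empty((rows, cols, 9), dtype=np.float64)
    for band in range(9):
        r0, c0 = divmod(band, 3)                    # assumed position of this band in the 3x3 cell
        mask = ((yy % 3) == r0) & ((xx % 3) == c0)  # pixels that actually sampled this band
        pts = np.column_stack([yy[mask], xx[mask]])
        vals = raw[mask].astype(np.float64)
        cube[:, :, band] = griddata(pts, vals, (yy, xx), method="nearest")
    return cube
```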
Kahle, A.B.; Rowan, L.C.
1980-01-01
Six channels of multispectral middle infrared (8 to 14 micrometres) aircraft scanner data were acquired over the East Tintic mining district, Utah. The digital image data were computer processed to create a color-composite image based on principal component transformations. When combined with a visible and near infrared color-composite image from a previous flight, with limited field checking, it is possible to discriminate quartzite, carbonate rocks, quartz latitic and quartz monzonitic rocks, latitic and monzonitic rocks, silicified altered rocks, argillized altered rocks, and vegetation. -from Authors
Deep multi-spectral ensemble learning for electronic cleansing in dual-energy CT colonography
NASA Astrophysics Data System (ADS)
Tachibana, Rie; Näppi, Janne J.; Hironaka, Toru; Kim, Se Hyung; Yoshida, Hiroyuki
2017-03-01
We developed a novel electronic cleansing (EC) method for dual-energy CT colonography (DE-CTC) based on an ensemble deep convolution neural network (DCNN) and multi-spectral multi-slice image patches. In the method, an ensemble DCNN is used to classify each voxel of a DE-CTC image volume into five classes: luminal air, soft tissue, tagged fecal materials, and partial-volume boundaries between air and tagging and those between soft tissue and tagging. Each DCNN acts as a voxel classifier, where an input image patch centered at the voxel is generated as input to the DCNNs. An image patch has three channels that are mapped from a region-of-interest containing the image plane of the voxel and the two adjacent image planes. Six different types of spectral input image datasets were derived using two dual-energy CT images, two virtual monochromatic images, and two material images. An ensemble DCNN was constructed by use of a meta-classifier that combines the output of multiple DCNNs, each of which was trained with a different type of multi-spectral image patches. The electronically cleansed CTC images were calculated by removal of regions classified as other than soft tissue, followed by a colon surface reconstruction. For pilot evaluation, 359 volumes of interest (VOIs) representing sources of subtraction artifacts observed in current EC schemes were sampled from 30 clinical CTC cases. Preliminary results showed that the ensemble DCNN can yield high accuracy in labeling of the VOIs, indicating that deep learning of multi-spectral EC with multi-slice imaging could accurately remove residual fecal materials from CTC images without generating major EC artifacts.
Online quantitative analysis of multispectral images of human body tissues
NASA Astrophysics Data System (ADS)
Lisenko, S. A.
2013-08-01
A method is developed for online monitoring of structural and morphological parameters of biological tissues (haemoglobin concentration, degree of blood oxygenation, average diameter of capillaries and the parameter characterising the average size of tissue scatterers), which involves multispectral tissue imaging, image normalisation to one of its spectral layers and determination of unknown parameters based on their stable regression relation with the spectral characteristics of the normalised image. Regression is obtained by simulating numerically the diffuse reflectance spectrum of the tissue by the Monte Carlo method at a wide variation of model parameters. The correctness of the model calculations is confirmed by the good agreement with the experimental data. The error of the method is estimated under conditions of general variability of structural and morphological parameters of the tissue. The method developed is compared with the traditional methods of interpretation of multispectral images of biological tissues, based on the solution of the inverse problem for each pixel of the image in the approximation of different analytical models.
Enhancement of multispectral thermal infrared images - Decorrelation contrast stretching
NASA Technical Reports Server (NTRS)
Gillespie, Alan R.
1992-01-01
Decorrelation contrast stretching is an effective method for displaying information from multispectral thermal infrared (TIR) images. The technique involves transformation of the data to principal components ('decorrelation'), independent contrast 'stretching' of data from the new 'decorrelated' image bands, and retransformation of the stretched data back to the approximate original axes, based on the inverse of the principal component rotation. The enhancement is robust in that colors of the same scene components are similar in enhanced images of similar scenes, or the same scene imaged at different times. Decorrelation contrast stretching is reviewed in the context of other enhancements applied to TIR images.
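A compact sketch of the decorrelation stretch as described: rotate the bands to principal components, equalize the component variances, and rotate back; the target standard deviation is an arbitrary display choice.

```python
import numpy as np

def decorrelation_stretch(cube, target_sigma=50.0):
    """cube: (rows, cols, bands). Return a stretched cube of the same shape."""
    rows, cols, bands = cube.shape
    flat = cube.reshape(-1, bands).astype(np.float64)
    mean = flat.mean(axis=0)
    cov = np.cov(flat - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Forward rotation, per-component gain to equalize variance, inverse rotation
    gains = target_sigma / np.sqrt(np.maximum(eigvals, 1e-12))
    transform = eigvecs @ np.diag(gains) @ eigvecs.T
    stretched = (flat - mean) @ transform + mean
    return stretched.reshape(rows, cols, bands)
```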
NASA Astrophysics Data System (ADS)
Nishidate, Izumi; Ooe, Shintaro; Todoroki, Shinsuke; Asamizu, Erika
2013-05-01
To evaluate the functional pigments in tomato fruits nondestructively, we propose a method based on multispectral diffuse reflectance images estimated by Wiener estimation from a digital RGB image. Each pixel of the multispectral image is converted to an absorbance spectrum and then analyzed by multiple regression analysis to visualize the contents of chlorophyll a, lycopene and β-carotene. The result confirms the feasibility of the method for in situ imaging of chlorophyll a, β-carotene and lycopene in tomato fruits.
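A minimal sketch of Wiener estimation of multispectral reflectance from an RGB image, assuming a training set of paired RGB responses and measured spectra; the subsequent absorbance conversion and multiple regression steps are not shown.

```python
import numpy as np

def wiener_matrix(train_rgb, train_spectra):
    """train_rgb: (n, 3); train_spectra: (n, n_bands). Return (n_bands, 3) W."""
    Ksr = train_spectra.T @ train_rgb   # spectra-RGB cross-correlation
    Krr = train_rgb.T @ train_rgb       # RGB autocorrelation
    return Ksr @ np.linalg.inv(Krr)

def estimate_spectra(rgb_image, W):
    """rgb_image: (rows, cols, 3). Return a (rows, cols, n_bands) estimate."""
    flat = rgb_image.reshape(-1, 3).astype(np.float64)
    return (flat @ W.T).reshape(rgb_image.shape[0], rgb_image.shape[1], -1)
```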
Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.
2010-01-01
Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475
Multispectral open-air intraoperative fluorescence imaging.
Behrooz, Ali; Waterman, Peter; Vasquez, Kristine O; Meganck, Jeff; Peterson, Jeffrey D; Faqir, Ilias; Kempner, Joshua
2017-08-01
Intraoperative fluorescence imaging informs decisions regarding surgical margins by detecting and localizing signals from fluorescent reporters, labeling targets such as malignant tissues. This guidance reduces the likelihood of undetected malignant tissue remaining after resection, eliminating the need for additional treatment or surgery. The primary challenges in performing open-air intraoperative fluorescence imaging come from the weak intensity of the fluorescence signal in the presence of strong surgical and ambient illumination, and the auto-fluorescence of non-target components, such as tissue, especially in the visible spectral window (400-650 nm). In this work, a multispectral open-air fluorescence imaging system is presented for translational image-guided intraoperative applications, which overcomes these challenges. The system is capable of imaging weak fluorescence signals with nanomolar sensitivity in the presence of surgical illumination. This is done using synchronized fluorescence excitation and image acquisition with real-time background subtraction. Additionally, the system uses a liquid crystal tunable filter for acquisition of multispectral images that are used to spectrally unmix target fluorescence from non-target auto-fluorescence. Results are validated by preclinical studies on murine models and translational canine oncology models.
Development of a multispectral imagery device devoted to weed detection
NASA Astrophysics Data System (ADS)
Vioix, Jean-Baptiste; Douzals, Jean-Paul; Truchetet, Frederic; Navar, Pierre
2003-04-01
Multispectral imagery is a large domain with a number of practical applications: thermography, quality control in industry, food science and agronomy, etc. The main interest is to obtain spectral information about objects, for which the reflectance signal can be associated with physical, chemical and/or biological properties. Agronomic applications of multispectral imagery generally involve the acquisition of several images in the wavelengths of the visible and near infrared. This paper first presents different kinds of multispectral devices used for agronomic issues and secondly introduces an original multispectral design based on a single CCD. Third, early results obtained for weed detection are presented.
NASA Astrophysics Data System (ADS)
Behrooz, Ali; Vasquez, Kristine O.; Waterman, Peter; Meganck, Jeff; Peterson, Jeffrey D.; Miller, Peter; Kempner, Joshua
2017-02-01
Intraoperative resection of tumors currently relies upon the surgeon's ability to visually locate and palpate tumor nodules. Undetected residual malignant tissue often results in the need for additional treatment or surgical intervention. The Solaris platform is a multispectral open-air fluorescence imaging system designed for translational fluorescence-guided surgery. Solaris supports video-rate imaging in four fixed fluorescence channels ranging from visible to near infrared, and a multispectral channel equipped with a liquid crystal tunable filter (LCTF) for multispectral image acquisition (520-620 nm). Identification of tumor margins using reagents emitting in the visible spectrum (400-650 nm), such as fluorescein isothiocyanate (FITC), presents challenges considering the presence of auto-fluorescence from tissue and food in the gastrointestinal (GI) tract. To overcome this, Solaris acquires LCTF-based multispectral images and, by applying an automated spectral unmixing algorithm to the data, separates reagent fluorescence from tissue and food auto-fluorescence. The unmixing algorithm uses vertex component analysis to automatically extract the primary pure spectra, and resolves the reagent fluorescent signal using non-negative least squares. For validation, intraoperative in vivo studies were carried out in tumor-bearing rodents injected with a FITC-dextran reagent that primarily resides in malignant tissue 24 hours post injection. In the absence of unmixing, fluorescence from tumors is not distinguishable from that of surrounding tissue. Upon spectral unmixing, the FITC-labeled malignant regions become well defined and detectable. The results of these studies substantiate the multispectral power of Solaris in resolving FITC-based agent signal in deep tumor masses, under ambient and surgical light, and in enhancing the ability to surgically resect them.
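A sketch of the per-pixel non-negative least-squares unmixing step, assuming the endmember spectra (for example from a VCA-style extraction) are already available; it is illustrative rather than the Solaris implementation.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(cube, endmembers):
    """cube: (rows, cols, n_wavelengths); endmembers: (n_wavelengths, n_sources).
    Return abundance maps of shape (rows, cols, n_sources)."""
    rows, cols, nw = cube.shape
    flat = cube.reshape(-1, nw)
    abund = np.empty((flat.shape[0], endmembers.shape[1]))
    for i, spectrum in enumerate(flat):
        abund[i], _ = nnls(endmembers, spectrum)   # non-negative abundances per pixel
    return abund.reshape(rows, cols, -1)

# The reagent abundance plane can then be displayed free of tissue and food
# auto-fluorescence.
```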
Globally scalable generation of high-resolution land cover from multispectral imagery
NASA Astrophysics Data System (ADS)
Stutts, S. Craig; Raskob, Benjamin L.; Wenger, Eric J.
2017-05-01
We present an automated method of generating high-resolution (2 meter) land cover using a pattern recognition neural network trained on spatial and spectral features obtained from over 9000 WorldView multispectral images (MSI) in six distinct world regions. At this resolution, the network can classify small-scale objects such as individual buildings, roads, and irrigation ponds. This paper focuses on three key areas. First, we describe our land cover generation process, which involves the co-registration and aggregation of multiple spatially overlapping MSI, post-aggregation processing, and the registration of land cover to OpenStreetMap (OSM) road vectors using feature correspondence. Second, we discuss the generation of land cover derivative products and their impact in the areas of region reduction and object detection. Finally, we discuss the process of globally scaling land cover generation using cloud computing via Amazon Web Services (AWS).
Simulation of the hyperspectral data from multispectral data using Python programming language
NASA Astrophysics Data System (ADS)
Tiwari, Varun; Kumar, Vinay; Pandey, Kamal; Ranade, Rigved; Agarwal, Shefali
2016-04-01
Multispectral remote sensing (MRS) sensors have proved their potential for acquiring and retrieving information on Land Use Land Cover (LULC) features over the past few decades. These MRS sensors generally acquire data in a limited number of broad spectral bands, typically 3 to 10. The limited number of bands and the broad spectral bandwidth of MRS sensors become a limitation in detailed LULC studies, as they cannot distinguish spectrally similar LULC features. In contrast, the detailed information available in hyperspectral (HRS) data is spectrally over-determined and able to distinguish spectrally similar materials on the Earth's surface. At present, however, the availability of HRS sensors is limited, because of the requirement for sensitive detectors and large storage capacity, which makes acquisition and processing cumbersome and expensive. There is therefore a need to utilize the available MRS data for detailed LULC studies. The spectral reconstruction approach is one technique used for simulating hyperspectral data from available multispectral data. In the present study, the spectral reconstruction approach is used to simulate hyperspectral data from EO-1 ALI multispectral data. The technique is implemented in the Python programming language, which is open source and has support for advanced image processing libraries and utilities. Overall, 70 bands have been simulated and validated using visual interpretation, statistical and classification approaches.
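A deliberately simple Python stand-in for the spectral reconstruction idea: resample each pixel's multispectral values onto a denser wavelength grid. The EO-1 ALI band centres below are approximate, the output grid is illustrative, and the reconstruction actually used in the study is more elaborate than plain interpolation.

```python
import numpy as np

ALI_CENTRES = np.array([443, 482, 565, 660, 790, 868, 1250, 1650, 2215])  # nm, approximate
HYPER_GRID = np.arange(440, 2220, 10)                                     # illustrative narrow bands

def simulate_hyperspectral(ms_cube):
    """ms_cube: (rows, cols, 9) ALI-like reflectance. Return an interpolated cube."""
    rows, cols, _ = ms_cube.shape
    flat = ms_cube.reshape(-1, ms_cube.shape[2])
    out = np.stack([np.interp(HYPER_GRID, ALI_CENTRES, px) for px in flat])
    return out.reshape(rows, cols, HYPER_GRID.size)
```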
Bautista, Pinky A; Yagi, Yukako
2012-05-01
Hematoxylin and eosin (H&E) stain is currently the most popular stain for routine histopathology. Special and/or immuno-histochemical (IHC) staining is often requested to further corroborate the initial diagnosis made on H&E stained tissue sections. Digital simulation of staining (or digital staining) can be a very valuable tool to produce the desired stained images from H&E stained tissue sections instantaneously. We present an approach to digital staining of histopathology multispectral images by combining the effects of spectral enhancement and spectral transformation. Spectral enhancement is accomplished by shifting the N-band original spectrum of the multispectral pixel by the weighted difference between the pixel's original and estimated spectrum; the spectrum is estimated using M < N principal component (PC) vectors. The pixel's enhanced spectrum is transformed to the spectral configuration associated with its reaction to a specific stain by utilizing an N × N transformation matrix, which is derived through application of the least mean squares method to the enhanced and target spectral transmittance samples of the different tissue components found in the image. Results of our experiments on the digital conversion of an H&E stained multispectral image to its Masson's trichrome stained equivalent show the viability of the method.
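A hedged numpy sketch of the two steps just described: PCA-residual spectral enhancement followed by a least-squares stain transformation. The weight, number of principal components, and training spectra are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def enhance_spectrum(s, pc_vectors, mean_spectrum, w):
    """Shift an N-band spectrum s by the weighted residual of its M-component
    PCA estimate (M < N). pc_vectors: (N, M), mean_spectrum: (N,)."""
    coeffs = pc_vectors.T @ (s - mean_spectrum)      # project onto the M PCs
    s_hat = mean_spectrum + pc_vectors @ coeffs      # PCA-estimated spectrum
    return s + w * (s - s_hat)                       # spectrally enhanced pixel

def fit_stain_transform(enhanced_samples, target_samples):
    """Least-squares N x N matrix mapping enhanced H&E spectra to the spectra of
    the same tissue components under the target stain (e.g. Masson's trichrome).
    Both inputs are (n_samples, N) transmittance matrices; apply as enhanced @ T."""
    T, *_ = np.linalg.lstsq(enhanced_samples, target_samples, rcond=None)
    return T
```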
Kainerstorfer, Jana M.; Polizzotto, Mark N.; Uldrick, Thomas S.; Rahman, Rafa; Hassan, Moinuddin; Najafizadeh, Laleh; Ardeshirpour, Yasaman; Wyvill, Kathleen M.; Aleman, Karen; Smith, Paul D.; Yarchoan, Robert; Gandjbakhche, Amir H.
2013-01-01
Diffuse multi-spectral imaging has been evaluated as a potential non-invasive marker of tumor response. Multi-spectral images of Kaposi sarcoma skin lesions were taken over the course of treatment, and blood volume and oxygenation concentration maps were obtained through principal component analysis (PCA) of the data. These images were compared with clinical and pathological responses determined by conventional means. We demonstrate that cutaneous lesions have increased blood volume concentration and that changes in this parameter are a reliable indicator of treatment efficacy, differentiating responders and non-responders. Blood volume decreased by at least 20% in all lesions that responded by clinical criteria and increased in the two lesions that did not respond clinically. Responses as assessed by multi-spectral imaging also generally correlated with overall patient clinical response assessment, were often detectable earlier in the course of therapy, and are less subject to observer variability than conventional clinical assessment. Tissue oxygenation was more variable, with lesions often showing decreased oxygenation in the center surrounded by a zone of increased oxygenation. This technique could potentially be a clinically useful supplement to existing response assessment in KS, providing an early, quantitative, and non-invasive marker of treatment effect. PMID:24386302
NASA Technical Reports Server (NTRS)
Polcyn, F. C.; Thomson, F. J.; Porcello, L. J.; Sattinger, I. J.; Malila, W. A.; Wezernak, C. T.; Horvath, R.; Vincent, R. K. (Principal Investigator); Bryan, M. L.
1972-01-01
There are no author-identified significant results in this report. Remotely sensed multispectral scanner and return beam vidicon imagery from ERTS-1 is being used for: (1) water depth measurements in the Virgin Islands and Upper Lake Michigan areas; (2) mapping of the Yellowstone National Park; (3) assessment of atmospheric effects in Colorado; (4) lake ice surveillance in Canada and Great Lakes areas; (5) recreational land use in Southeast Michigan; (6) International Field Year on the Great Lakes investigations of Lake Ontario; (7) image enhancement of multispectral scanner data using existing techniques; (8) water quality monitoring of the New York Bight, Tampa Bay, Lake Michigan, Santa Barbara Channel, and Lake Erie; (9) oil pollution detection in the Chesapeake Bay, Gulf of Mexico southwest of New Orleans, and Santa Barbara Channel; and (10) mapping iron compounds in the Wind River Mountains.
Multispectral Photogrammetric Data Acquisition and Processing for Wall Paintings Studies
NASA Astrophysics Data System (ADS)
Pamart, A.; Guillon, O.; Faraci, S.; Gattet, E.; Genevois, M.; Vallet, J. M.; De Luca, L.
2017-02-01
In the field of wall paintings studies, different imaging techniques are commonly used for documentation and for decision making in terms of conservation and restoration. There are nowadays challenging issues in merging scientific imaging techniques in a multimodal context (i.e. multi-sensor, multi-dimensional, multi-spectral and multi-temporal approaches). For decades these CH objects have been widely documented with Technical Photography (TP), which provides valuable information for understanding or retrieving the painting layouts and history. More recently there is an increasing demand for the use of digital photogrammetry in order to provide, as one of the possible outputs, an orthophotomosaic, which enables metrical quantification of conservators'/restorers' observations and action planning. This paper presents some ongoing experimentations of the LabCom MAP-CICRP relying on the assumption that those techniques can be merged through a common pipeline to share their respective benefits and create a more complete documentation.
Holkenbrink, Patrick F.
1978-01-01
Landsat data are received by National Aeronautics and Space Administration (NASA) tracking stations and converted into digital form on high-density tapes (HDTs) by the Image Processing Facility (IPF) at the Goddard Space Flight Center (GSFC), Greenbelt, Maryland. The HDTs are shipped to the EROS Data Center (EDC) where they are converted into customer products by the EROS Data Center digital image processing system (EDIPS). This document describes in detail one of these products: the computer-compatible tape (CCT) produced from Landsat-1, -2, and -3 multispectral scanner (MSS) data and Landsat-3 only return-beam vidicon (RBV) data. Landsat-1 and -2 RBV data will not be processed by IPF/EDIPS to CCT format.
NASA Astrophysics Data System (ADS)
Taruttis, Adrian; Herzog, Eva; Razansky, Daniel; Ntziachristos, Vasilis
2011-03-01
Multispectral Optoacoustic Tomography (MSOT) is an emerging technique for high resolution macroscopic imaging with optical and molecular contrast. We present cardiovascular imaging results from a multi-element real-time MSOT system recently developed for studies on small animals. Anatomical features relevant to cardiovascular disease, such as the carotid arteries, the aorta and the heart, are imaged in mice. The system's fast acquisition time, in tens of microseconds, allows images free of motion artifacts from heartbeat and respiration. Additionally, we present in-vivo detection of optical imaging agents, gold nanorods, at high spatial and temporal resolution, paving the way for molecular imaging applications.
Airborne multispectral detection of regrowth cotton fields
NASA Astrophysics Data System (ADS)
Westbrook, John K.; Suh, Charles P.-C.; Yang, Chenghai; Lan, Yubin; Eyster, Ritchie S.
2015-01-01
Effective methods are needed for timely areawide detection of regrowth cotton plants because boll weevils (a quarantine pest) can feed and reproduce on these plants beyond the cotton production season. Airborne multispectral images of regrowth cotton plots were acquired on several dates after three shredding (i.e., stalk destruction) dates. Linear spectral unmixing (LSU) classification was applied to high-resolution airborne multispectral images of regrowth cotton plots to estimate the minimum detectable size and subsequent growth of plants. We found that regrowth cotton fields can be identified when the mean plant width is ˜0.2 m for an image resolution of 0.1 m. LSU estimates of canopy cover of regrowth cotton plots correlated well (r2=0.81) with the ratio of mean plant width to row spacing, a surrogate measure of plant canopy cover. The height and width of regrowth plants were both well correlated (r2=0.94) with accumulated degree-days after shredding. The results will help boll weevil eradication program managers use airborne multispectral images to detect and monitor the regrowth of cotton plants after stalk destruction, and identify fields that may require further inspection and mitigation of boll weevil infestations.
Perceptual evaluation of color transformed multispectral imagery
NASA Astrophysics Data System (ADS)
Toet, Alexander; de Jong, Michael J.; Hogervorst, Maarten A.; Hooge, Ignace T. C.
2014-04-01
Color remapping can give multispectral imagery a realistic appearance. We assessed the practical value of this technique in two observer experiments using monochrome intensified (II) and long-wave infrared (IR) imagery, and color daylight (REF) and fused multispectral (CF) imagery. First, we investigated the amount of detail observers perceive in a short timespan. REF and CF imagery yielded the highest precision and recall measures, while II and IR imagery yielded significantly lower values. This suggests that observers have more difficulty in extracting information from monochrome than from color imagery. Next, we measured eye fixations during free image exploration. Although the overall fixation behavior was similar across image modalities, the order in which certain details were fixated varied. Persons and vehicles were typically fixated first in REF, CF, and IR imagery, while they were fixated later in II imagery. In some cases, color remapping II imagery and fusion with IR imagery restored the fixation order of these image details. We conclude that color remapping can yield enhanced scene perception compared to conventional monochrome nighttime imagery, and may be deployed to tune multispectral image representations such that the resulting fixation behavior resembles the fixation behavior corresponding to daylight color imagery.
High performance multi-spectral interrogation for surface plasmon resonance imaging sensors.
Sereda, A; Moreau, J; Canva, M; Maillart, E
2014-04-15
Surface plasmon resonance (SPR) sensing has proven to be a valuable tool in the field of surface interaction characterization, especially for biomedical applications where label-free techniques are of particular interest. In order to approach the theoretical resolution limit, most SPR-based systems have turned to either angular or spectral interrogation modes, which both offer very accurate real-time measurements, but at the expense of the 2-dimensional imaging capability, therefore decreasing the data throughput. In this article, we show numerically and experimentally how to combine the multi-spectral interrogation technique with 2D imaging, while finding an optimum in terms of resolution, accuracy, acquisition speed, and reduction in data dispersion with respect to the classical reflectivity interrogation mode. This multi-spectral interrogation methodology is based on a robust five-parameter fitting of the spectral reflectivity curve, which enables monitoring of the reflectivity spectral shift with a resolution of the order of ten picometers using only five wavelength measurements per point. Finally, such a multi-spectral plasmonic imaging system allows biomolecular interaction monitoring in a linear regime independently of variations of the buffer optical index, which is illustrated on a DNA-DNA model case. © 2013 Elsevier B.V. All rights reserved.
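The exact five-parameter model is not specified here, so the sketch below assumes an illustrative form (a linear background plus a Lorentzian dip) merely to show how a spectral-shift readout via curve fitting could look.

```python
import numpy as np
from scipy.optimize import curve_fit

def reflectivity_model(wl, offset, slope, depth, wl0, width):
    # Linear background minus a Lorentzian dip centred on the resonance (assumed form).
    return offset + slope * wl - depth / (1.0 + ((wl - wl0) / width) ** 2)

def resonance_wavelength(wavelengths, reflectivity, guess):
    """Fit the five parameters to the sampled reflectivity spectrum and return the
    fitted centre wavelength; its shift over time reports binding events."""
    popt, _ = curve_fit(reflectivity_model, wavelengths, reflectivity, p0=guess)
    return popt[3]
```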
NASA Astrophysics Data System (ADS)
Volkov, Boris; Mathews, Marlon S.; Abookasis, David
2015-03-01
Multispectral imaging has received significant attention over the last decade as it integrates spectroscopy, imaging, and tomographic analysis to acquire both spatial and spectral information from biological tissue. In the present study, a multispectral setup based on projection of structured illumination at several near-infrared wavelengths and at different spatial frequencies is applied to quantitatively assess brain function before, during, and after the onset of traumatic brain injury in an intact mouse brain (n=5). Head injury was produced with the weight-drop method, in which a cylindrical metallic rod falling along a metal tube strikes the mouse's head. Structured light was projected onto the scalp surface and diffusely reflected light was recorded by a CCD camera positioned perpendicular to the mouse head. Following data analysis, we were able to concurrently show a series of hemodynamic and morphologic changes over time, including higher deoxyhemoglobin, reduction in oxygen saturation, and cell swelling, in comparison with baseline measurements. Overall, the results demonstrate the capability of multispectral imaging based on structured illumination to detect and map brain tissue optical and physiological properties following brain injury in a simple, noninvasive, and noncontact manner.
The Multispectral Imaging Science Working Group. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
Cox, S. C. (Editor)
1982-01-01
Results of the deliberations of the six multispectral imaging science working groups (Botany, Geography, Geology, Hydrology, Imaging Science and Information Science) are summarized. Consideration was given to documenting the current state of knowledge in terrestrial remote sensing without the constraints of preconceived concepts such as possible band widths, number of bands, and radiometric or spatial resolutions of present or future systems. The findings of each working group included a discussion of desired capabilities and critical developmental issues.
Retinal oxygen saturation evaluation by multi-spectral fundus imaging
NASA Astrophysics Data System (ADS)
Khoobehi, Bahram; Ning, Jinfeng; Puissegur, Elise; Bordeaux, Kimberly; Balasubramanian, Madhusudhanan; Beach, James
2007-03-01
Purpose: To develop a multi-spectral method to measure oxygen saturation of the retina in the human eye. Methods: Five Cynomolgus monkeys with normal eyes were anesthetized with intramuscular ketamine/xylazine and intravenous pentobarbital. Multi-spectral fundus imaging was performed in the five monkeys with a commercial fundus camera equipped with a liquid crystal tuned filter in the illumination light path and a 16-bit digital camera. Recording parameters were controlled with software written specifically for the application. Seven images at successively longer oxygen-sensing wavelengths were recorded within 4 seconds. Individual images for each wavelength were captured in less than 100 msec of flash illumination. Images at separate wavelengths that were slightly misaligned due to eye motion were corrected by translational and rotational image registration prior to analysis. Numerical values of relative oxygen saturation of retinal arteries and veins and of the underlying tissue between the artery/vein pairs were evaluated by an algorithm previously described, which is now corrected for blood volume from averaged pixels (n > 1000). Color saturation maps were constructed by applying the algorithm at each image pixel using a Matlab script. Results: Both the numerical values of relative oxygen saturation and the saturation maps correspond to the physiological condition, that is, in a normal retina the artery is more saturated than the tissue and the tissue is more saturated than the vein. With the multi-spectral fundus camera and proper registration of the multi-wavelength images, we were able to determine oxygen saturation in primate retinal structures on a tolerable time scale that is applicable to human subjects. Conclusions: Seven-wavelength multi-spectral imagery can be used to measure oxygen saturation in retinal arteries, veins, and tissue (microcirculation). This technique is safe and can be used to monitor oxygen uptake in humans.
Snow Cover Mapping and Ice Avalanche Monitoring from the Satellite Data of the Sentinels
NASA Astrophysics Data System (ADS)
Wang, S.; Yang, B.; Zhou, Y.; Wang, F.; Zhang, R.; Zhao, Q.
2018-04-01
In order to monitor ice avalanches efficiently under disaster emergency conditions, a snow cover mapping method based on the satellite data of the Sentinels is proposed, in which the coherence and backscattering coefficient images of Synthetic Aperture Radar (SAR) data (Sentinel-1) are combined with the atmospheric correction result of multispectral data (Sentinel-2). The coherence image of the Sentinel-1 data can be segmented with a threshold to map snow cover, with water bodies extracted from the backscattering coefficient image and removed from the coherence segmentation result. A snow confidence map from Sentinel-2 was also used to map snow cover in areas where the confidence values were relatively high. The method makes full use of the acquired SAR and multispectral images under emergency conditions and exploits the application potential of Sentinel data in the field of snow cover mapping. The monitoring frequency can be ensured because areas obscured by thick clouds are recovered in the monitoring results. The Kappa coefficient of the monitoring results is 0.946, and the data processing time is less than 2 h, which meets the requirements of disaster emergency monitoring.
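A minimal sketch of the segmentation logic, assuming co-registered Sentinel-1 coherence and backscatter arrays and a Sentinel-2 snow confidence map; the threshold values and the direction of each comparison are illustrative assumptions.

```python
import numpy as np

def map_snow(coherence, backscatter_db, snow_confidence,
             coh_thresh=0.35, water_thresh_db=-18.0, conf_thresh=0.8):
    """All inputs are co-registered 2-D arrays; returns a boolean snow-cover mask."""
    sar_snow = coherence < coh_thresh              # low interferometric coherence
    water = backscatter_db < water_thresh_db       # smooth water: very low backscatter
    sar_snow &= ~water                             # remove water bodies from the SAR result
    optical_snow = snow_confidence > conf_thresh   # Sentinel-2 snow confidence map
    return sar_snow | optical_snow                 # combined snow-cover mask
```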
An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images.
Gumaei, Abdu; Sammouda, Rachid; Al-Salman, Abdul Malik; Alsanad, Ahmed
2018-05-15
Among several palmprint feature extraction methods, the HOG-based method is attractive and performs well against changes in illumination and shadowing of palmprint images. However, it still lacks the robustness to extract palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF, that combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images with David Zhang's method to segment only the regions of interest. Next, palmprint features are extracted with the hybrid HOG-SGF method. Then, an optimized auto-encoder (AE) is utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) is applied for the classification task. In the evaluation phase of the proposed approach, a number of experiments were conducted on three publicly available palmprint databases, namely MS-PolyU of multispectral palmprint images and CASIA and Tongji of contactless palmprint images. The results reveal that the proposed approach outperforms existing state-of-the-art approaches even when a small number of training samples are used.
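A small sketch of the HOG portion of the descriptor using scikit-image; the steerable Gaussian filter stage, the auto-encoder, and the RELM classifier are omitted, and all parameter values are illustrative.

```python
from skimage.feature import hog

def hog_descriptor(roi):
    """roi: 2-D grayscale palmprint region of interest; returns a 1-D HOG feature vector."""
    return hog(roi, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), block_norm='L2-Hys')

# Descriptors from the training ROIs would then be compressed (e.g. by an auto-encoder)
# and classified, e.g. with a regularized extreme learning machine.
```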
An automatic agricultural zone classification procedure for crop inventory satellite images
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Kux, H. J.; Velasco, F. R. D.; Deoliveira, M. O. B.
1982-01-01
A classification procedure for assessing crop areal proportion in multispectral scanner images is discussed. The procedure is divided into four parts: labeling, classification, proportion estimation, and evaluation. The procedure also has the following characteristics: multitemporal classification, the need for minimal field information, and a verification capability between automatic classification and analyst labeling. The processing steps and the main algorithms involved are discussed. An outlook on the future of this technology is also presented.
Monitoring Forest Regrowth Using a Multi-Platform Time Series
NASA Technical Reports Server (NTRS)
Sabol, Donald E., Jr.; Smith, Milton O.; Adams, John B.; Gillespie, Alan R.; Tucker, Compton J.
1996-01-01
Over the past 50 years, the forests of western Washington and Oregon have been extensively harvested for timber. This has resulted in a heterogeneous mosaic of remaining mature forests, clear-cuts, new plantations, and second-growth stands that now occur in areas formerly dominated by extensive old-growth forests and younger forests resulting from fire disturbance. Traditionally, determination of seral stage and stand condition has been made using aerial photography and spot field observations, a methodology that is not only time- and resource-intensive but also falls short of providing current information on a regional scale. These limitations may be solved, in part, through the use of multispectral images, which can cover large areas at spatial resolutions on the order of tens of meters. A time series of multiple images can potentially be used to monitor land use (e.g., cutting and replanting) and to observe natural processes such as regeneration, maturation, and phenologic change. These processes are more likely to be spectrally observed in a time series composed of images taken during different seasons over a long period of time. Therefore, for many areas, it may be necessary to use a variety of images taken with different imaging systems. A common framework for interpretation is needed that reduces topographic, atmospheric, and instrumental effects, as well as differences in lighting geometry between images. The present state of remote-sensing technology in general use does not realize the full potential of multispectral data in areas of high topographic relief. For example, the primary method for analyzing images of forested landscapes in the Northwest has been statistical classifiers (e.g., parallelepiped, nearest-neighbor, maximum likelihood, etc.), often applied to uncalibrated multispectral data. Although this approach has produced useful information from individual images in some areas, land-cover classes defined by these techniques typically are not consistent for the same scene imaged under different illumination conditions, especially in mountainous regions. In addition, it is difficult to correct for atmospheric and instrumental differences between multiple scenes in a time series. In this paper, we present an approach for monitoring forest cutting and regrowth in a semi-mountainous portion of the southern Gifford Pinchot National Forest using a multisensor time series composed of MSS, TM, and AVIRIS images.
CRISM's Global Mapping of Mars, Part 3
NASA Technical Reports Server (NTRS)
2007-01-01
After a year in Mars orbit, CRISM has taken enough images to allow the team to release the first parts of a global spectral map of Mars to the Planetary Data System (PDS), NASA's digital library of planetary data. CRISM's global mapping is called the 'multispectral survey.' The team uses the word 'survey' because a reason for gathering this data set is to search for new sites for targeted observations, high-resolution views of the surface at 18 meters per pixel in 544 colors. Another reason for the multispectral survey is to provide contextual information. Targeted observations have such a large data volume (about 200 megabytes apiece) that only about 1% of Mars can be imaged at CRISM's highest resolution. The multispectral survey is a lower data volume type of observation that fills in the gaps between targeted observations, allowing scientists to better understand their geologic context. The global map is built from tens of thousands of image strips each about 10 kilometers (6.2 miles) wide and thousands of kilometers long. During the multispectral survey, CRISM returns data from only 72 carefully selected wavelengths that cover absorptions indicative of the mineral groups that CRISM is looking for on Mars. Data volume is further decreased by binning image pixels inside the instrument to a scale of about 200 meters (660 feet) per pixel. The total reduction in data volume per square kilometer is a factor of 700, making the multispectral survey manageable to acquire and transmit to Earth. Once on the ground, the strips of data are mosaicked into maps. The multispectral survey is too large to show the whole planet in a single map, so the map is divided into 1,964 'tiles,' each about 300 kilometers (186 miles) across. There are three versions of each tile, processed to progressively greater levels to strip away the obscuring effects of the dusty atmosphere and to highlight mineral variations in surface materials. This is the third and most processed version of tile 750, showing a part of Mars called Tyrrhena Terra in the ancient, heavily cratered highlands. The colored strips are CRISM multispectral survey data acquired over several months, in which each pixel began as calibrated 72-color spectrum of Mars. An experimental correction for illumination and atmospheric effects was applied to the data, to show how Mars' surface would appear if each strip was imaged with the same illumination and without an atmosphere. Then, the spectrum for each pixel was transformed into a set of 'summary parameters,' which indicate absorptions showing the presence of different minerals. Detections of the igneous, iron-bearing minerals olivine and pyroxene are shown in the red and blue image planes, respectively. Clay-like minerals called phyllosilicates, which formed when liquid water altered the igneous rocks, are shown in the green image plane. The gray areas between the strips are from an earlier mosaic of the planet taken by the Thermal Emission Imaging System (THEMIS) instrument on Mars Odyssey, and are included for context. Note that most areas imaged by CRISM contain pyroxene, and that olivine-containing rocks are concentrated on smooth deposits that fill some crater floors and the low areas between craters. Phyllosilicate-containing rocks are concentrated in and around small craters, such as the one at 13 degrees south latitude, 97 degrees east longitude. 
Their concentration in crater materials suggests that they were excavated when the craters formed, from a layer that was buried by the younger, less altered, olivine- and pyroxene-containing rocks. CRISM is one of six science instruments on NASA's Mars Reconnaissance Orbiter. Led by The Johns Hopkins University Applied Physics Laboratory, Laurel, Md., the CRISM team includes expertise from universities, government agencies and small businesses in the United States and abroad. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Mars Reconnaissance Orbiter and the Mars Science Laboratory for NASA's Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, built the orbiter.
NASA Astrophysics Data System (ADS)
Chernomyrdin, Nikita V.; Zaytsev, Kirill I.; Lesnichaya, Anastasiya D.; Kudrin, Konstantin G.; Cherkasova, Olga P.; Kurlov, Vladimir N.; Shikunova, Irina A.; Perchik, Alexei V.; Yurchenko, Stanislav O.; Reshetov, Igor V.
2016-09-01
In the present paper, the ability to differentiate basal cell carcinoma (BCC) and healthy skin by combining multi-spectral autofluorescence imaging, principal component analysis (PCA), and linear discriminant analysis (LDA) has been demonstrated. For this purpose, an experimental setup, which includes excitation and detection branches, has been assembled. The excitation branch utilizes a mercury arc lamp equipped with a 365-nm narrow-linewidth excitation filter, a beam homogenizer, and a mechanical chopper. The detection branch employs a set of bandpass filters with central wavelengths of spectral transparency of λ = 400, 450, 500, and 550 nm, and a digital camera. The setup has been used to study three samples of freshly excised BCC. PCA and LDA have been implemented to analyze the multi-spectral fluorescence imaging data. The observed results of this pilot study highlight the advantages of the proposed imaging technique for skin cancer diagnosis.
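A hedged scikit-learn sketch of the PCA + LDA analysis chain applied to per-region four-band autofluorescence intensities; the feature layout and component count are illustrative assumptions.

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

# X: (n_samples, 4) intensities in the 400/450/500/550 nm channels
# y: labels, e.g. 0 = healthy skin, 1 = basal cell carcinoma
def train_classifier(X, y, n_components=2):
    model = make_pipeline(PCA(n_components=n_components),
                          LinearDiscriminantAnalysis())
    return model.fit(X, y)

# predictions = train_classifier(X_train, y_train).predict(X_new)
```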
Mapping lipid and collagen by multispectral photoacoustic imaging of chemical bond vibration
NASA Astrophysics Data System (ADS)
Wang, Pu; Wang, Ping; Wang, Han-Wei; Cheng, Ji-Xin
2012-09-01
Photoacoustic microscopy using vibrational overtone absorption as a contrast mechanism allows bond-selective imaging of deep tissues. Due to the spectral similarity of molecules in the region of overtone vibration, it is difficult to interrogate chemical components using the photoacoustic signal at a single excitation wavelength. Here we demonstrate that lipids and collagen, two critical markers for many kinds of diseases, can be distinguished by multispectral photoacoustic imaging of the first overtone of the C-H bond. A phantom consisting of rat-tail tendon and fat was constructed to demonstrate this technique. Wavelengths between 1650 and 1850 nm were scanned to excite both the first overtone and combination bands of C-H bonds. B-scan multispectral photoacoustic images, in which each pixel contains a spectrum, were analyzed by a multivariate curve resolution-alternating least squares algorithm to recover the spatial distribution of collagen and lipids in the phantom.
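A simplified alternating-least-squares curve resolution with non-negativity on both concentration maps and spectra, in the spirit of MCR-ALS; shapes, initialization, and iteration count are illustrative assumptions rather than the authors' algorithm.

```python
import numpy as np
from scipy.optimize import nnls

def mcr_als(D, S_init, n_iter=50):
    """D: (n_pixels, n_wavelengths) multispectral photoacoustic data.
    S_init: (n_wavelengths, k) initial guess of component spectra (e.g. lipid, collagen).
    Returns concentration maps C (n_pixels, k) and refined spectra S (n_wavelengths, k)."""
    S = S_init.copy()
    for _ in range(n_iter):
        # Concentrations given spectra: one non-negative least squares fit per pixel.
        C = np.array([nnls(S, d)[0] for d in D])
        # Spectra given concentrations: one fit per wavelength.
        S = np.array([nnls(C, D[:, j])[0] for j in range(D.shape[1])])
    return C, S
```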
NASA Technical Reports Server (NTRS)
Settle, M.; Adams, J.
1982-01-01
Improved orbital imaging capabilities from the standpoint of different scientific disciplines, such as geology, botany, hydrology, and geography, were evaluated. A discussion of how geologists might exploit the anticipated measurement capabilities of future orbital imaging systems to discriminate and characterize different types of geologic materials exposed at the Earth's surface is presented. The principal objectives are to summarize past accomplishments in the use of multispectral imaging techniques for lithologic mapping; to identify critical gaps in earlier research efforts that currently limit the ability to extract useful information about the physical and chemical characteristics of geological materials from orbital multispectral surveys; and to define major thresholds of resolution and sensitivity within the visible and infrared portions of the electromagnetic spectrum which, if achieved, would result in significant improvements in the ability to discriminate and characterize different geological materials exposed at the Earth's surface.
Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition.
Park, Chulhee; Kang, Moon Gi
2016-05-18
A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors.
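A deliberately simplified linear sketch of the decomposition idea: each measured R, G, B value is modeled as a visible component plus a channel-specific fraction of the NIR signal estimated from the N channel. The leakage coefficients are placeholders; in practice they would come from the measured MSFA spectral sensitivities.

```python
import numpy as np

def restore_color(rgbn, nir_leak=(0.30, 0.25, 0.35)):
    """rgbn: (rows, cols, 4) demosaicked R, G, B, N image.
    Returns the NIR-corrected color image and the NIR band."""
    rgb = rgbn[..., :3].astype(float)
    nir = rgbn[..., 3:4].astype(float)
    leak = np.asarray(nir_leak, dtype=float)         # assumed per-channel NIR leakage
    visible = np.clip(rgb - nir * leak, 0.0, None)   # remove the NIR contribution
    return visible, nir[..., 0]
```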
METEOSAT studies of clouds and radiation budget
NASA Technical Reports Server (NTRS)
Saunders, R. W.
1982-01-01
Radiation budget studies of the atmosphere/surface system from Meteosat, cloud parameter determination from space, and sea surface temperature measurements from TIROS-N data are all described. This work was carried out on the interactive planetary image processing system (IPIPS), which allows interactive manipulation of the image data in addition to the conventional computational tasks. The current hardware configuration of IPIPS is shown. The I(2)S is the principal interactive display, allowing interaction via a trackball, four buttons under program control, or a touch tablet. Simple image processing operations, such as contrast enhancement, pseudocoloring, histogram equalization, and multispectral combinations, can all be executed at the push of a button.
Enhancement of TIMS images for photointerpretation
NASA Technical Reports Server (NTRS)
Gillespie, A. R.
1986-01-01
The Thermal Infrared Multispectral Scanner (TIMS) images consist of six channels of data acquired in bands between 8 and 12 microns; thus they contain information about both temperature and emittance. Scene temperatures are controlled by the reflectivity of the surface, but also by its geometry with respect to the Sun, time of day, and other factors unrelated to composition. Emittance is dependent upon composition alone. Thus the photointerpreter may wish to enhance emittance information selectively. Because thermal emittances in real scenes vary but little, image data tend to be highly correlated between channels. Special image processing is required to make this information available to the photointerpreter. Processing includes noise removal, construction of model emittance images, and construction of false-color pictures enhanced by decorrelation techniques.
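A compact sketch of one common decorrelation-stretch formulation on a six-band TIMS-like cube: rotate the bands into principal components, equalize their variances, and rotate back; the scaling choices are illustrative and not necessarily those used in the report.

```python
import numpy as np

def decorrelation_stretch(cube):
    """cube: (rows, cols, bands) array of thermal-infrared radiance or emittance."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    eigvals = np.maximum(eigvals, 1e-12)             # guard against degenerate bands
    # Whiten along the principal axes, then restore the original band variances.
    stretch = eigvecs @ np.diag(1.0 / np.sqrt(eigvals)) @ eigvecs.T
    Y = (X - mean) @ stretch * X.std(axis=0) + mean
    return Y.reshape(rows, cols, bands)
```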
Black, Robert W.; Haggland, Alan; Crosby, Greg
2003-01-01
Instream hydraulic and riparian habitat conditions and stream temperatures were characterized for selected stream segments in the Upper White River Basin, Washington. An aerial multispectral imaging system used digital cameras to photograph the stream segments across multiple wavelengths to characterize fish habitat and temperature conditions. All imageries were georeferenced. Fish habitat features were photographed at a resolution of 0.5 meter and temperature imageries were photographed at a 1.0-meter resolution. The digital multispectral imageries were classified using commercially available software. Aerial photographs were taken on September 21, 1999. Field habitat data were collected from August 23 to October 12, 1999, to evaluate the measurement accuracy and effectiveness of the multispectral imaging in determining the extent of the instream habitat variables. Fish habitat types assessed by this method were the abundance of instream hydraulic features such as pool and riffle habitats, turbulent and non-turbulent habitats, riparian composition, the abundance of large woody debris in the stream and riparian zone, and stream temperatures. Factors such as the abundance of instream woody debris, the location and frequency of pools, and stream temperatures generally are known to have a significant impact on salmon. Instream woody debris creates the habitat complexity necessary to maintain a diverse and healthy salmon population. The abundance of pools is indicative of a stream's ability to support fish and other aquatic organisms. Changes in water temperature can affect aquatic organisms by altering metabolic rates and oxygen requirements, altering their sensitivity to toxic materials and affecting their ability to avoid predators. The specific objectives of this project were to evaluate the use of an aerial multispectral imaging system to accurately identify instream hydraulic features and surface-water temperatures in the Upper White River Basin, to use the multispectral system to help establish baseline instream/riparian habitat conditions in the study area, and to qualitatively assess the imaging system for possible use in other Puget Sound rivers. For the most part, all multispectral imagery-based estimates of total instream riffle and pool area were less than field measurements. The imagery-based estimates for riffle habitat area ranged from 35.5 to 83.3 percent less than field measurements. Pool habitat estimates ranged from 139.3 percent greater than field measurements to 94.0 percent less than field measurements. Multispectral imagery-based estimates of turbulent habitat conditions ranged from 9.3 percent greater than field measurements to 81.6 percent less than field measurements. Multispectral imagery-based estimates of non-turbulent habitat conditions ranged from 27.7 to 74.1 percent less than field measurements. The absolute average percentage of difference between field and imagery-based habitat type areas was less for the turbulent and non-turbulent habitat type categories than for pools and riffles. The estimate of woody debris by multispectral imaging was substantially different than field measurements; percentage of differences ranged from +373.1 to -100 percent. 
Although the total area of riffles, pools, and turbulent and non-turbulent habitat types measured in the field were all substantially higher than those estimated from the multispectral imagery, the percentage of composition of each habitat type was not substantially different between the imagery-based estimates and field measurements.
NASA Astrophysics Data System (ADS)
Broderson, D.; Dierking, C.; Stevens, E.; Heinrichs, T. A.; Cherry, J. E.
2016-12-01
The Geographic Information Network of Alaska (GINA) at the University of Alaska Fairbanks (UAF) uses two direct broadcast antennas to receive data from a number of polar-orbiting weather satellites, including the Suomi National Polar Partnership (S-NPP) satellite. GINA uses data from S-NPP's Visible Infrared Imaging Radiometer Suite (VIIRS) to generate a variety of multispectral imagery products developed with the needs of the National Weather Service operational meteorologist in mind. Multispectral products have two primary advantages over single-channel products. First, they can more clearly highlight some terrain and meteorological features that are less evident in the component single channels. Second, multispectral products present the information from several bands in just one image, thereby sparing the meteorologist unnecessary time interrogating the component single bands individually. With 22 channels available from the VIIRS instrument, the number of possible multispectral products is theoretically huge. A small number of products will be emphasized in this presentation, chosen based on their proven utility in the forecasting environment. Multispectral products can be generated upstream of the end user or by the end user at their own workstation. The advantages and disadvantages of both approaches will be outlined. Lastly, the technique of improving the appearance of multispectral imagery by correcting for atmospheric reflectance at the shorter wavelengths will be described.
Klukkert, Marten; Wu, Jian X; Rantanen, Jukka; Carstensen, Jens M; Rades, Thomas; Leopold, Claudia S
2016-07-30
Monitoring of tablet quality attributes in the direct vicinity of the production process requires analytical techniques that allow fast, non-destructive, and accurate tablet characterization. The overall objective of this study was to investigate the applicability of multispectral UV imaging as a reliable, rapid technique for estimation of the tablet API content and tablet hardness, as well as determination of tablet intactness and the tablet surface density profile. One of the aims was to establish an image analysis approach based on multivariate image analysis and pattern recognition to evaluate the potential of UV imaging for automated quality control of tablets with respect to their intactness and surface density profile. Various tablets of different composition and different quality regarding their API content, radial tensile strength, intactness, and surface density profile were prepared using an eccentric as well as a rotary tablet press at compression pressures from 20 MPa up to 410 MPa. It was found that UV imaging can provide relevant information on both chemical and physical tablet attributes. The tablet API content and radial tensile strength could be estimated by UV imaging combined with partial least squares analysis. Furthermore, an image analysis routine was developed and successfully applied to the UV images that provided qualitative information on physical tablet surface properties such as intactness and surface density profiles, as well as quantitative information on variations in the surface density. In conclusion, this study demonstrates that UV imaging combined with image analysis is an effective and non-destructive method to determine chemical and physical quality attributes of tablets and is a promising approach for (near) real-time monitoring of the tablet compaction process and formulation optimization purposes. Copyright © 2015 Elsevier B.V. All rights reserved.
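A hedged sketch of the calibration step, relating per-tablet UV image features to reference API content by partial least squares with scikit-learn; the feature construction and the number of latent variables are illustrative assumptions.

```python
from sklearn.cross_decomposition import PLSRegression

def calibrate_api_model(X_train, y_train, n_components=3):
    """X_train: (n_tablets, n_features) UV-image features, e.g. per-wavelength mean intensities.
    y_train: (n_tablets,) reference API content from a wet-chemical assay."""
    pls = PLSRegression(n_components=n_components)
    pls.fit(X_train, y_train)
    return pls

# api_pred = calibrate_api_model(X_train, y_train).predict(X_new)
```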
Evaluation of skin melanoma in spectral range 450-950 nm using principal component analysis
NASA Astrophysics Data System (ADS)
Jakovels, D.; Lihacova, I.; Kuzmina, I.; Spigulis, J.
2013-06-01
Diagnostic potential of principal component analysis (PCA) of multi-spectral imaging data in the wavelength range 450- 950 nm for distant skin melanoma recognition is discussed. Processing of the measured clinical data by means of PCA resulted in clear separation between malignant melanomas and pigmented nevi.
Fingerprint enhancement using a multispectral sensor
NASA Astrophysics Data System (ADS)
Rowe, Robert K.; Nixon, Kristin A.
2005-03-01
The level of performance of a biometric fingerprint sensor is critically dependent on the quality of the fingerprint images. One of the most common types of optical fingerprint sensors relies on the phenomenon of total internal reflectance (TIR) to generate an image. Under ideal conditions, a TIR fingerprint sensor can produce high-contrast fingerprint images with excellent feature definition. However, images produced by the same sensor under conditions that include dry skin, dirt on the skin, and marginal contact between the finger and the sensor, are likely to be severely degraded. This paper discusses the use of multispectral sensing as a means to collect additional images with new information about the fingerprint that can significantly augment the system performance under both normal and adverse sample conditions. In the context of this paper, "multispectral sensing" is used to broadly denote a collection of images taken under different illumination conditions: different polarizations, different illumination/detection configurations, as well as different wavelength illumination. Results from three small studies using an early-stage prototype of the multispectral-TIR (MTIR) sensor are presented along with results from the corresponding TIR data. The first experiment produced data from 9 people, 4 fingers from each person and 3 measurements per finger under "normal" conditions. The second experiment provided results from a study performed to test the relative performance of TIR and MTIR images when taken under extreme dry and dirty conditions. The third experiment examined the case where the area of contact between the finger and sensor is greatly reduced.
Documentation of procedures for textural/spatial pattern recognition techniques
NASA Technical Reports Server (NTRS)
Haralick, R. M.; Bryant, W. F.
1976-01-01
A C-130 aircraft was flown over the Sam Houston National Forest on March 21, 1973 at 10,000 feet altitude to collect multispectral scanner (MSS) data. Existing textural and spatial automatic processing techniques were used to classify the MSS imagery into specified timber categories. Several classification experiments were performed on these data using features selected from the spectral bands and a textural transform band. The results indicate that (1) spatial post-processing of a classified image can cut the classification error to 1/2 or 1/3 of its initial value, (2) spatial post-processing of the classified image using combined spectral and textural features produces a resulting image with less error than post-processing of a classified image using only spectral features, and (3) classification without spatial post-processing using the combined spectral and textural features tends to produce about the same error rate as classification without spatial post-processing using only spectral features.
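One common form of spatial post-processing is a majority (mode) filter over the classified map, sketched below; the window size is an illustrative choice and this is not necessarily the exact technique used in the report.

```python
import numpy as np
from scipy import ndimage

def majority_filter(class_map, size=5):
    """class_map: 2-D integer array of per-pixel class labels.
    Re-labels each pixel with the most frequent class in its neighborhood."""
    def mode(values):
        return np.bincount(values.astype(int)).argmax()
    return ndimage.generic_filter(class_map, mode, size=size)
```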
Optimized Multi-Spectral Filter Array Based Imaging of Natural Scenes.
Li, Yuqi; Majumder, Aditi; Zhang, Hao; Gopi, M
2018-04-12
Multi-spectral imaging using a camera with more than three channels is an efficient method to acquire and reconstruct spectral data and is used extensively in tasks like object recognition, relighted rendering, and color constancy. Recently developed methods are used to only guide content-dependent filter selection where the set of spectral reflectances to be recovered are known a priori. We present the first content-independent spectral imaging pipeline that allows optimal selection of multiple channels. We also present algorithms for optimal placement of the channels in the color filter array yielding an efficient demosaicing order resulting in accurate spectral recovery of natural reflectance functions. These reflectance functions have the property that their power spectrum statistically exhibits a power-law behavior. Using this property, we propose power-law based error descriptors that are minimized to optimize the imaging pipeline. We extensively verify our models and optimizations using large sets of commercially available wide-band filters to demonstrate the greater accuracy and efficiency of our multi-spectral imaging pipeline over existing methods.
Multispectral image enhancement for H&E stained pathological tissue specimens
NASA Astrophysics Data System (ADS)
Bautista, Pinky A.; Abe, Tokiya; Yamaguchi, Masahiro; Ohyama, Nagaaki; Yagi, Yukako
2008-03-01
The presence of a liver disease such as cirrhosis can be determined by examining the proliferation of collagen fiber in a tissue slide stained with a special stain such as Masson's trichrome (MT). Collagen fiber and smooth muscle, which are both stained the same in an H&E stained slide, are stained blue and pink, respectively, in an MT-stained slide. In this paper we show that with multispectral imaging the difference between collagen fiber and smooth muscle can be visualized even in an H&E stained image. In the method, M Karhunen-Loève (KL) bases are derived using the spectral data of those H&E stained tissue components that can be easily differentiated from each other (i.e., nucleus, cytoplasm, red blood cells, etc.), and weighting factors are determined from the spectral residual error of fiber to enhance spectral features at certain wavelengths. Results of our experiment demonstrate the capability of multispectral imaging, and its advantage compared to conventional RGB imaging systems, to delineate tissue structures with subtle colorimetric differences.
Coastal modification of a scene employing multispectral images and vector operators.
Lira, Jorge
2017-05-01
Changes in sea level, wind patterns, sea current patterns, and tide patterns have produced morphologic transformations in the coastline area of Tamaulipas State in northeast Mexico. Such changes generated a modification of the coastline and variations in the texture-relief and texture of the continental area of Tamaulipas. Two high-resolution multispectral Satellite Pour l'Observation de la Terre (SPOT) images were employed to quantify the morphologic change of this continental area. The images cover a time span close to 10 years. A variant of principal component analysis was used to delineate the modification of the land-water line. To quantify changes in texture-relief and texture, principal component analysis was applied to the multispectral images. The first principal components of each image were modeled as a discrete bidimensional vector field. The divergence and Laplacian vector operators were applied to the discrete vector field. The divergence provided the change of texture, while the Laplacian produced the change of texture-relief in the study area.
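A sketch of the vector-operator step: treating the two first principal components as the components of a discrete 2-D vector field and computing its divergence and componentwise Laplacian with numpy; unit pixel spacing is assumed and the exact operator definitions used in the paper may differ.

```python
import numpy as np

def divergence_and_laplacian(u, v):
    """u, v: 2-D arrays, the two components of the vector field at each pixel."""
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)
    div = du_dx + dv_dy                                            # change of texture
    lap_u = np.gradient(du_dx, axis=1) + np.gradient(du_dy, axis=0)
    lap_v = np.gradient(dv_dx, axis=1) + np.gradient(dv_dy, axis=0)
    return div, lap_u + lap_v                                      # change of texture-relief
```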
Design and development of a simple UV fluorescence multi-spectral imaging system
NASA Astrophysics Data System (ADS)
Tovar, Carlos; Coker, Zachary; Yakovlev, Vladislav V.
2018-02-01
Healthcare access in low-resource settings is compromised by the availability of affordable and accurate diagnostic equipment. The four primary poverty-related diseases - AIDS, pneumonia, malaria, and tuberculosis - account for approximately 400 million annual deaths worldwide as of 2016 estimates. Current diagnostic procedures for these diseases are prolonged and can become unreliable under various conditions. We present the development of a simple low-cost UV fluorescence multi-spectral imaging system geared towards low resource settings for a variety of biological and in-vitro applications. Fluorescence microscopy serves as a useful diagnostic indicator and imaging tool. The addition of a multi-spectral imaging modality allows for the detection of fluorophores within specific wavelength bands, as well as the distinction between fluorophores possessing overlapping spectra. The developed instrument has the potential for a very diverse range of diagnostic applications in basic biomedical science and biomedical diagnostics and imaging. Performance assessment of the microscope will be validated with a variety of samples ranging from organic compounds to biological samples.
NASA Astrophysics Data System (ADS)
Clark, M. L.
2016-12-01
The goal of this study was to assess multi-temporal, Hyperspectral Infrared Imager (HyspIRI) satellite imagery for improved forest class mapping relative to multispectral satellites. The study area was the western San Francisco Bay Area, California and forest alliances (e.g., forest communities defined by dominant or co-dominant trees) were defined using the U.S. National Vegetation Classification System. Simulated 30-m HyspIRI, Landsat 8 and Sentinel-2 imagery were processed from image data acquired by NASA's AVIRIS airborne sensor in year 2015, with summer and multi-temporal (spring, summer, fall) data analyzed separately. HyspIRI reflectance was used to generate a suite of hyperspectral metrics that targeted key spectral features related to chemical and structural properties. The Random Forests classifier was applied to the simulated images and overall accuracies (OA) were compared to those from real Landsat 8 images. For each image group, broad land cover (e.g., Needle-leaf Trees, Broad-leaf Trees, Annual agriculture, Herbaceous, Built-up) was classified first, followed by a finer-detail forest alliance classification for pixels mapped as closed-canopy forest. There were 5 needle-leaf tree alliances and 16 broad-leaf tree alliances, including 7 Quercus (oak) alliance types. No forest alliance classification exceeded 50% OA, indicating that there was broad spectral similarity among alliances, most of which were not spectrally pure but rather a mix of tree species. In general, needle-leaf (Pine, Redwood, Douglas Fir) alliances had better class accuracies than broad-leaf alliances (Oaks, Madrone, Bay Laurel, Buckeye, etc). Multi-temporal data classifications all had 5-6% greater OA than with comparable summer data. For simulated data, HyspIRI metrics had 4-5% greater OA than Landsat 8 and Sentinel-2 multispectral imagery and 3-4% greater OA than HyspIRI reflectance. Finally, HyspIRI metrics had 8% greater OA than real Landsat 8 imagery. In conclusion, forest alliance classification was found to be a difficult remote sensing application with moderate resolution (30 m) satellite imagery; however, of the data tested, HyspIRI spectral metrics had the best performance relative to multispectral satellites.
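A hedged sketch of the classification stage, applying a Random Forests classifier to per-pixel feature vectors (simulated HyspIRI metrics or multispectral reflectances); sampling, feature construction, and accuracy-assessment details are omitted and the tree count is an illustrative choice.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def classify_alliances(X_train, y_train, X_test, y_test, n_trees=500):
    """X_*: (n_pixels, n_features) spectral metrics; y_*: alliance labels."""
    rf = RandomForestClassifier(n_estimators=n_trees, n_jobs=-1, random_state=0)
    rf.fit(X_train, y_train)
    overall_accuracy = accuracy_score(y_test, rf.predict(X_test))
    return rf, overall_accuracy
```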
Galileo multispectral imaging of Earth.
Geissler, P; Thompson, W R; Greenberg, R; Moersch, J; McEwen, A; Sagan, C
1995-08-25
Nearly 6000 multispectral images of Earth were acquired by the Galileo spacecraft during its two flybys. The Galileo images offer a unique perspective on our home planet through the spectral capability made possible by four narrowband near-infrared filters, intended for observations of methane in Jupiter's atmosphere, which are not incorporated in any of the currently operating Earth orbital remote sensing systems. Spectral variations due to mineralogy, vegetative cover, and condensed water are effectively mapped by the visible and near-infrared multispectral imagery, showing a wide variety of biological, meteorological, and geological phenomena. Global tectonic and volcanic processes are clearly illustrated by these images, providing a useful basis for comparative planetary geology. Differences between plant species are detected through the narrowband IR filters on Galileo, allowing regional measurements of variation in the "red edge" of chlorophyll and the depth of the 1-micrometer water band, which is diagnostic of leaf moisture content. Although evidence of life is widespread in the Galileo data set, only a single image (at approximately 2 km/pixel) shows geometrization plausibly attributable to our technical civilization. Water vapor can be uniquely imaged in the Galileo 0.73-micrometer band, permitting spectral discrimination of moist and dry clouds with otherwise similar albedo. Surface snow and ice can be readily distinguished from cloud cover by narrowband imaging within the sensitivity range of Galileo's silicon CCD camera. Ice grain size variations can be mapped using the weak H2O absorption at 1 micrometer, a technique which may find important applications in the exploration of the moons of Jupiter. The Galileo images have the potential to make unique contributions to Earth science in the areas of geological, meteorological and biological remote sensing, due to the inclusion of previously untried narrowband IR filters. The vast scale and near global coverage of the Galileo data set complements the higher-resolution data from Earth orbiting systems and may provide a valuable reference point for future studies of global change.
Karagiannis, G; Salpistis, Chr; Sergiadis, G; Chryssoulakis, Y
2007-06-01
In the present work, a powerful tool for the investigation of paintings is presented. It permits tuneable multispectral real-time imaging between 200 and 5000 nm and the simultaneous multispectral acquisition of spectroscopic data from the same region. We propose the term infrared reflectoscopy for tuneable infrared imaging of paintings (Chryssoulakis and Chassery, The Application of Physicochemical Methods of Analysis and Image Processing Techniques to Painted Works of Art, Erasmus Project ICP-88-006-6, Athens, June, 1989), a technique that is especially effective when the spectroscopic data acquisition is performed between 800 and 1900 nm. Elements such as underdrawings, old damage that is not visible to the naked eye, later interventions or overpaintings, hidden signatures, nonvisible inscriptions, and authenticity features can thus be detected as the overlying paint layers become successively "transparent" due to the deep infrared penetration. The spectroscopic data are collected from each point of the studied area with a 5 nm step through grey-level measurement, after adequate infrared reflectance (%R) and curve calibration. The detection limits of the infrared detector, as well as the power distribution of the radiation emerging through the micrometer slit assembly of the monochromator in use, are also taken into account. Inorganic pigments can thus be identified and their physicochemical properties directly compared to the corresponding infrared images at each wavelength within the optimum region. To check its effectiveness, this method was applied to an experimental portable icon of known stratigraphy.
Impervious surfaces mapping using high resolution satellite imagery
NASA Astrophysics Data System (ADS)
Shirmeen, Tahmina
In recent years, impervious surfaces have emerged not only as an indicator of the degree of urbanization, but also as an indicator of environmental quality. As impervious surface area increases, storm water runoff increases in velocity, quantity, temperature and pollution load. Any of these attributes can contribute to the degradation of natural hydrology and water quality. Various image processing techniques have been used to identify the impervious surfaces, however, most of the existing impervious surface mapping tools used moderate resolution imagery. In this project, the potential of standard image processing techniques to generate impervious surface data for change detection analysis using high-resolution satellite imagery was evaluated. The city of Oxford, MS was selected as the study site for this project. Standard image processing techniques, including Normalized Difference Vegetation Index (NDVI), Principal Component Analysis (PCA), a combination of NDVI and PCA, and image classification algorithms, were used to generate impervious surfaces from multispectral IKONOS and QuickBird imagery acquired in both leaf-on and leaf-off conditions. Accuracy assessments were performed, using truth data generated by manual classification, with Kappa statistics and Zonal statistics to select the most appropriate image processing techniques for impervious surface mapping. The performance of selected image processing techniques was enhanced by incorporating Soil Brightness Index (SBI) and Greenness Index (GI) derived from Tasseled Cap Transformed (TCT) IKONOS and QuickBird imagery. A time series of impervious surfaces for the time frame between 2001 and 2007 was made using the refined image processing techniques to analyze the changes in IS in Oxford. It was found that NDVI and the combined NDVI--PCA methods are the most suitable image processing techniques for mapping impervious surfaces in leaf-off and leaf-on conditions respectively, using high resolution multispectral imagery. It was also found that IS data generated by these techniques can be refined by removing the conflicting dry soil patches using SBI and GI obtained from TCT of the same imagery used for IS data generation. The change detection analysis of the IS time series shows that Oxford experienced the major changes in IS from the year 2001 to 2004 and 2006 to 2007.
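A minimal sketch of how NDVI screening and a first principal component might be combined to flag impervious-surface candidates on a four-band image; the band order, NDVI threshold, and percentile cut below are assumptions, not the project's settings.

```python
# Hedged sketch of NDVI + PCA screening for impervious-surface candidates on a
# 4-band (B, G, R, NIR) image; the input is a random placeholder array.
import numpy as np
from sklearn.decomposition import PCA

def ndvi(red, nir, eps=1e-6):
    return (nir - red) / (nir + red + eps)

img = np.random.rand(400, 400, 4)          # placeholder for IKONOS/QuickBird data
red, nir = img[..., 2], img[..., 3]

vegetation_mask = ndvi(red, nir) > 0.3      # assumed NDVI threshold for vegetation

# First principal component used as a brightness-like layer over all pixels
pc1 = PCA(n_components=1).fit_transform(img.reshape(-1, 4)).reshape(img.shape[:2])
impervious_candidate = (~vegetation_mask) & (pc1 > np.percentile(pc1, 60))
```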
Multispectral Analysis of NMR Imagery
NASA Technical Reports Server (NTRS)
Butterfield, R. L.; Vannier, M. W. And Associates; Jordan, D.
1985-01-01
Conference paper discusses initial efforts to adapt multispectral satellite-image analysis to nuclear magnetic resonance (NMR) scans of human body. Flexibility of these techniques makes it possible to present NMR data in variety of formats, including pseudocolor composite images of pathological internal features. Techniques do not have to be greatly modified from form in which used to produce satellite maps of such Earth features as water, rock, or foliage.
NASA Astrophysics Data System (ADS)
Bell, James F.; Wellington, Danika; Hardgrove, Craig; Godber, Austin; Rice, Melissa S.; Johnson, Jeffrey R.; Fraeman, Abigail
2016-10-01
The Mars Science Laboratory (MSL) Curiosity rover Mastcam is a pair of multispectral CCD cameras that have been imaging the surface and atmosphere in three broadband visible RGB color channels as well as nine additional narrowband color channels between 400 and 1000 nm since the rover's landing in August 2012. As of Curiosity sol 1159 (the most recent PDS data release as of this writing), approximately 140 multispectral imaging targets have been imaged using all twelve unique bandpasses. Near-simultaneous imaging of an onboard calibration target allows rapid relative reflectance calibration of these data to radiance factor and estimated Lambert albedo, for direct comparison to lab reflectance spectra of rocks, minerals, and mixtures. Surface targets among this data set include a variety of outcrop and float rocks (some containing light-toned veins), unconsolidated pebbles and clasts, and loose sand and soil. Some of these targets have been brushed, scuffed, or otherwise disturbed by the rover in order to reveal the (less dusty) interiors of these materials, and those targets and each of Curiosity's drill holes and tailings piles have been specifically targeted for multispectral imaging. Analysis of the relative reflectance spectra of these materials, sometimes in concert with additional compositional and/or mineralogic information from Curiosity's ChemCam LIBS and passive-mode spectral data and CheMin XRD data, reveals the presence of relatively broad solid state crystal field and charge transfer absorption features characteristic of a variety of common iron-bearing phases, including hematite (both nanophase and crystalline), ferric sulfate, olivine, and pyroxene. In addition, Mastcam is sensitive to a weak hydration feature in the 900-1000 nm region that can provide insight on the hydration state of some of these phases, especially sulfates. Here we summarize the Mastcam multispectral data set and the major potential phase identifications made using that data set during the traverse so far in Gale crater, and describe the ways that Mastcam multispectral observations will continue to inform the ongoing ascent and exploration of Mt. Sharp, Gale crater's layered central mound of sedimentary rocks.
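As a purely illustrative sketch of calibration-target-based relative reflectance (not the Mastcam pipeline), a scene image in one bandpass can be scaled by the ratio of the target's known laboratory reflectance to its observed signal; all values and names below are assumptions.

```python
# Toy relative-reflectance scaling against an onboard calibration target.
# The scene image, target signal, and lab reflectance are made-up placeholders.
import numpy as np

scene_dn = np.random.rand(100, 100) * 4000     # placeholder scene image (one band)
caltarget_dn = 2500.0                          # mean signal over a caltarget patch
caltarget_reflectance = 0.45                   # assumed lab reflectance of that patch

relative_reflectance = scene_dn * (caltarget_reflectance / caltarget_dn)
```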
NASA Astrophysics Data System (ADS)
Matikainen, L.; Karila, K.; Hyyppä, J.; Puttonen, E.; Litkey, P.; Ahokas, E.
2017-10-01
This article summarises our first results and experiences on the use of multispectral airborne laser scanner (ALS) data. Optech Titan multispectral ALS data over a large suburban area in Finland were acquired on three different dates in 2015-2016. We investigated the feasibility of the data from the first date for land cover classification and road mapping. Object-based analyses with segmentation and random forests classification were used. The potential of the data for change detection of buildings and roads was also demonstrated. The overall accuracy of land cover classification results with six classes was 96 % compared with validation points. The data also showed high potential for road detection, road surface classification and change detection. The multispectral intensity information appeared to be very important for automated classifications. Compared to passive aerial images, the intensity images have interesting advantages, such as the lack of shadows. Currently, we focus on analyses and applications with the multitemporal multispectral data. Important questions include, for example, the potential and challenges of the multitemporal data for change detection.
Novikova, Anna; Carstensen, Jens M; Rades, Thomas; Leopold, Prof Dr Claudia S
2016-12-30
In the present study the applicability of multispectral UV imaging in combination with multivariate image analysis for surface evaluation of MUPS tablets was investigated with respect to the differentiation of the API pellets from the excipients matrix, estimation of the drug content as well as pellet distribution, and influence of the coating material and tablet thickness on the predictive model. Different formulations consisting of coated drug pellets with two coating polymers (Aquacoat ® ECD and Eudragit ® NE 30 D) at three coating levels each were compressed to MUPS tablets with various amounts of coated pellets and different tablet thicknesses. The coated drug pellets were clearly distinguishable from the excipients matrix using a partial least squares approach regardless of the coating layer thickness and coating material used. Furthermore, the number of the detected drug pellets on the tablet surface allowed an estimation of the true drug content in the respective MUPS tablet. In addition, the pellet distribution in the MUPS formulations could be estimated by UV image analysis of the tablet surface. In conclusion, this study revealed that UV imaging in combination with multivariate image analysis is a promising approach for the automatic quality control of MUPS tablets during the manufacturing process. Copyright © 2016 Elsevier B.V. All rights reserved.
Kim, Min-Gab; Kim, Jin-Yong
2018-05-01
In this paper, we introduce a method to overcome the limitations of thickness measurement for a micro-patterned thin film. A spectroscopic imaging reflectometer system consisting of an acousto-optic tunable filter, a charge-coupled-device camera, and a high-magnification objective lens was proposed, and a stack of multispectral images was generated. To improve accuracy and lateral resolution in the reconstruction of the two-dimensional thin-film thickness, image restoration based on an iterative deconvolution algorithm was applied to compensate for image degradation caused by blurring, prior to the analysis of the spectral reflectance profile at each pixel of the multispectral images.
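The abstract does not specify which iterative deconvolution algorithm was used; as one common choice, a Richardson-Lucy sketch under an assumed Gaussian point-spread function is shown below.

```python
# Richardson-Lucy deconvolution as a stand-in for the unspecified iterative
# algorithm; the image and the Gaussian PSF are placeholder assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

def gaussian_psf(size=15, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

blurred = gaussian_filter(np.random.rand(128, 128), sigma=2.0)  # placeholder band image
restored = richardson_lucy(blurred, gaussian_psf(), 30)         # 30 iterations (assumed)
```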
Multispectral simulation environment for modeling low-light-level sensor systems
NASA Astrophysics Data System (ADS)
Ientilucci, Emmett J.; Brown, Scott D.; Schott, John R.; Raqueno, Rolando V.
1998-11-01
Image intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios including military night vision and civilian rescue operations. These sensors utilize the available visible region photons and an amplification process to produce high contrast imagery. It has been demonstrated that processing techniques can further enhance the quality of this imagery. For example, fusion with matching thermal IR imagery can improve image content when very little visible region contrast is available. To aid in the improvement of current algorithms and the development of new ones, a high fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low-light-level conditions is desired. This paper describes a modeling environment attempting to meet these criteria by addressing the task as two individual components: (1) prediction of a low-light-level radiance field from an arbitrary scene, and (2) simulation of the output from a low-light-level sensor for a given radiance field. The radiance prediction engine utilized in this environment is the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model which is a first principles based multi-spectral synthetic image generation model capable of producing an arbitrary number of bands in the 0.28 to 20 micrometer region. The DIRSIG model is utilized to produce high spatial and spectral resolution radiance field images. These images are then processed by a user configurable multi-stage low-light-level sensor model that applies the appropriate noise and modulation transfer function (MTF) at each stage in the image processing chain. This includes the ability to reproduce common intensifying sensor artifacts such as saturation and 'blooming.' Additionally, co-registered imagery in other spectral bands may be simultaneously generated for testing fusion and exploitation algorithms. This paper discusses specific aspects of the DIRSIG radiance prediction for low-light-level conditions including the incorporation of natural and man-made sources which emphasizes the importance of accurate BRDF. A description of the implementation of each stage in the image processing and capture chain for the LLL model is also presented. Finally, simulated images are presented and qualitatively compared to lab acquired imagery from a commercial system.
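A toy sketch of the second component, the sensor chain, applying an assumed MTF blur, photon (Poisson) noise, additive read noise, and saturation to a placeholder radiance field; none of the parameter values are DIRSIG or LLL-model settings.

```python
# Toy sensor chain: radiance field -> MTF blur -> shot noise -> read noise -> clip.
# All inputs and parameters are illustrative placeholders.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
radiance = rng.random((256, 256)) * 1e3           # placeholder radiance field (photons)

blurred = gaussian_filter(radiance, sigma=1.5)     # crude stand-in for the system MTF
shot_noise = rng.poisson(blurred).astype(float)    # photon (Poisson) noise
read_noise = rng.normal(0.0, 5.0, blurred.shape)   # additive electronics noise
digital_counts = np.clip(shot_noise + read_noise, 0, 4095)  # 12-bit saturation
```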
Portable multispectral imaging system for oral cancer diagnosis
NASA Astrophysics Data System (ADS)
Hsieh, Yao-Fang; Ou-Yang, Mang; Lee, Cheng-Chung
2013-09-01
This study presents a portable multispectral imaging system that can acquire images at specific spectral bands in vivo for oral cancer diagnosis. According to the research literature, the autofluorescence of cells and tissue has been widely applied to diagnose oral cancer, and the spectral distribution of the excited fluorescence differs between lesioned epithelial cells and normal cells. We have developed hyperspectral and multispectral techniques for oral cancer diagnosis over three generations; this research is the third generation. The excitation and emission spectra used for diagnosis were determined in the first-generation research. The portable system for oral cancer detection was adapted from an existing handheld microscope. A UV LED illuminates the surface of the oral cavity and excites the cells to produce fluorescence. The image passes through the central channel, unwanted spectral content is removed by filter selection, and the light is focused by a lens onto the image sensor, so that an image at the specific wavelength of the fluorescence reaction can be acquired. The specificity and sensitivity of the system are 85% and 90%, respectively.
Morphological Feature Extraction for Automatic Registration of Multispectral Images
NASA Technical Reports Server (NTRS)
Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.
2007-01-01
The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.
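As an illustrative stand-in for (not a reproduction of) the morphological extraction idea, bright and dark structures smaller than a chosen structuring element can be flagged with top-hat residues; the element size and threshold below are assumptions.

```python
# Toy morphological landmark-candidate detection with white and black top-hat
# residues; the band image, structuring-element size, and threshold are made up.
import numpy as np
from skimage.morphology import disk, white_tophat, black_tophat

band = np.random.rand(256, 256)                  # placeholder multispectral band
selem = disk(5)                                  # assumed structuring-element size

residue = white_tophat(band, selem) + black_tophat(band, selem)
candidates = residue > np.percentile(residue, 99)   # strongest morphological features
```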
Cartography for lunar exploration: 2008 status and mission plans
Kirk, R.L.; Archinal, B.A.; Gaddis, L.R.; Rosiek, M.R.; Chen, Jun; Jiang, Jie; Nayak, Shailesh
2008-01-01
The initial spacecraft exploration of the Moon in the 1960s-70s yielded extensive data, primarily in the form of film and television images, which were used to produce a large number of hardcopy maps by conventional techniques. A second era of exploration, beginning in the early 1990s, has produced digital data including global multispectral imagery and altimetry, from which a new generation of digital map products tied to a rapidly evolving global control network has been made. Efforts are also underway to scan the earlier hardcopy maps for online distribution and to digitize the film images so that modern processing techniques can be used to make high-resolution digital terrain models (DTMs) and image mosaics consistent with the current global control. The pace of lunar exploration is accelerating dramatically, with as many as eight new missions already launched or planned for the current decade. These missions, of which the most important for cartography are SMART-1 (Europe), Kaguya/SELENE (Japan), Chang'e-1 (China), Chandrayaan-1 (India), and Lunar Reconnaissance Orbiter (USA), will return a volume of data exceeding that of all previous lunar and planetary missions combined. Framing and scanner camera images, including multispectral and stereo data, hyperspectral images, synthetic aperture radar (SAR) images, and laser altimetry will all be collected, including, in most cases, multiple data sets of each type. Substantial advances in international standardization and cooperation, development of new and more efficient data processing methods, and availability of resources for processing and archiving will all be needed if the next generation of missions are to fulfill their potential for high-precision mapping of the Moon in support of subsequent exploration and scientific investigation.
NASA Astrophysics Data System (ADS)
Dementev, A. O.; Dmitriev, E. V.; Kozoderov, V. V.; Egorov, V. D.
2017-10-01
Hyperspectral imaging is a promising, up-to-date technology widely applied to accurate thematic mapping. The large number of narrow survey channels allows subtle differences in the spectral characteristics of objects to be exploited and supports more detailed classification than standard multispectral data. The difficulties encountered in processing hyperspectral images are usually associated with the redundancy of spectral information, which leads to the curse of dimensionality. Methods currently used for recognizing objects in multispectral and hyperspectral images are usually based on standard supervised classification algorithms of varying complexity, and their accuracy can differ significantly depending on the classification task. In this paper we study the performance of ensemble classification methods for classifying forest vegetation. Error-correcting output codes (ECOC) and boosting are tested on artificial data and real hyperspectral images. It is demonstrated that boosting gives a more significant improvement when used with simple base classifiers; the accuracy in this case is comparable to that of an ECOC classifier with a Gaussian-kernel SVM base algorithm, so the benefit of boosting ECOC with a Gaussian-kernel SVM is questionable. It is also demonstrated that the selected ensemble classifiers allow forest species to be recognized with accuracy high enough to be compared with ground-based forest inventory data.
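A small sketch comparing the two ensemble strategies named above on synthetic data, using an error-correcting output code wrapper around an RBF-kernel SVM and boosted decision stumps; the dataset and hyperparameters are illustrative only, not those of the study.

```python
# Compare ECOC with an RBF-SVM base learner against boosted decision stumps on
# a toy multiclass problem; settings are placeholder assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=30, n_informative=15,
                           n_classes=5, random_state=0)

ecoc_svm = OutputCodeClassifier(SVC(kernel="rbf", gamma="scale"),
                                code_size=2.0, random_state=0)
boosted_stumps = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                                    n_estimators=200, random_state=0)

for name, clf in [("ECOC + RBF SVM", ecoc_svm), ("Boosted stumps", boosted_stumps)]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```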
NASA Astrophysics Data System (ADS)
Li, Jiao; Zhang, Songhe; Chekkoury, Andrei; Glasl, Sarah; Vetschera, Paul; Koberstein-Schwarz, Benno; Omar, Murad; Ntziachristos, Vasilis
2017-03-01
Multispectral optoacoustic mesoscopy (MSOM) has recently been introduced for cancer imaging and has the potential for high-resolution imaging of cancer development in vivo at depths beyond the diffusion limit. Based on spectral features, optoacoustic imaging can visualize angiogenesis and image the heterogeneity of malignant tumors through endogenous hemoglobin. However, high-resolution structural and functional imaging of a whole tumor mass is limited by modest penetration and image quality, owing to the limited capability of ultrasound detectors and two-dimensional scan geometries. In this study, we introduce a novel MSOM for imaging subcutaneous or orthotopic tumors implanted in laboratory mice, using a high-frequency ultrasound linear array and a conical scanning geometry. Detailed volumetric images of vasculature and tissue oxygen saturation in entire tumors are obtained in vivo at depths up to 10 mm, with spatial resolution approaching 70 μm. This performance enables the visualization of vascular morphology and hypoxic conditions, which has been verified with ex vivo studies. These findings demonstrate the potential of MSOM for preclinical oncological studies of deep solid tumors, facilitating the characterization of tumor angiogenesis and the evaluation of treatment strategies.
NASA Astrophysics Data System (ADS)
Yang, Xue; Hu, Yajia; Li, Gang; Lin, Ling
2018-02-01
This paper proposes an optimized lighting method that applies a shaped-function signal to increase the dynamic range of a light emitting diode (LED) multispectral imaging system. The method is based on the linear response zone of the analog-to-digital conversion (ADC) and the spectral response of the camera. Auxiliary light in a spectral region where the camera is more sensitive is introduced to increase the number of A/D quantization levels within the linear response zone of the ADC and to improve the signal-to-noise ratio. The active light is modulated by the shaped-function signal to improve the gray-scale resolution of the image, while the auxiliary light is modulated by a constant-intensity signal, which makes it easy to acquire the images under active-light irradiation. The least-squares method is employed to precisely extract the desired images, as sketched below. One wavelength in LED-based multispectral imaging was taken as an example. Experiments proved that both the gray-scale resolution and the information accuracy of the images acquired by the proposed method were significantly improved. The optimized method opens up avenues for the hyperspectral imaging of biological tissue.
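A toy least-squares separation in the spirit described above: each pixel's frame sequence is modeled as a known shaped waveform times the active-light image plus a constant auxiliary term, and both images are recovered jointly; the waveform, sizes, and noise level are made up.

```python
# Toy least-squares separation of a modulated active-light component from a
# constant auxiliary component in a frame sequence; all inputs are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_frames, h, w = 50, 64, 64
s = np.sin(np.linspace(0, 2 * np.pi, n_frames)) ** 2      # assumed "shaped" waveform

active = rng.random((h, w))          # image under the active (modulated) light
auxiliary = rng.random((h, w))       # image under the constant auxiliary light
frames = s[:, None, None] * active + auxiliary + rng.normal(0, 0.01, (n_frames, h, w))

# Per-pixel model frames[t] = a*s[t] + b, solved jointly for all pixels
A = np.column_stack([s, np.ones(n_frames)])                # (n_frames, 2) design matrix
coeffs, *_ = np.linalg.lstsq(A, frames.reshape(n_frames, -1), rcond=None)
active_est = coeffs[0].reshape(h, w)                       # recovered active image
auxiliary_est = coeffs[1].reshape(h, w)                    # recovered auxiliary image
```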
NASA Technical Reports Server (NTRS)
1976-01-01
Papers are presented on the applicability of Landsat data to water management and control needs, IBIS, a geographic information system based on digital image processing and image raster datatype, and the Image Data Access Method (IDAM) for the Earth Resources Interactive Processing System. Attention is also given to the Prototype Classification and Mensuration System (PROCAMS) applied to agricultural data, the use of Landsat for water quality monitoring in North Carolina, and the analysis of geophysical remote sensing data using multivariate pattern recognition. The Illinois crop-acreage estimation experiment, the Pacific Northwest Resources Inventory Demonstration, and the effects of spatial misregistration on multispectral recognition are also considered. Individual items are announced in this issue.
Kupssinskü, Lucas S.; T. Guimarães, Tainá; Koste, Emilie C.; da Silva, Juarez M.; de Souza, Laís V.; Oliverio, William F. M.; Jardim, Rogélio S.; Koch, Ismael É.; de Souza, Jonas G.; Mauad, Frederico F.
2018-01-01
Water quality monitoring through remote sensing with UAVs is best conducted using multispectral sensors; however, these sensors are expensive. We aimed to predict multispectral bands from a low-cost sensor (R, G, B bands) using artificial neural networks (ANN). We studied a lake located on the campus of Unisinos University, Brazil, using a low-cost sensor mounted on a UAV. Simultaneously, we collected water samples during the UAV flight to determine total suspended solids (TSS) and dissolved organic matter (DOM). We correlated the three bands predicted with TSS and DOM. The results show that the ANN validation process predicted the three bands of the multispectral sensor using the three bands of the low-cost sensor with a low average error of 19%. The correlations with TSS and DOM resulted in R2 values of greater than 0.60, consistent with literature values. PMID:29315219
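A hedged sketch of the RGB-to-multispectral regression idea with a small multilayer perceptron; the sample counts, band count, and network size are assumptions rather than the study's settings.

```python
# Predict co-registered multispectral bands from RGB pixel values with a small
# MLP; the data below are random placeholders, not the lake imagery.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rgb = np.random.rand(5000, 3)               # placeholder RGB pixel samples
multispectral = np.random.rand(5000, 3)     # placeholder co-registered target bands

X_train, X_test, y_train, y_test = train_test_split(
    rgb, multispectral, test_size=0.3, random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
ann.fit(X_train, y_train)

mae = np.mean(np.abs(ann.predict(X_test) - y_test))
print(f"mean absolute error across bands: {mae:.3f}")
```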
NASA Astrophysics Data System (ADS)
Dechesne, Clément; Mallet, Clément; Le Bris, Arnaud; Gouet-Brunet, Valérie
2017-04-01
Forest stands are the basic units for forest inventory and mapping. Stands are defined as large forested areas (e.g., ⩾ 2 ha) of homogeneous tree species composition and age. Their accurate delineation is usually performed by human operators through visual analysis of very high resolution (VHR) infra-red images. This task is tedious, highly time consuming, and should be automated for scalability and efficient updating purposes. In this paper, a method based on the fusion of airborne lidar data and VHR multispectral images is proposed for the automatic delineation of forest stands containing one dominant species (purity superior to 75%). This is the key preliminary task for forest land-cover database update. The multispectral images give information about the tree species whereas 3D lidar point clouds provide geometric information on the trees and allow their individual extraction. Multi-modal features are computed, both at pixel and object levels: the objects are individual trees extracted from lidar data. A supervised classification is then performed at the object level in order to coarsely discriminate the existing tree species in each area of interest. The classification results are further processed to obtain homogeneous areas with smooth borders by employing an energy minimum framework, where additional constraints are joined to form the energy function. The experimental results show that the proposed method provides very satisfactory results both in terms of stand labeling and delineation (overall accuracy ranges between 84 % and 99 %).
Torres-Sánchez, Jorge; López-Granados, Francisca; De Castro, Ana Isabel; Peña-Barragán, José Manuel
2013-01-01
A new aerial platform has risen recently for image acquisition, the Unmanned Aerial Vehicle (UAV). This article describes the technical specifications and configuration of a UAV used to capture remote images for early season site- specific weed management (ESSWM). Image spatial and spectral properties required for weed seedling discrimination were also evaluated. Two different sensors, a still visible camera and a six-band multispectral camera, and three flight altitudes (30, 60 and 100 m) were tested over a naturally infested sunflower field. The main phases of the UAV workflow were the following: 1) mission planning, 2) UAV flight and image acquisition, and 3) image pre-processing. Three different aspects were needed to plan the route: flight area, camera specifications and UAV tasks. The pre-processing phase included the correct alignment of the six bands of the multispectral imagery and the orthorectification and mosaicking of the individual images captured in each flight. The image pixel size, area covered by each image and flight timing were very sensitive to flight altitude. At a lower altitude, the UAV captured images of finer spatial resolution, although the number of images needed to cover the whole field may be a limiting factor due to the energy required for a greater flight length and computational requirements for the further mosaicking process. Spectral differences between weeds, crop and bare soil were significant in the vegetation indices studied (Excess Green Index, Normalised Green-Red Difference Index and Normalised Difference Vegetation Index), mainly at a 30 m altitude. However, greater spectral separability was obtained between vegetation and bare soil with the index NDVI. These results suggest that an agreement among spectral and spatial resolutions is needed to optimise the flight mission according to every agronomical objective as affected by the size of the smaller object to be discriminated (weed plants or weed patches).
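The three indices named above, computed per pixel from placeholder R, G, B, and NIR arrays (Excess Green as 2G - R - B, and the two normalized differences); band scaling and the arrays themselves are assumptions.

```python
# Per-pixel vegetation indices from placeholder band arrays (not UAV imagery).
import numpy as np

def excess_green(r, g, b):
    return 2 * g - r - b

def ngrdi(r, g, eps=1e-6):
    return (g - r) / (g + r + eps)

def ndvi(r, nir, eps=1e-6):
    return (nir - r) / (nir + r + eps)

r, g, b, nir = (np.random.rand(512, 512) for _ in range(4))
exg_map, ngrdi_map, ndvi_map = excess_green(r, g, b), ngrdi(r, g), ndvi(r, nir)
```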
Multispectral Imaging in Cultural Heritage Conservation
NASA Astrophysics Data System (ADS)
Del Pozo, S.; Rodríguez-Gonzálvez, P.; Sánchez-Aparicio, L. J.; Muñoz-Nieto, A.; Hernández-López, D.; Felipe-García, B.; González-Aguilera, D.
2017-08-01
This paper sums up the main contribution derived from the thesis entitled "Multispectral imaging for the analysis of materials and pathologies in civil engineering, constructions and natural spaces" awarded by CIPA-ICOMOS for its connection with the preservation of Cultural Heritage. This thesis is framed within close-range remote sensing approaches by the fusion of sensors operating in the optical domain (visible to shortwave infrared spectrum). In the field of heritage preservation, multispectral imaging is a suitable technique due to its non-destructive nature and its versatility. It combines imaging and spectroscopy to analyse materials and land covers and enables the use of a variety of different geomatic sensors for this purpose. These sensors collect both spatial and spectral information for a given scenario and a specific spectral range, so that, their smaller storage units save the spectral properties of the radiation reflected by the surface of interest. The main goal of this research work is to characterise different construction materials as well as the main pathologies of Cultural Heritage elements by combining active and passive sensors recording data in different ranges. Conclusions about the suitability of each type of sensor and spectral range are drawn in relation to each particular case study and damage. It should be emphasised that results are not limited to images, since 3D intensity data from laser scanners can be integrated with 2D data from passive sensors obtaining high quality products due to the added value that metric brings to multispectral images.
van den Berg, Nynke S; Buckle, Tessa; KleinJan, Gijs H; van der Poel, Henk G; van Leeuwen, Fijs W B
2017-07-01
During (robot-assisted) sentinel node (SN) biopsy procedures, intraoperative fluorescence imaging can be used to enhance radioguided SN excision. For this, combined pre- and intraoperative SN identification was realized using the hybrid SN tracer, indocyanine green-99mTc-nanocolloid. Combining this dedicated SN tracer with a lymphangiographic tracer such as fluorescein may further enhance the accuracy of SN biopsy. Clinical evaluation of a multispectral fluorescence guided surgery approach using the dedicated SN tracer ICG-99mTc-nanocolloid, the lymphangiographic tracer fluorescein, and a commercially available fluorescence laparoscope. Pilot study in ten patients with prostate cancer. Following ICG-99mTc-nanocolloid administration and preoperative lymphoscintigraphy and single-photon emission computed tomography imaging, the number and location of SNs were determined. Fluorescein was injected intraprostatically immediately after the patient was anesthetized. A multispectral fluorescence laparoscope was used intraoperatively to identify both fluorescent signatures. Multispectral fluorescence imaging during robot-assisted radical prostatectomy with extended pelvic lymph node dissection and SN biopsy. (1) Number and location of preoperatively identified SNs. (2) Number and location of SNs intraoperatively identified via ICG-99mTc-nanocolloid imaging. (3) Rate of intraoperative lymphatic duct identification via fluorescein imaging. (4) Tumor status of excised (sentinel) lymph node(s). (5) Postoperative complications and follow-up. Near-infrared fluorescence imaging of ICG-99mTc-nanocolloid visualized 85.3% of the SNs. In 8/10 patients, fluorescein imaging allowed bright and accurate identification of lymphatic ducts, although higher background staining and tracer washout were observed. The main limitation is the small patient population. Our findings indicate that a lymphangiographic tracer can provide additional information during SN biopsy based on ICG-99mTc-nanocolloid. The study suggests that multispectral fluorescence image-guided surgery is clinically feasible. We evaluated the concept of surgical fluorescence guidance using differently colored dyes that visualize complementary features. In the future this concept may provide better guidance towards diseased tissue while sparing healthy tissue, and could thus improve functional and oncologic outcomes. Copyright © 2016 European Association of Urology. Published by Elsevier B.V. All rights reserved.
GPU Lossless Hyperspectral Data Compression System
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh I.; Keymeulen, Didier; Kiely, Aaron B.; Klimesh, Matthew A.
2014-01-01
Hyperspectral imaging systems onboard aircraft or spacecraft can acquire large amounts of data, putting a strain on limited downlink and storage resources. Onboard data compression can mitigate this problem but may require a system capable of a high throughput. In order to achieve a high throughput with a software compressor, a graphics processing unit (GPU) implementation of a compressor was developed targeting the current state-of-the-art GPUs from NVIDIA(R). The implementation is based on the fast lossless (FL) compression algorithm reported in "Fast Lossless Compression of Multispectral-Image Data" (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which operates on hyperspectral data and achieves excellent compression performance while having low complexity. The FL compressor uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. The new Consultative Committee for Space Data Systems (CCSDS) Standard for Lossless Multispectral & Hyperspectral image compression (CCSDS 123) is based on the FL compressor. The software makes use of the highly-parallel processing capability of GPUs to achieve a throughput at least six times higher than that of a software implementation running on a single-core CPU. This implementation provides a practical real-time solution for compression of data from airborne hyperspectral instruments.
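The snippet below is not the FL or CCSDS-123 algorithm; it is only a toy previous-band predictor with residual mapping, meant to illustrate the general predict-then-encode-residuals idea behind such lossless compressors.

```python
# Toy illustration of predictive lossless coding: predict each band from the
# previous band and map signed residuals to non-negative integers for entropy
# coding. Not the FL/CCSDS-123 adaptive-filter predictor.
import numpy as np

def map_residual(delta):
    """Map signed residuals to non-negative integers (zig-zag style)."""
    return np.where(delta >= 0, 2 * delta, -2 * delta - 1)

cube = np.random.randint(0, 4096, size=(8, 64, 64), dtype=np.int32)  # bands, rows, cols

residuals = cube.copy()
residuals[1:] = cube[1:] - cube[:-1]      # predict each band from the previous band
mapped = map_residual(residuals)          # small values dominate, so entropy coding helps
```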
Using spectral imaging for the analysis of abnormalities for colorectal cancer: When is it helpful?
Awan, Ruqayya; Al-Maadeed, Somaya; Al-Saady, Rafif
2018-01-01
The spectral imaging technique has been shown to provide more discriminative information than the RGB images and has been proposed for a range of problems. There are many studies demonstrating its potential for the analysis of histopathology images for abnormality detection but there have been discrepancies among previous studies as well. Many multispectral based methods have been proposed for histopathology images but the significance of the use of whole multispectral cube versus a subset of bands or a single band is still arguable. We performed comprehensive analysis using individual bands and different subsets of bands to determine the effectiveness of spectral information for determining the anomaly in colorectal images. Our multispectral colorectal dataset consists of four classes, each represented by infra-red spectrum bands in addition to the visual spectrum bands. We performed our analysis of spectral imaging by stratifying the abnormalities using both spatial and spectral information. For our experiments, we used a combination of texture descriptors with an ensemble classification approach that performed best on our dataset. We applied our method to another dataset and got comparable results with those obtained using the state-of-the-art method and convolutional neural network based method. Moreover, we explored the relationship of the number of bands with the problem complexity and found that higher number of bands is required for a complex task to achieve improved performance. Our results demonstrate a synergy between infra-red and visual spectrum by improving the classification accuracy (by 6%) on incorporating the infra-red representation. We also highlight the importance of how the dataset should be divided into training and testing set for evaluating the histopathology image-based approaches, which has not been considered in previous studies on multispectral histopathology images.
Multispectral Filter Arrays: Recent Advances and Practical Implementation
Lapray, Pierre-Jean; Wang, Xingbo; Thomas, Jean-Baptiste; Gouton, Pierre
2014-01-01
Thanks to some technical progress in interference-filter design based on different technologies, we can finally successfully implement the concept of multispectral filter array-based sensors. This article provides the relevant state-of-the-art for multispectral imaging systems and presents the characteristics of the elements of our multispectral sensor as a case study. The spectral characteristics are based on two different spatial arrangements that distribute eight different bandpass filters in the visible and near-infrared area of the spectrum. We demonstrate that the system is viable and evaluate its performance through sensor spectral simulation. PMID:25407904
NASA Astrophysics Data System (ADS)
Liu, Xin; Samil Yetik, Imam
2012-04-01
Use of multispectral magnetic resonance imaging has received great interest for prostate cancer localization in research and clinical studies. Manual extraction of prostate tumors from multispectral magnetic resonance imaging is inefficient and subjective, while automated segmentation is objective and reproducible. For supervised, automated segmentation approaches, learning is essential to obtain information from the training dataset. However, in this procedure, all patients are assumed to have similar properties for the tumor and normal tissues, and the segmentation performance suffers since the variations across patients are ignored. To overcome this difficulty, we propose a new iterative normalization method based on relative intensity values of tumor and normal tissues to normalize multispectral magnetic resonance images and improve segmentation performance. The idea of relative intensity mimics the manual segmentation performed by human readers, who compare the contrast between regions without knowing the actual intensity values. We compare the segmentation performance of the proposed method with that of z-score normalization followed by support vector machine, local active contours, and fuzzy Markov random field. Our experimental results demonstrate that our method outperforms the three other state-of-the-art algorithms, and was found to have specificity of 0.73, sensitivity of 0.69, and accuracy of 0.79, significantly better than alternative methods.
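For reference, the baseline the abstract compares against, per-image z-score normalization of each channel, might look like the following sketch; the proposed iterative relative-intensity scheme itself is not reproduced here, and the channel layout is an assumption.

```python
# Per-patient, per-channel z-score normalization of a multispectral MR volume;
# the input is a random placeholder, not clinical data.
import numpy as np

def zscore_normalize(volume):
    """volume: (channels, H, W) multispectral MR image for one patient."""
    mean = volume.mean(axis=(1, 2), keepdims=True)
    std = volume.std(axis=(1, 2), keepdims=True)
    return (volume - mean) / (std + 1e-8)

patient = np.random.rand(4, 256, 256)     # placeholder multispectral MR channels
normalized = zscore_normalize(patient)
```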
Duval, Joseph S.
1985-01-01
Because the display and interpretation of satellite and aircraft remote-sensing data make extensive use of color film products, accurate reproduction of the color images is important. To achieve accurate color reproduction, the exposure and chemical processing of the film must be monitored and controlled. By using a combination of sensitometry, densitometry, and transfer functions that control film response curves, all of the different steps in the making of film images can be monitored and controlled. Because a sensitometer produces a calibrated exposure, the resulting step wedge can be used to monitor the chemical processing of the film. Step wedges put on film by image recording machines provide a means of monitoring the film exposure and color balance of the machines.
Sharpening advanced land imager multispectral data using a sensor model
Lemeshewsky, G.P.; ,
2005-01-01
The Advanced Land Imager (ALI) instrument on NASA's Earth Observing One (EO-1) satellite provides for nine spectral bands at 30m ground sample distance (GSD) and a 10m GSD panchromatic band. This report describes an image sharpening technique where the higher spatial resolution information of the panchromatic band is used to increase the spatial resolution of ALI multispectral (MS) data. To preserve the spectral characteristics, this technique combines reported deconvolution deblurring methods for the MS data with highpass filter-based fusion methods for the Pan data. The deblurring process uses the point spread function (PSF) model of the ALI sensor. Information includes calculation of the PSF from pre-launch calibration data. Performance was evaluated using simulated ALI MS data generated by degrading the spatial resolution of high resolution IKONOS satellite MS data. A quantitative measure of performance was the error between sharpened MS data and high resolution reference. This report also compares performance with that of a reported method that includes PSF information. Preliminary results indicate improved sharpening with the method reported here.
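A hedged sketch of the highpass-filter fusion component: the panchromatic band's high-frequency detail is added to each upsampled multispectral band. The kernel size and resampling choices are assumptions, and the ALI PSF deblurring step described in the report is omitted.

```python
# Highpass-filter pansharpening sketch on placeholder data (not ALI imagery):
# extract pan detail with a box filter and inject it into upsampled MS bands.
import numpy as np
from scipy.ndimage import uniform_filter, zoom

pan = np.random.rand(300, 300)                       # stand-in for a 10 m pan band
ms = np.random.rand(4, 100, 100)                     # stand-in for 30 m MS bands

pan_high = pan - uniform_filter(pan, size=3)         # highpass detail (assumed 3x3 box)
ms_up = np.stack([zoom(b, 3, order=1) for b in ms])  # resample MS to the pan grid
sharpened = ms_up + pan_high[None, :, :]             # inject pan detail into each band
```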
Comparison of existing digital image analysis systems for the analysis of Thematic Mapper data
NASA Technical Reports Server (NTRS)
Likens, W. C.; Wrigley, R. C.
1984-01-01
Most existing image analysis systems were designed with the Landsat Multi-Spectral Scanner in mind, leaving open the question of whether or not these systems could adequately process Thematic Mapper data. In this report, both hardware and software systems have been evaluated for compatibility with TM data. Lack of spectral analysis capability was not found to be a problem, though techniques for spatial filtering and texture varied. Computer processing speed and data storage of currently existing mini-computer based systems may be less than adequate. Upgrading to more powerful hardware may be required for many TM applications.
NASA Technical Reports Server (NTRS)
Anuta, P. E.
1975-01-01
Least squares approximation techniques were developed for use in computer aided correction of spatial image distortions for registration of multitemporal remote sensor imagery. Polynomials were first used to define image distortion over the entire two dimensional image space. Spline functions were then investigated to determine if the combination of lower order polynomials could approximate a higher order distortion with less computational difficulty. Algorithms for generating approximating functions were developed and applied to the description of image distortion in aircraft multispectral scanner imagery. Other applications of the techniques were suggested for earth resources data processing areas other than geometric distortion representation.
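A toy example of the polynomial approach: fit a second-order polynomial mapping from distorted to reference control-point coordinates by least squares; the control points below are synthetic.

```python
# Least-squares fit of a second-order polynomial distortion model from synthetic
# control points; coefficients can then be used to resample the distorted image.
import numpy as np

rng = np.random.default_rng(2)
x, y = rng.uniform(0, 1000, 50), rng.uniform(0, 1000, 50)        # distorted-image points
x_ref = 1.01 * x + 0.002 * y + 5 + rng.normal(0, 0.3, 50)        # matching reference coords
y_ref = 0.003 * x + 0.99 * y - 3 + rng.normal(0, 0.3, 50)

# Design matrix of second-order polynomial terms
A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
coef_x, *_ = np.linalg.lstsq(A, x_ref, rcond=None)
coef_y, *_ = np.linalg.lstsq(A, y_ref, rcond=None)
```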
NASA Technical Reports Server (NTRS)
1993-01-01
Summit Envirosolutions of Minneapolis, Minnesota, used remote sensing images as a source for groundwater resource management. Summit is a full-service environmental consulting service specializing in hydrogeologic, environmental management, engineering and remediation services. CRSP collected, processed and analyzed multispectral/thermal imagery and aerial photography to compare remote sensing and Geographic Information System approaches to more traditional methods of environmental impact assessments and monitoring.
Development of a portable multispectral thermal infrared camera
NASA Technical Reports Server (NTRS)
Osterwisch, Frederick G.
1991-01-01
The purpose of this research and development effort was to design and build a prototype instrument designated the 'Thermal Infrared Multispectral Camera' (TIRC). The Phase 2 effort was a continuation of the Phase 1 feasibility study and preliminary design for such an instrument. The completed instrument designated AA465 has application in the field of geologic remote sensing and exploration. The AA465 Thermal Infrared Camera (TIRC) System is a field-portable multispectral thermal infrared camera operating over the 8.0 - 13.0 micron wavelength range. Its primary function is to acquire two-dimensional thermal infrared images of user-selected scenes. Thermal infrared energy emitted by the scene is collected, dispersed into ten 0.5 micron wide channels, and then measured and recorded by the AA465 System. This multispectral information is presented in real time on a color display to be used by the operator to identify spectral and spatial variations in the scene's emissivity and/or irradiance. This fundamental instrument capability has a wide variety of commercial and research applications. While ideally suited for two-man operation in the field, the AA465 System can be transported and operated effectively by a single user. Functionally, the instrument operates as if it were a single exposure camera. System measurement sensitivity requirements dictate relatively long (several minutes) instrument exposure times. As such, the instrument is not suited for recording time-variant information. The AA465 was fabricated, assembled, tested, and documented during this Phase 2 work period. The detailed design and fabrication of the instrument was performed during the period of June 1989 to July 1990. The software development effort and instrument integration/test extended from July 1990 to February 1991. Software development included an operator interface/menu structure, instrument internal control functions, DSP image processing code, and a display algorithm coding program. The instrument was delivered to NASA in March 1991. Potential commercial and research uses for this instrument are in its primary application as a field geologist's exploration tool. Other applications have been suggested but not investigated in depth. These include process-control measurements in commercial materials processing and quality-control functions that require information on surface heterogeneity.
Lu, Dengsheng; Batistella, Mateus; de Miranda, Evaristo E; Moran, Emilio
2008-01-01
Complex forest structure and abundant tree species in the moist tropical regions often cause difficulties in classifying vegetation classes with remotely sensed data. This paper explores improvement in vegetation classification accuracies through a comparative study of different image combinations based on the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data, as well as the combination of spectral signatures and textures. A maximum likelihood classifier was used to classify the different image combinations into thematic maps. This research indicated that data fusion based on HRG multispectral and panchromatic data slightly improved vegetation classification accuracies: a 3.1 to 4.6 percent increase in the kappa coefficient compared with the classification results based on original HRG or TM multispectral images. A combination of HRG spectral signatures and two textural images improved the kappa coefficient by 6.3 percent compared with pure HRG multispectral images. The textural images based on entropy or second-moment texture measures with a window size of 9 pixels × 9 pixels played an important role in improving vegetation classification accuracy. Overall, optical remote-sensing data are still insufficient for accurate vegetation classifications in the Amazon basin.
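A hedged sketch of the two texture measures named above, computed from a grey-level co-occurrence matrix over a single 9 × 9 window; the quantization level, pixel offset, and angle are assumptions, not the study's settings.

```python
# Second-moment (ASM) and entropy textures from a grey-level co-occurrence
# matrix of one quantized 9x9 window; the window contents are random placeholders.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_textures(window, levels=32):
    """Return (second_moment, entropy) for one 9x9 window scaled to [0, 1]."""
    q = (window * (levels - 1)).astype(np.uint8)          # quantize to `levels` grey levels
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    second_moment = graycoprops(glcm, "ASM")[0, 0]
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return second_moment, entropy

window = np.random.rand(9, 9)                             # one 9x9 window of a band
print(glcm_textures(window))
```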
NASA Astrophysics Data System (ADS)
Salsone, Silvia; Taylor, Andrew; Gomez, Juliana; Pretty, Iain; Ellwood, Roger; Dickinson, Mark; Lombardo, Giuseppe; Zakian, Christian
2012-07-01
Near infrared (NIR) multispectral imaging is a novel noninvasive technique that maps and quantifies dental caries. The technique has the ability to reduce the confounding effect of stain present on teeth. The aim of this study was to develop and validate a quantitative NIR multispectral imaging system for caries detection and assessment against a histological reference standard. The proposed technique is based on spectral imaging at specific wavelengths in the range from 1000 to 1700 nm. A total of 112 extracted teeth (molars and premolars) were used and images of occlusal surfaces at different wavelengths were acquired. Three spectral reflectance images were combined to generate a quantitative lesion map of the tooth. The maximum value of the map at the corresponding histological section was used as the NIR caries score. The NIR caries score significantly correlated with the histological reference standard (Spearman's Coefficient=0.774, p<0.01). Caries detection sensitivities and specificities of 72% and 91% for sound areas, 36% and 79% for lesions on the enamel, and 82% and 69% for lesions in dentin were found. These results suggest that NIR spectral imaging is a novel and promising method for the detection, quantification, and mapping of dental caries.
Harada, Ryuichi; Okamura, Nobuyuki; Furumoto, Shozo; Yoshikawa, Takeo; Arai, Hiroyuki; Yanai, Kazuhiko; Kudo, Yukitsuka
2014-02-01
Selective visualization of amyloid-β and tau protein deposits will help to understand the pathophysiology of Alzheimer's disease (AD). Here, we introduce a novel fluorescent probe that can distinguish between these two deposits by a multispectral fluorescence imaging technique. Fluorescence spectral analysis was performed using AD brain sections stained with novel fluorescence compounds. A competitive binding assay using [³H]PiB was performed to evaluate the binding affinity of BF-188 for synthetic amyloid-β (Aβ) and tau fibrils. In AD brain sections, BF-188 clearly stained Aβ and tau protein deposits with different fluorescence spectra. In vitro binding assays indicated that BF-188 bound to both amyloid-β and tau fibrils with high affinity (Ki < 10 nM). In addition, BF-188 showed excellent blood-brain barrier permeability in mice. Multispectral imaging with BF-188 could potentially be used for selective in vivo imaging of tau deposits as well as amyloid-β in the brain.
Vanegas, Fernando; Bratanov, Dmitry; Powell, Kevin; Weiss, John; Gonzalez, Felipe
2018-01-17
Recent advances in remotely sensed imagery and geospatial image processing using unmanned aerial vehicles (UAVs) have enabled the rapid and ongoing development of monitoring tools for crop management and the detection/surveillance of insect pests. This paper describes a UAV remote sensing-based methodology to increase the efficiency of existing surveillance practices (human inspectors and insect traps) for detecting pest infestations (e.g., grape phylloxera in vineyards). The methodology uses a UAV integrated with advanced digital hyperspectral, multispectral, and RGB sensors. We implemented the methodology for the development of a predictive model for phylloxera detection. In this method, we explore the combination of airborne RGB, multispectral, and hyperspectral imagery with ground-based data at two separate time periods and under different levels of phylloxera infestation. We describe the technology used (the sensors, the UAV, and the flight operations), the processing workflow of the datasets from each imagery type, and the methods for combining multiple airborne and ground-based datasets. Finally, we present relevant results of correlation between the different processed datasets. The objective of this research is to develop a novel methodology for collecting, processing, analysing, and integrating multispectral, hyperspectral, ground, and spatial data to remotely sense different variables in different applications, such as, in this case, plant pest surveillance. The development of such a methodology would provide researchers, agronomists, and UAV practitioners with reliable data collection protocols and methods to achieve faster processing techniques and integrate multiple sources of data in diverse remote sensing applications.
Digital techniques for processing Landsat imagery
NASA Technical Reports Server (NTRS)
Green, W. B.
1978-01-01
An overview of the basic techniques used to process Landsat images with a digital computer, and the VICAR image processing software developed at JPL and available to users through the NASA-sponsored COSMIC computer program distribution center, is presented. Examples are given of subjective processing performed to improve the information display for the human observer, such as contrast enhancement, pseudocolor display and band ratioing, and of quantitative processing using mathematical models, such as classification based on multispectral signatures of different areas within a given scene and geometric transformation of imagery into standard mapping projections. Examples are illustrated by Landsat scenes of the Andes mountains and Altyn-Tagh fault zone in China before and after contrast enhancement, and classification of land use in Portland, Oregon. The VICAR image processing software system, which consists of a language translator that simplifies execution of image processing programs and provides a general-purpose format so that imagery from a variety of sources can be processed by the same basic set of general applications programs, is described.
The Athena Pancam and Color Microscopic Imager (CMI)
NASA Technical Reports Server (NTRS)
Bell, J. F., III; Herkenhoff, K. E.; Schwochert, M.; Morris, R. V.; Sullivan, R.
2000-01-01
The Athena Mars rover payload includes two primary science-grade imagers: Pancam, a multispectral, stereo, panoramic camera system, and the Color Microscopic Imager (CMI), a multispectral and variable depth-of-field microscope. Both of these instruments will help to achieve the primary Athena science goals by providing information on the geology, mineralogy, and climate history of the landing site. In addition, Pancam provides important support for rover navigation and target selection for Athena in situ investigations. Here we describe the science goals, instrument designs, and instrument performance of the Pancam and CMI investigations.
Component pattern analysis of chemicals using multispectral THz imaging system
NASA Astrophysics Data System (ADS)
Kawase, Kodo; Ogawa, Yuichi; Watanabe, Yuki
2004-04-01
We have developed a novel basic technology for terahertz (THz) imaging, which allows detection and identification of chemicals by introducing the component spatial pattern analysis. The spatial distributions of the chemicals were obtained from terahertz multispectral transillumination images, using absorption spectra previously measured with a widely tunable THz-wave parametric oscillator. Further we have applied this technique to the detection and identification of illicit drugs concealed in envelopes. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.
Highly Portable Airborne Multispectral Imaging System
NASA Technical Reports Server (NTRS)
Lehnemann, Robert; Mcnamee, Todd
2001-01-01
A portable instrumentation system is described that includes an airborne and a ground-based subsystem. It can acquire multispectral image data over swaths of terrain ranging in width from about 1.5 to 1 km. The system was developed especially for use in coastal environments and is well suited for performing remote sensing and general environmental monitoring. It includes a small, unpiloted, remotely controlled airplane that carries a forward-looking camera for navigation, three downward-looking monochrome video cameras for imaging terrain in three spectral bands, a video transmitter, and a Global Positioning System (GPS) receiver.
NASA Technical Reports Server (NTRS)
Manohar, Mareboyana; Tilton, James C.
1994-01-01
A progressive vector quantization (VQ) compression approach is discussed which decomposes image data into a number of levels using full search VQ. The final level is losslessly compressed, enabling lossless reconstruction. The computational difficulties are addressed by implementation on a massively parallel SIMD machine. We demonstrate progressive VQ on multispectral imagery obtained from the Advanced Very High Resolution Radiometer instrument and other Earth observation image data, and investigate the trade-offs in selecting the number of decomposition levels and codebook training method.
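As an illustration of the multi-level decomposition described above, the following Python sketch performs full-search VQ at each level on block vectors and passes the residual to the next level; the block size, number of levels, codebook size, and function names are illustrative assumptions, not the parameters or implementation used in the study.

    # Minimal sketch of progressive (multi-level) vector quantization for a
    # multispectral image stored as a numpy array of shape (rows, cols, bands).
    import numpy as np
    from scipy.cluster.vq import kmeans, vq

    def progressive_vq(image, levels=3, codebook_size=256, block=2):
        rows, cols, bands = image.shape
        # Split the image into non-overlapping block x block x bands vectors.
        vectors = (image[:rows - rows % block, :cols - cols % block]
                   .reshape(rows // block, block, cols // block, block, bands)
                   .swapaxes(1, 2)
                   .reshape(-1, block * block * bands)
                   .astype(np.float64))
        residual = vectors.copy()
        stages = []
        for _ in range(levels):
            codebook, _ = kmeans(residual, codebook_size)   # full-search VQ codebook training
            codes, _ = vq(residual, codebook)               # nearest-codeword indices
            stages.append((codebook, codes))
            residual = residual - codebook[codes]           # pass the residual to the next level
        # The residual left after the last level would be losslessly (entropy) coded,
        # so the original vectors can be reconstructed exactly from the stages.
        return stages, residual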
Analysis of signal-dependent sensor noise on JPEG 2000-compressed Sentinel-2 multi-spectral images
NASA Astrophysics Data System (ADS)
Uss, M.; Vozel, B.; Lukin, V.; Chehdi, K.
2017-10-01
The processing chain of Sentinel-2 MultiSpectral Instrument (MSI) data involves filtering and compression stages that modify MSI sensor noise. As a result, noise in Sentinel-2 Level-1C data distributed to users becomes processed. We demonstrate that the processed noise variance model is bivariate: noise variance depends on image intensity (caused by the signal dependency of photon-counting detectors) and on the signal-to-noise ratio (SNR; caused by filtering/compression). To provide information on processed noise parameters, which is missing from Sentinel-2 metadata, we propose to use a blind noise parameter estimation approach. Existing methods are restricted to a univariate noise model. Therefore, we propose an extension of the existing vcNI+fBm blind noise parameter estimation method to a multivariate noise model, mvcNI+fBm, and apply it to each band of Sentinel-2A data. The obtained results clearly demonstrate that noise variance is affected by filtering/compression for SNR less than about 15; processed noise variance is reduced by a factor of 2 to 5 in homogeneous areas compared with the noise variance at high SNR values. Estimates of the noise variance model parameters are provided for each Sentinel-2A band. The Sentinel-2A MSI Level-1C noise models obtained in this paper could be useful for end users and researchers working in a variety of remote sensing applications.
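A minimal sketch of what such a bivariate processed-noise model could look like is given below: a signal-dependent (photon-counting) variance term attenuated where the local SNR is low, because filtering/compression suppresses noise there. The functional form and constants are assumptions chosen for illustration and are not the parametric model fitted by the mvcNI+fBm estimator.

    import numpy as np

    def processed_noise_variance(intensity, snr, a=0.5, b=2.0, snr_knee=15.0, max_reduction=4.0):
        # Signal-dependent raw sensor noise variance (illustrative affine form).
        raw_var = a * intensity + b
        # Smoothly reduce the variance (up to 'max_reduction' times) below the SNR knee,
        # mimicking the effect of on-ground filtering/compression at low SNR.
        reduction = 1.0 + (max_reduction - 1.0) / (1.0 + np.exp((snr - snr_knee) / 2.0))
        return raw_var / reduction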
Enhancing Spatial Resolution of Remotely Sensed Imagery Using Deep Learning
NASA Astrophysics Data System (ADS)
Beck, J. M.; Bridges, S.; Collins, C.; Rushing, J.; Graves, S. J.
2017-12-01
Researchers at the Information Technology and Systems Center at the University of Alabama in Huntsville are using Deep Learning with Convolutional Neural Networks (CNNs) to develop a method for enhancing the spatial resolutions of moderate resolution (10-60m) multispectral satellite imagery. This enhancement will effectively match the resolutions of imagery from multiple sensors to provide increased global temporal-spatial coverage for a variety of Earth science products. Our research is centered on using Deep Learning for automatically generating transformations for increasing the spatial resolution of remotely sensed images with different spatial, spectral, and temporal resolutions. One of the most important steps in using images from multiple sensors is to transform the different image layers into the same spatial resolution, preferably the highest spatial resolution, without compromising the spectral information. Recent advances in Deep Learning have shown that CNNs can be used to effectively and efficiently upscale or enhance the spatial resolution of multispectral images with the use of an auxiliary data source such as a high spatial resolution panchromatic image. In contrast, we are using both the spatial and spectral details inherent in low spatial resolution multispectral images for image enhancement without the use of a panchromatic image. This presentation will discuss how this technology will benefit many Earth Science applications that use remotely sensed images with moderate spatial resolutions.
Djiongo Kenfack, Cedrigue Boris; Monga, Olivier; Mpong, Serge Moto; Ndoundam, René
2018-03-01
Within the last decade, several approaches using quaternion numbers to handle and model multiband images in a holistic manner were introduced. The quaternion Fourier transform can be efficiently used to model texture in multidimensional data such as color images. For practical application, multispectral satellite data appear as a primary source for measuring past trends and monitoring changes in forest carbon stocks. In this work, we propose a texture-color descriptor based on the quaternion Fourier transform to extract relevant information from multiband satellite images. We propose a new multiband image texture extraction model, called FOTO++, in order to address biomass estimation issues. The first stage consists in removing noise from the multispectral data while preserving the edges of canopies. Afterward, color texture descriptors are extracted thanks to a discrete form of the quaternion Fourier transform, and finally the support vector regression method is used to deduce biomass estimation from texture indices. Our texture features are modeled using a vector composed of the radial spectrum coming from the amplitude of the quaternion Fourier transform. We conduct several experiments in order to study the sensitivity of our model to acquisition parameters. We also assess its performance both on synthetic images and on real multispectral images of Cameroonian forest. The results show that our model is more robust to acquisition parameters than the classical Fourier Texture Ordination model (FOTO). Our scheme is also more accurate for aboveground biomass estimation. We stress that a similar methodology could be implemented using quaternion wavelets. These results highlight the potential of the quaternion-based approach to study multispectral satellite images.
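The following sketch computes a radial amplitude spectrum of the kind used here as a texture feature vector. For simplicity it uses the ordinary 2-D FFT of a single band as a stand-in for the quaternion Fourier transform of the full multiband image, so it should be read as an illustration of the descriptor, not as FOTO++ itself; the bin count is an arbitrary choice.

    import numpy as np

    def radial_spectrum(band, n_bins=32):
        amp = np.abs(np.fft.fftshift(np.fft.fft2(band)))       # amplitude spectrum
        rows, cols = band.shape
        y, x = np.indices((rows, cols))
        r = np.hypot(y - rows / 2.0, x - cols / 2.0)            # radial frequency of each pixel
        bins = np.linspace(0, r.max(), n_bins + 1)
        idx = np.digitize(r.ravel(), bins) - 1
        total = np.bincount(idx, weights=amp.ravel(), minlength=n_bins + 1)
        counts = np.bincount(idx, minlength=n_bins + 1)
        return total[:n_bins] / np.maximum(counts[:n_bins], 1)  # mean amplitude per frequency ring

Descriptors of this kind could then be stacked per image window and passed to a support vector regression (for example sklearn.svm.SVR) to estimate biomass, which is an assumption about tooling rather than the authors' code.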
Multispectral laser-induced fluorescence imaging system for large biological samples
NASA Astrophysics Data System (ADS)
Kim, Moon S.; Lefcourt, Alan M.; Chen, Yud-Ren
2003-07-01
A laser-induced fluorescence imaging system developed to capture multispectral fluorescence emission images simultaneously from a relatively large target object is described. With an expanded, 355-nm Nd:YAG laser as the excitation source, the system captures fluorescence emission images in the blue, green, red, and far-red regions of the spectrum centered at 450, 550, 678, and 730 nm, respectively, from a 30-cm-diameter target area in ambient light. Images of apples and of pork meat artificially contaminated with diluted animal feces have demonstrated the versatility of fluorescence imaging techniques for potential applications in food safety inspection. Regions of contamination, including sites that were not readily visible to the human eye, could easily be identified from the images.
Detection of Subpixel Submerged Mine-Like Targets in WorldView-2 Multispectral Imagery
2012-09-01
and painted black, blue and green. The dot seen in the image by target three was a zodiac and it was only in the 21 March data set. 47 WorldView-2...region of interest (ROI) was created using band one of the covariance PCA image. The targets, buoy, and the zodiac were all considered targets. N...targets. Pixels that represented the zodiac were not segregated and found all over the visualization. For this reason, this process was followed by
NASA Astrophysics Data System (ADS)
Zhu, Y.; Jin, S.; Tian, Y.; Wang, M.
2017-09-01
To meet the requirement for high-accuracy, high-speed processing of wide-swath, high-resolution optical satellite imagery in emergency situations, in both ground and on-board processing systems, this paper proposes a ROI-oriented sensor correction algorithm based on a virtual steady reimaging model. First, the imaging time and spatial window of the ROI are determined by a dynamic search method. Then, the dynamic ROI sensor correction model based on the virtual steady reimaging model is constructed. Finally, the corrected image corresponding to the ROI is generated from the coordinate mapping relationship established by the dynamic sensor correction model for the corrected image and the rigorous imaging model for the original image. Two experiments show that image registration between panchromatic and multispectral images is achieved well and that image distortion caused by satellite jitter is also corrected efficiently.
2011-03-01
electromagnetic spectrum. With the availability of multispectral and hyperspectral systems, both spatial and spectral information for a scene are...an image. The boundary conditions for NDGRI and NDSI are set from diffuse spectral reflectance values for the range of skin types determined in [28...wearing no standard uniform and blending into the urban population. To assist with enemy detection and tracking, imaging systems that acquire spectral
Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu
2018-03-02
Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.
Simple models for complex natural surfaces - A strategy for the hyperspectral era of remote sensing
NASA Technical Reports Server (NTRS)
Adams, John B.; Smith, Milton O.; Gillespie, Alan R.
1989-01-01
A two-step strategy for analyzing multispectral images is described. In the first step, the analyst decomposes the signal from each pixel (as expressed by the radiance or reflectance values in each channel) into components that are contributed by spectrally distinct materials on the ground, and those that are due to atmospheric effects, instrumental effects, and other factors, such as illumination. In the second step, the isolated signals from the materials on the ground are selectively edited, and recombined to form various unit maps that are interpretable within the framework of field units. The approach has been tested on multispectral images of a variety of natural land surfaces ranging from hyperarid deserts to tropical rain forests. Data were analyzed from Landsat MSS (multispectral scanner) and TM (Thematic Mapper), the airborne NS001 TM simulator, Viking Lander and Orbiter, AIS, and AVIRIS (Airborne Visible/Infrared Imaging Spectrometer).
Multichannel imager for littoral zone characterization
NASA Astrophysics Data System (ADS)
Podobna, Yuliya; Schoonmaker, Jon; Dirbas, Joe; Sofianos, James; Boucher, Cynthia; Gilbert, Gary
2010-04-01
This paper describes an approach to utilize a multi-channel, multi-spectral electro-optic (EO) system for littoral zone characterization. Advanced Coherent Technologies, LLC (ACT) presents their EO sensor systems for the surf zone environmental assessment and potential surf zone target detection. Specifically, an approach is presented to determine a Surf Zone Index (SZI) from the multi-spectral EO sensor system. SZI provides a single quantitative value of the surf zone conditions delivering an immediate understanding of the area and an assessment as to how well an airborne optical system might perform in a mine countermeasures (MCM) operation. Utilizing consecutive frames of SZI images, ACT is able to measure variability over time. A surf zone nomograph, which incorporates targets, sensor, and environmental data, including the SZI to determine the environmental impact on system performance, is reviewed in this work. ACT's electro-optical multi-channel, multi-spectral imaging system and test results are presented and discussed.
Analysis of Multispectral Galileo SSI Images of the Conamara Chaos Region, Europa
NASA Technical Reports Server (NTRS)
Spaun, N. A.; Phillips, C. B.
2003-01-01
Multispectral imaging of Europa's surface by Galileo's Solid State Imaging (SSI) camera has revealed two major surface color units, which appear as white and red-brown regions in enhanced color images of the surface (see figure). The Galileo Near-Infrared Mapping Spectrometer (NIMS) experiment suggests that the whitish material is icy, almost pure water ice, while the spectral signatures of the reddish regions are dominated by a non-ice material. Two endmember models have been proposed for the composition of the non-ice material: magnesium sulfate hydrates [1] and sulfuric acid and its byproducts [2]. There is also debate concerning whether the origin of this non-ice material is exogenic or endogenic [3]. Goals: The key questions this work addresses are: 1) Is the non-ice material exogenic or endogenic in origin? 2) Once emplaced, is this non-ice material primarily modified by exogenic or endogenic processes? 3) Is the non-ice material within ridges, bands, chaos, and lenticulae the same non-ice material across all such geological features? 4) Does the distribution of the non-ice material provide any evidence for or against any of the various models for feature formation? 5) To what extent do the effects of scattered light in SSI images change the spectral signatures of geological features?
An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images
Sammouda, Rachid; Al-Salman, Abdul Malik; Alsanad, Ahmed
2018-01-01
Among several palmprint feature extraction methods, the HOG-based method is attractive and performs well against changes in illumination and shadowing of palmprint images. However, it still lacks the robustness to extract the palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF, that combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images by David Zhang's method to segment only the region of interest. Next, we extracted palmprint features based on the hybrid HOG-SGF feature extraction method. Then, an optimized auto-encoder (AE) was utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) was applied for the classification task. In the evaluation phase of the proposed approach, a number of experiments were conducted on three publicly available palmprint databases, namely MS-PolyU of multispectral palmprint images and CASIA and Tongji of contactless palmprint images. Experimentally, the results reveal that the proposed approach outperforms the existing state-of-the-art approaches even when a small number of training samples are used. PMID:29762519
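As an illustration of the HOG component of the pipeline, the sketch below extracts HOG features from a segmented palmprint ROI with scikit-image; the steerable Gaussian filtering, auto-encoder reduction, and RELM classifier from the paper are not reproduced, and the cell and block sizes are illustrative choices rather than the authors' settings.

    from skimage.feature import hog

    def palmprint_hog_features(roi_image):
        # roi_image: 2-D grayscale array containing the segmented palmprint region of interest.
        return hog(roi_image, orientations=9, pixels_per_cell=(16, 16),
                   cells_per_block=(2, 2), block_norm='L2-Hys')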
Removing sun glint from optical remote sensing images of shallow rivers
Overstreet, Brandon T.; Legleiter, Carl
2017-01-01
Sun glint is the specular reflection of light from the water surface, which often causes unusually bright pixel values that can dominate fluvial remote sensing imagery and obscure the water-leaving radiance signal of interest for mapping bathymetry, bottom type, or water column optical characteristics. Although sun glint is ubiquitous in fluvial remote sensing imagery, river-specific methods for removing sun glint are not yet available. We show that existing sun glint-removal methods developed for multispectral images of marine shallow water environments over-correct shallow portions of fluvial remote sensing imagery, resulting in regions of unreliable data along channel margins. We build on existing marine glint-removal methods to develop a river-specific technique that removes sun glint from shallow areas of the channel without overcorrection by accounting for non-negligible water-leaving near-infrared radiance. This new sun glint-removal method can improve the accuracy of spectrally-based depth retrieval in cases where sun glint dominates the at-sensor radiance. For an example image of the gravel-bed Snake River, Wyoming, USA, observed-vs.-predicted R² values for depth retrieval improved from 0.66 to 0.76 following sun glint removal. The methodology presented here is straightforward to implement and could be incorporated into image processing workflows for multispectral images that include a near-infrared band.
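For context, the marine-style deglinting that the authors adapt can be sketched as a per-band linear regression against the NIR band over optically deep pixels. The river-specific correction in the paper additionally accounts for non-negligible water-leaving NIR radiance in shallow water, which this simplified sketch omits; the function name and the deep-water mask are assumptions for illustration.

    import numpy as np

    def deglint_band(vis, nir, deep_mask):
        # Slope of visible-band vs. NIR radiance over deep-water pixels,
        # where the water-leaving NIR signal is effectively zero.
        slope = np.polyfit(nir[deep_mask], vis[deep_mask], 1)[0]
        nir_floor = nir[deep_mask].min()            # ambient NIR level in the absence of glint
        return vis - slope * (nir - nir_floor)      # glint-corrected visible band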
CART V: recent advancements in computer-aided camouflage assessment
NASA Astrophysics Data System (ADS)
Müller, Thomas; Müller, Markus
2011-05-01
In order to facilitate systematic, computer aided improvements of camouflage and concealment assessment methods, the software system CART (Camouflage Assessment in Real-Time) was built up for the camouflage assessment of objects in multispectral image sequences (see contributions to SPIE 2007-2010 [1], [2], [3], [4]). It comprises a semi-automatic marking of target objects (ground truth generation) including their propagation over the image sequence and the evaluation via user-defined feature extractors as well as methods to assess the object's movement conspicuity. In this fifth part in an annual series at the SPIE conference in Orlando, this paper presents the enhancements over the recent year and addresses the camouflage assessment of static and moving objects in multispectral image data that can show noise or image artefacts. The presented methods fathom the correlations between image processing and camouflage assessment. A novel algorithm is presented based on template matching to assess the structural inconspicuity of an object objectively and quantitatively. The results can easily be combined with an MTI (moving target indication) based movement conspicuity assessment function in order to explore the influence of object movement to a camouflage effect in different environments. As the results show, the presented methods contribute to a significant benefit in the field of camouflage assessment.
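A rough sketch of how template matching can yield an objective structural-conspicuity score is given below (an illustration, not the CART implementation; the function, bounding-box convention, and search margin are assumptions): the marked target patch is correlated against its surroundings, and a high best-match score indicates that the target's structure blends into the background.

    import numpy as np
    from skimage.feature import match_template

    def structural_inconspicuity(image, target_bbox, search_margin=50):
        # target_bbox: (row_start, row_end, col_start, col_end) of the marked target.
        r0, r1, c0, c1 = target_bbox
        template = image[r0:r1, c0:c1]
        rs, re = max(r0 - search_margin, 0), min(r1 + search_margin, image.shape[0])
        cs, ce = max(c0 - search_margin, 0), min(c1 + search_margin, image.shape[1])
        scores = match_template(image[rs:re, cs:ce], template)   # normalized cross-correlation map
        # Suppress the trivial self-match at the target's own position.
        sr, sc = r0 - rs, c0 - cs
        scores[max(sr - 2, 0):sr + 3, max(sc - 2, 0):sc + 3] = -1.0
        return float(scores.max())   # closer to 1 means the target is structurally less conspicuous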
NASA Astrophysics Data System (ADS)
Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz
2017-04-01
This paper proposes three multisharpening approaches to enhance the spatial resolution of urban hyperspectral remote sensing images. These approaches, related to linear-quadratic spectral unmixing techniques, use a linear-quadratic nonnegative matrix factorization (NMF) multiplicative algorithm. These methods begin by unmixing the observable high-spectral/low-spatial resolution hyperspectral and high-spatial/low-spectral resolution multispectral images. The obtained high-spectral/high-spatial resolution features are then recombined, according to the linear-quadratic mixing model, to obtain an unobservable multisharpened high-spectral/high-spatial resolution hyperspectral image. In the first designed approach, hyperspectral and multispectral variables are independently optimized, once they have been coherently initialized. These variables are alternately updated in the second designed approach. In the third approach, the considered hyperspectral and multispectral variables are jointly updated. Experiments, using synthetic and real data, are conducted to assess the efficiency, in spatial and spectral domains, of the designed approaches and of linear NMF-based approaches from the literature. Experimental results show that the designed methods globally yield very satisfactory spectral and spatial fidelities for the multisharpened hyperspectral data. They also prove that these methods significantly outperform the used literature approaches.
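A heavily simplified, purely linear sketch of the unmix-then-recombine idea is shown below: endmember spectra are estimated from the hyperspectral image by NMF, high-resolution abundances are estimated from the multispectral image, and the two are recombined. The paper's methods use a linear-quadratic mixing model with coupled multiplicative NMF updates, which this sketch does not implement; all shapes, the spectral response matrix, and the parameters are illustrative assumptions.

    import numpy as np
    from sklearn.decomposition import NMF

    def linear_multisharpen(hs_pixels, ms_pixels, srf, n_endmembers=8):
        # hs_pixels: (n_lowres_pixels, n_hs_bands) nonnegative hyperspectral reflectances.
        # ms_pixels: (n_highres_pixels, n_ms_bands) nonnegative multispectral reflectances.
        # srf: (n_hs_bands, n_ms_bands) spectral response mapping HS spectra to MS bands.
        model = NMF(n_components=n_endmembers, init='nndsvda', max_iter=500)
        model.fit(hs_pixels)
        endmembers = model.components_                     # (n_endmembers, n_hs_bands)
        endmembers_ms = endmembers @ srf                   # endmembers seen through the MS bands
        # Least-squares abundances at high resolution, crudely clipped to be nonnegative.
        abundances, _, _, _ = np.linalg.lstsq(endmembers_ms.T, ms_pixels.T, rcond=None)
        abundances = np.clip(abundances.T, 0, None)        # (n_highres_pixels, n_endmembers)
        return abundances @ endmembers                     # sharpened hyperspectral pixels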
NASA Astrophysics Data System (ADS)
Jun, Won; Kim, Moon S.; Chao, Kaunglin; Lefcourt, Alan M.; Roberts, Michael S.; McNaughton, James L.
2009-05-01
We used a portable hyperspectral fluorescence imaging system to evaluate biofilm formations on four types of food processing surface materials including stainless steel, polypropylene used for cutting boards, and household counter top materials such as formica and granite. The objective of this investigation was to determine a minimal number of spectral bands suitable to differentiate microbial biofilm formation from the four background materials typically used during food processing. Ultimately, the resultant spectral information will be used in development of handheld portable imaging devices that can be used as visual aid tools for sanitation and safety inspection (microbial contamination) of the food processing surfaces. Pathogenic E. coli O157:H7 and Salmonella cells were grown in low strength M9 minimal medium on various surfaces at 22 +/- 2 °C for 2 days for biofilm formation. Biofilm autofluorescence under UV excitation (320 to 400 nm) obtained by hyperspectral fluorescence imaging system showed broad emissions in the blue-green regions of the spectrum with emission maxima at approximately 480 nm for both E. coli O157:H7 and Salmonella biofilms. Fluorescence images at 480 nm revealed that for background materials with near-uniform fluorescence responses such as stainless steel and formica cutting board, regardless of the background intensity, biofilm formation can be distinguished. This suggested that a broad spectral band in the blue-green regions can be used for handheld imaging devices for sanitation inspection of stainless, cutting board, and formica surfaces. The non-uniform fluorescence responses of granite make distinctions between biofilm and background difficult. To further investigate potential detection of the biofilm formations on granite surfaces with multispectral approaches, principal component analysis (PCA) was performed using the hyperspectral fluorescence image data. The resultant PCA score images revealed distinct contrast between biofilms and granite surfaces. This investigation demonstrated that biofilm formations on food processing surfaces, even for background materials with heterogeneous fluorescence responses, can be detected. Furthermore, a multispectral approach in developing handheld inspection devices may be needed to inspect surface materials that exhibit non-uniform fluorescence.
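The PCA step described for the heterogeneous granite surfaces can be sketched in a few lines: the hyperspectral fluorescence cube is reshaped to a pixel-by-band matrix, decomposed, and the leading score images are reshaped back for visual contrast between biofilm and background. The band and component counts here are illustrative, not those of the study.

    import numpy as np
    from sklearn.decomposition import PCA

    def pca_score_images(cube, n_components=3):
        # cube: (rows, cols, bands) hyperspectral fluorescence image.
        rows, cols, bands = cube.shape
        pixels = cube.reshape(-1, bands).astype(np.float64)
        scores = PCA(n_components=n_components).fit_transform(pixels)
        return scores.reshape(rows, cols, n_components)    # leading PCA score images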
NASA Astrophysics Data System (ADS)
Kingfield, D.; de Beurs, K.
2014-12-01
It has been demonstrated through various case studies that multispectral satellite imagery can be utilized in the identification of damage caused by a tornado through the change detection process. This process involves the difference in returned surface reflectance between two images and is often summarized through a variety of ratio-based vegetation indices (VIs). Land cover type plays a large contributing role in the change detection process as the reflectance properties of vegetation can vary based on several factors (e.g. species, greenness, density). Consequently, this provides the possibility for a variable magnitude of loss, making certain land cover regimes less reliable in the damage identification process. Furthermore, the tradeoff between sensor resolution and orbital return period may also play a role in the ability to detect catastrophic loss. Moderate resolution imagery (e.g. Moderate Resolution Imaging Spectroradiometer (MODIS)) provides relatively coarse surface detail with a higher update rate, which could hinder the identification of small regions that underwent a dynamic change. Alternatively, imagery with higher spatial resolution (e.g. Landsat) has a longer temporal return period between successive images, which could result in natural recovery underestimating the absolute magnitude of damage incurred. This study evaluates the role of land cover type and sensor resolution for four high-end (EF3+) tornado events occurring in four different land cover groups (agriculture, forest, grassland, urban) in the spring season. The closest successive clear images from both Landsat 5 and MODIS are quality controlled for each case. Transects of surface reflectance across a homogenous land cover type both inside and outside the damage swath are extracted. These metrics are synthesized through the calculation of six different VIs to rank the calculated change metrics by land cover type, sensor resolution and VI.
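A minimal sketch of the VI-based change detection underlying this kind of analysis is shown below, using NDVI as the index and a simple difference threshold; the band names and the damage threshold are assumptions for illustration, not values from the study.

    import numpy as np

    def ndvi(red, nir):
        return (nir - red) / np.maximum(nir + red, 1e-6)

    def damage_mask(red_pre, nir_pre, red_post, nir_post, threshold=-0.2):
        # Difference the pre- and post-event vegetation index; large negative
        # changes flag candidate tornado damage pixels.
        delta = ndvi(red_post, nir_post) - ndvi(red_pre, nir_pre)
        return delta < threshold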
DOE Office of Scientific and Technical Information (OSTI.GOV)
Virnstein, R.; Tepera, M.; Beazley, L.
1997-06-01
A pilot study is very briefly summarized in the article. The study tested the potential of multi-spectral digital imagery for discrimination of seagrass densities and species, algae, and bottom types. Imagery was obtained with the Compact Airborne Spectral Imager (casi), and two flight lines were flown in hyperspectral mode. The photogrammetric method used allowed interpretation of the highest quality product, eliminating limitations caused by outdated or poor quality base maps and the errors associated with transfer of polygons. Initial image analysis indicates that the multi-spectral imagery has several advantages, including sophisticated spectral signature recognition and classification, ease of geo-referencing, and rapid mosaicking.
Post-launch validation of Multispectral Thermal Imager (MTI) data and algorithms
NASA Astrophysics Data System (ADS)
Garrett, Alfred J.; Kurzeja, Robert J.; O'Steen, B. L.; Parker, Matthew J.; Pendergast, Malcolm M.; Villa-Aleman, Eliel
1999-10-01
Sandia National Laboratories (SNL), Los Alamos National Laboratory (LANL) and the Savannah River Technology Center (SRTC) have developed a diverse group of algorithms for processing and analyzing the data that will be collected by the Multispectral Thermal Imager (MTI) after launch late in 1999. Each of these algorithms must be verified by comparison to independent surface and atmospheric measurements. SRTC has selected 13 sites in the continental U.S. for ground truth data collections. These sites include a high altitude cold water target (Crater Lake), cooling lakes and towers in the warm, humid southeastern U.S., Department of Energy (DOE) climate research sites, the NASA Stennis satellite Validation and Verification (V&V) target array, waste sites at the Savannah River Site, mining sites in the Four Corners area and dry lake beds in Nevada. SRTC has established mutually beneficial relationships with the organizations that manage these sites to make use of their operating and research data and to install additional instrumentation needed for MTI algorithm V&V.
Snapshot spectral and polarimetric imaging; target identification with multispectral video
NASA Astrophysics Data System (ADS)
Bartlett, Brent D.; Rodriguez, Mikel D.
2013-05-01
As the number of pixels continues to grow in consumer and scientific imaging devices, it has become feasible to collect the incident light field. In this paper, an imaging device developed around light field imaging is used to collect multispectral and polarimetric imagery in a snapshot fashion. The sensor is described and a video data set is shown, highlighting the advantage of snapshot spectral imaging. Several novel computer vision approaches are applied to the video cubes to perform scene characterization and target identification. It is shown how the addition of spectral and polarimetric data to the video stream allows for multi-target identification and tracking not possible with traditional RGB video collection.
NASA Astrophysics Data System (ADS)
Thompson, Nicholas Allan
2013-06-01
With recent developments in multispectral detector technology, the interest in common aperture, common focal plane multispectral imaging systems is increasing. Such systems are particularly desirable for military applications, where increased levels of target discrimination and identification are required in cost-effective, rugged, lightweight systems. During the optical design of dual waveband or multispectral systems, the options for material selection are limited. This selection becomes even more restrictive for military applications, where material resilience, thermal properties, and color correction must be considered. We discuss the design challenges that lightweight multispectral common aperture systems present, along with some potential design solutions. Consideration is given to material selection for optimum color correction, as well as material resilience and thermal correction. This discussion is supported using design examples currently in development at Qioptiq.
Auroral Observations from the POLAR Ultraviolet Imager (UVI)
NASA Technical Reports Server (NTRS)
Germany, G. A.; Spann, J. F.; Parks, G. K.; Brittnacher, M. J.; Elsen, R.; Chen, L.; Lummerzheim, D.; Rees, M. H.
1998-01-01
Because of the importance of the auroral regions as a remote diagnostic of near-Earth plasma processes and magnetospheric structure, space-based instrumentation for imaging the auroral regions has been designed and operated for the last twenty-five years. The latest generation of imagers, including those flown on the POLAR satellite, extends this quest for multispectral resolution by providing three separate imagers for visible, ultraviolet, and X-ray images of the aurora. The ability to observe extended regions allows imaging missions to significantly extend the observations available from in situ or ground-based instrumentation. The complementary nature of imaging and other observations is illustrated below using results from the GGS Ultraviolet Imager (UVI). Details of the requisite energy and intensity analysis are also presented.
Comparing Individual Tree Segmentation Based on High Resolution Multispectral Image and Lidar Data
NASA Astrophysics Data System (ADS)
Xiao, P.; Kelly, M.; Guo, Q.
2014-12-01
This study compares the use of high-resolution multispectral WorldView images and high density Lidar data for individual tree segmentation. The application focuses on coniferous and deciduous forests in the Sierra Nevada Mountains. The tree objects are obtained in two ways: a hybrid region-merging segmentation method with multispectral images, and a top-down and bottom-up region-growing method with Lidar data. The hybrid region-merging method is used to segment individual tree from multispectral images. It integrates the advantages of global-oriented and local-oriented region-merging strategies into a unified framework. The globally most-similar pair of regions is used to determine the starting point of a growing region. The merging iterations are constrained within the local vicinity, thus the segmentation is accelerated and can reflect the local context. The top-down region-growing method is adopted in coniferous forest to delineate individual tree from Lidar data. It exploits the spacing between the tops of trees to identify and group points into a single tree based on simple rules of proximity and likely tree shape. The bottom-up region-growing method based on the intensity and 3D structure of Lidar data is applied in deciduous forest. It segments tree trunks based on the intensity and topological relationships of the points, and then allocate other points to exact tree crowns according to distance. The accuracies for each method are evaluated with field survey data in several test sites, covering dense and sparse canopy. Three types of segmentation results are produced: true positive represents a correctly segmented individual tree, false negative represents a tree that is not detected and assigned to a nearby tree, and false positive represents that a point or pixel cluster is segmented as a tree that does not in fact exist. They respectively represent correct-, under-, and over-segmentation. Three types of index are compared for segmenting individual tree from multispectral image and Lidar data: recall, precision and F-score. This work explores the tradeoff between the expensive Lidar data and inexpensive multispectral image. The conclusion will guide the optimal data selection in different density canopy areas for individual tree segmentation, and contribute to the field of forest remote sensing.
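The accuracy bookkeeping described above reduces to standard detection metrics; a minimal sketch, where matched field trees are true positives, unmatched field trees are false negatives (under-segmentation), and unmatched detected crowns are false positives (over-segmentation):

    def segmentation_scores(true_positive, false_negative, false_positive):
        recall = true_positive / (true_positive + false_negative)
        precision = true_positive / (true_positive + false_positive)
        f_score = 2 * precision * recall / (precision + recall)
        return recall, precision, f_score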
An earth imaging camera simulation using wide-scale construction of reflectance surfaces
NASA Astrophysics Data System (ADS)
Murthy, Kiran; Chau, Alexandra H.; Amin, Minesh B.; Robinson, M. Dirk
2013-10-01
Developing and testing advanced ground-based image processing systems for earth-observing remote sensing applications presents a unique challenge that requires advanced imagery simulation capabilities. This paper presents an earth-imaging multispectral framing camera simulation system called PayloadSim (PaySim) capable of generating terabytes of photorealistic simulated imagery. PaySim leverages previous work in 3-D scene-based image simulation, adding a novel method for automatically and efficiently constructing 3-D reflectance scenes by draping tiled orthorectified imagery over a geo-registered Digital Elevation Map (DEM). PaySim's modeling chain is presented in detail, with emphasis given to the techniques used to achieve computational efficiency. These techniques as well as cluster deployment of the simulator have enabled tuning and robust testing of image processing algorithms, and production of realistic sample data for customer-driven image product development. Examples of simulated imagery of Skybox's first imaging satellite are shown.
Lossless compression algorithm for multispectral imagers
NASA Astrophysics Data System (ADS)
Gladkova, Irina; Grossberg, Michael; Gottipati, Srikanth
2008-08-01
Multispectral imaging is becoming an increasingly important tool for monitoring the earth and its environment from spaceborne and airborne platforms. Multispectral imaging data consists of visible and IR measurements from a scene across space and spectrum. Growing data rates resulting from faster scanning and finer spatial and spectral resolution make compression an increasingly critical tool to reduce data volume for transmission and archiving. Research for NOAA NESDIS has been directed at determining, for the characteristics of satellite atmospheric Earth science imager sensor data, what lossless compression ratio can be obtained, as well as the types of mathematics and approaches that can come close to this data's entropy level. Conventional lossless methods do not achieve the theoretical limits for lossless compression on imager data as estimated from the Shannon entropy. In a previous paper, the authors introduced a lossless compression algorithm developed for MODIS as a proxy for future NOAA-NESDIS satellite-based Earth science multispectral imagers such as GOES-R. The algorithm is based on capturing spectral correlations using spectral prediction, and spatial correlations with a linear transform encoder. In decompression, the algorithm uses a statistically computed look-up table to iteratively predict each channel from a channel decompressed in the previous iteration. In this paper we present a new approach which fundamentally differs from our prior work. In this new approach, instead of having a single predictor for each pair of bands, we introduce a piecewise spatially varying predictor which significantly improves the compression results. Our new algorithm also now optimizes the sequence of channels we use for prediction. Our results are evaluated by comparison with a state-of-the-art wavelet-based image compression scheme, JPEG 2000. We present results on the 14-channel subset of the MODIS imager, which serves as a proxy for the GOES-R imager. We also show results of the algorithm for NOAA AVHRR data and for data from SEVIRI. The algorithm is designed to be adapted to a wide range of multispectral imagers and should facilitate global distribution of data. This compression research is managed by Roger Heymann, PE, of OSD NOAA NESDIS Engineering, in collaboration with the NOAA NESDIS STAR Research Office through Mitch Goldberg, Tim Schmit, and Walter Wolf.
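The spectral-prediction idea can be illustrated with a simple per-pair linear predictor: one band is predicted from a previously (de)compressed band, and only the residual needs to be entropy coded. The actual algorithm uses a statistically derived look-up table, a piecewise spatially varying predictor, and an optimized channel ordering, none of which are reproduced in this sketch; the function name is illustrative.

    import numpy as np

    def predict_band(reference, target):
        # Fit a per-pair linear predictor of 'target' from 'reference' (both 2-D integer arrays).
        a, b = np.polyfit(reference.ravel().astype(np.float64),
                          target.ravel().astype(np.float64), 1)
        prediction = np.rint(a * reference + b).astype(np.int64)
        residual = target.astype(np.int64) - prediction
        # The residual plus the two coefficients are enough to reconstruct 'target'
        # exactly from 'reference', so only the (typically low-entropy) residual
        # needs to be entropy coded.
        return residual, (a, b)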
DOE Office of Scientific and Technical Information (OSTI.GOV)
Getman, Daniel J
2008-01-01
Many attempts to observe changes in terrestrial systems over time would be significantly enhanced if it were possible to improve the accuracy of classifications of low-resolution historic satellite data. In an effort to examine improving the accuracy of historic satellite image classification by combining satellite and air photo data, two experiments were undertaken in which low-resolution multispectral data and high-resolution panchromatic data were combined and then classified using the ECHO spectral-spatial image classification algorithm and the Maximum Likelihood technique. The multispectral data consisted of 6 multispectral channels (30-meter pixel resolution) from Landsat 7. These data were augmented with panchromatic data (15m pixel resolution) from Landsat 7 in the first experiment, and with a mosaic of digital aerial photography (1m pixel resolution) in the second. The addition of the Landsat 7 panchromatic data provided a significant improvement in the accuracy of classifications made using the ECHO algorithm. Although the inclusion of aerial photography provided an improvement in accuracy, this improvement was only statistically significant at a 40-60% level. These results suggest that once error levels associated with combining aerial photography and multispectral satellite data are reduced, this approach has the potential to significantly enhance the precision and accuracy of classifications made using historic remotely sensed data, as a way to extend the time range of efforts to track temporal changes in terrestrial systems.
Multi sensor satellite imagers for commercial remote sensing
NASA Astrophysics Data System (ADS)
Cronje, T.; Burger, H.; Du Plessis, J.; Du Toit, J. F.; Marais, L.; Strumpfer, F.
2005-10-01
This paper will discuss and compare recent refractive and catadioptric imager designs developed and manufactured at SunSpace for Multi Sensor Satellite Imagers with Panchromatic, Multi-spectral, Area and Hyperspectral sensors on a single Focal Plane Array (FPA). These satellite optical systems were designed with applications such as monitoring food supplies, crop yields, and disasters in mind. The aim of these imagers is to achieve medium to high resolution (2.5m to 15m) spatial sampling, wide swaths (up to 45km) and noise equivalent reflectance (NER) values of less than 0.5%. State-of-the-art FPA designs are discussed and address the choice of detectors to achieve these performances. Special attention is given to thermal robustness and compactness, the use of folding prisms to place multiple detectors in a large FPA, and a specially developed process to customize the spectral selection while minimizing mass, power and cost. A refractive imager with up to 6 spectral bands (6.25m GSD) and a catadioptric imager with panchromatic (2.7m GSD), multi-spectral (6 bands, 4.6m GSD), and hyperspectral (400nm to 2.35μm, 200 bands, 15m GSD) sensors on the same FPA will be discussed. Both of these imagers are also equipped with real-time video viewfinding capabilities. The electronic units can be subdivided into the Front-End Electronics and Control Electronics with analogue and digital signal processing. A dedicated Analogue Front-End is used for Correlated Double Sampling (CDS), black level correction, variable gain, up to 12-bit digitizing, and a high-speed LVDS data link to a mass memory unit.