Nanohole-array-based device for 2D snapshot multispectral imaging
Najiminaini, Mohamadreza; Vasefi, Fartash; Kaminska, Bozena; Carson, Jeffrey J. L.
2013-01-01
We present a two-dimensional (2D) snapshot multispectral imager that utilizes the optical transmission characteristics of nanohole arrays (NHAs) in a gold film to resolve a mixture of input colors into multiple spectral bands. The multispectral device consists of blocks of NHAs, wherein each NHA has a unique periodicity that results in transmission resonances and minima in the visible and near-infrared regions. The multispectral device was illuminated over a wide spectral range, and the transmission was spectrally unmixed using a least-squares estimation algorithm. An NHA-based multispectral imaging system was built and tested in both reflection and transmission modes. The NHA-based multispectral imager was capable of extracting 2D multispectral images representative of four independent bands within the spectral range of 662 nm to 832 nm for a variety of targets. The multispectral device can potentially be integrated into a variety of imaging sensor systems. PMID:24005065
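The least-squares unmixing step described above can be illustrated with a minimal numpy sketch, assuming a calibrated response matrix whose rows describe how each NHA block transmits the four spectral bands; the matrix and measurement values below are placeholders, not the authors' calibration data.

```python
import numpy as np

# Hypothetical calibration: each row holds one NHA block's transmission of the
# four spectral bands (shape: n_blocks x n_bands).
A = np.array([[0.82, 0.11, 0.05, 0.02],
              [0.10, 0.78, 0.12, 0.04],
              [0.04, 0.15, 0.75, 0.09],
              [0.02, 0.05, 0.14, 0.80]])

# Measured transmission through each NHA block for an unknown mixed input.
t = np.array([0.40, 0.33, 0.21, 0.12])

# Least-squares estimate of the per-band intensities of the input light.
band_intensities, residuals, rank, _ = np.linalg.lstsq(A, t, rcond=None)
print(band_intensities)
```

Applied per pixel block, the same solve yields one intensity per spectral band and hence the 2D multispectral image.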
Smartphone-based multispectral imaging: system development and potential for mobile skin diagnosis
Kim, Sewoong; Cho, Dongrae; Kim, Jihun; Kim, Manjae; Youn, Sangyeon; Jang, Jae Eun; Je, Minkyu; Lee, Dong Hun; Lee, Boreom; Farkas, Daniel L.; Hwang, Jae Youn
2016-01-01
We investigate the potential of mobile smartphone-based multispectral imaging for the quantitative diagnosis and management of skin lesions. Recently, various mobile devices such as a smartphone have emerged as healthcare tools. They have been applied for the early diagnosis of nonmalignant and malignant skin diseases. Particularly, when they are combined with an advanced optical imaging technique such as multispectral imaging and analysis, it would be beneficial for the early diagnosis of such skin diseases and for further quantitative prognosis monitoring after treatment at home. Thus, we demonstrate here the development of a smartphone-based multispectral imaging system with high portability and its potential for mobile skin diagnosis. The results suggest that smartphone-based multispectral imaging and analysis has great potential as a healthcare tool for quantitative mobile skin diagnosis. PMID:28018743
An integrated compact airborne multispectral imaging system using embedded computer
NASA Astrophysics Data System (ADS)
Zhang, Yuedong; Wang, Li; Zhang, Xuguo
2015-08-01
An integrated compact airborne multispectral imaging system with an embedded-computer-based control system was developed for small-aircraft multispectral imaging applications. The multispectral imaging system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system) and an embedded computer. The embedded computer has excellent universality and expansibility, and has advantages in volume and weight for an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls the camera parameter settings, the filter wheel and the stabilized platform, acquires the image and POS data, and stores the images and data. The airborne multispectral imaging system can connect peripheral devices through the ports of the embedded computer, so system operation and management of the stored image data are easy. This airborne multispectral imaging system has the advantages of small volume, multiple functions, and good expansibility. The imaging experiment results show that this system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.
[Detecting fire smoke based on the multispectral image].
Wei, Ying-Zhuo; Zhang, Shao-Wu; Liu, Yan-Wei
2010-04-01
Smoke detection is very important for preventing forest fires in their early stages. Because traditional technologies based on video and image processing are easily affected by dynamic background information, they suffer from three limitations: low anti-interference ability, high false detection rates, and difficulty distinguishing fire smoke from water fog. A novel detection method for smoke based on multispectral images was proposed in the present paper. Using a multispectral digital imaging technique, multispectral image series of fire smoke and water fog were obtained in the band range of 400 to 720 nm, and the images were divided into bins. The Euclidean distance among the bins was taken as a measurement of the difference between spectrograms. After obtaining the spectral feature vectors of the dynamic region, the regions of fire smoke and water fog were extracted according to the spectrogram feature difference between target and background. Indoor and outdoor experiments show that the smoke detection method based on multispectral images can be applied to smoke detection and can effectively distinguish fire smoke from water fog. Combined with video image processing methods, the multispectral image detection method can also be applied to forest fire surveillance, reducing the false alarm rate in forest fire detection.
Multispectral Image Compression Based on DSC Combined with CCSDS-IDC
Li, Jin; Xing, Fei; Sun, Ting; You, Zheng
2014-01-01
A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually works on a satellite where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (such as the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with the Slepian-Wolf (SW) DSC strategy based on QC-LDPC codes in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches. PMID:25110741
NASA Astrophysics Data System (ADS)
Mansoor, Awais; Robinson, J. Paul; Rajwa, Bartek
2009-02-01
Modern automated microscopic imaging techniques such as high-content screening (HCS), high-throughput screening, 4D imaging, and multispectral imaging are capable of producing hundreds to thousands of images per experiment. For quick retrieval, fast transmission, and storage economy, these images should be saved in a compressed format. A considerable number of techniques based on interband and intraband redundancies of multispectral images have been proposed in the literature for the compression of multispectral and 3D temporal data. However, these works have been carried out mostly in the fields of remote sensing and video processing. Compression for multispectral optical microscopy imaging, with its own set of specialized requirements, has remained under-investigated. Digital-photography-oriented 2D compression techniques like JPEG (ISO/IEC IS 10918-1) and JPEG2000 (ISO/IEC 15444-1) are generally adopted for multispectral images; these optimize visual quality but do not necessarily preserve the integrity of scientific data, not to mention the suboptimal performance of 2D compression techniques in compressing 3D images. Herein we report our work on a new low bit-rate wavelet-based compression scheme for multispectral fluorescence biological imaging. The sparsity of significant coefficients in high-frequency subbands of multispectral microscopic images is found to be much greater than in natural images; therefore a quad-tree concept such as Said et al.'s SPIHT, along with the correlation of insignificant wavelet coefficients, has been proposed to further exploit redundancy in the high-frequency subbands. Our work proposes a 3D extension to SPIHT, incorporating a new hierarchical inter- and intra-spectral relationship amongst the coefficients of the 3D wavelet-decomposed image. The new relationship, apart from adopting the parent-child relationship of classical SPIHT, also brings forth a conditional "sibling" relationship by relating only the insignificant wavelet coefficients of subbands at the same level of decomposition. The insignificant quadtrees in different subbands of the high-frequency subband class are coded by a combined function to reduce redundancy. A number of experiments conducted on microscopic multispectral images have shown promising results for the proposed method over current state-of-the-art image-compression techniques.
Tissue classification for laparoscopic image understanding based on multispectral texture analysis
NASA Astrophysics Data System (ADS)
Zhang, Yan; Wirkert, Sebastian J.; Iszatt, Justin; Kenngott, Hannes; Wagner, Martin; Mayer, Benjamin; Stock, Christian; Clancy, Neil T.; Elson, Daniel S.; Maier-Hein, Lena
2016-03-01
Intra-operative tissue classification is one of the prerequisites for providing context-aware visualization in computer-assisted minimally invasive surgeries. As many anatomical structures are difficult to differentiate in conventional RGB medical images, we propose a classification method based on multispectral image patches. In a comprehensive ex vivo study we show (1) that multispectral imaging data is superior to RGB data for organ tissue classification when used in conjunction with widely applied feature descriptors and (2) that combining the tissue texture with the reflectance spectrum improves the classification performance. Multispectral tissue analysis could thus evolve as a key enabling technique in computer-assisted laparoscopy.
Fast Lossless Compression of Multispectral-Image Data
NASA Technical Reports Server (NTRS)
Klimesh, Matthew
2006-01-01
An algorithm that effects fast lossless compression of multispectral-image data is based on low-complexity, proven adaptive-filtering algorithms. This algorithm is intended for use in compressing multispectral-image data aboard spacecraft for transmission to Earth stations. Variants of this algorithm could be useful for lossless compression of three-dimensional medical imagery and, perhaps, for compressing image data in general.
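As a rough illustration of the predictive, low-complexity style of lossless coding described above (not the NASA algorithm itself), the following sketch predicts each band from the previous one and stores integer residuals; the residual cube reconstructs the original data exactly and is typically far easier to entropy code.

```python
import numpy as np

def band_residuals(cube):
    """Toy inter-band predictor: predict each band from the previous band
    and return integer residuals that could be entropy coded losslessly.
    cube: integer array of shape (bands, rows, cols)."""
    cube = cube.astype(np.int32)
    residuals = np.empty_like(cube)
    residuals[0] = cube[0]                   # first band stored as-is
    residuals[1:] = cube[1:] - cube[:-1]     # simple previous-band prediction
    return residuals

def reconstruct(residuals):
    """Exact inverse of band_residuals (lossless)."""
    return np.cumsum(residuals, axis=0)

cube = np.random.randint(0, 4096, size=(4, 64, 64))
assert np.array_equal(reconstruct(band_residuals(cube)), cube)
```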
Multispectral imaging for biometrics
NASA Astrophysics Data System (ADS)
Rowe, Robert K.; Corcoran, Stephen P.; Nixon, Kristin A.; Ostrom, Robert E.
2005-03-01
Automated identification systems based on fingerprint images are subject to two significant types of error: an incorrect decision about the identity of a person due to a poor quality fingerprint image and incorrectly accepting a fingerprint image generated from an artificial sample or altered finger. This paper discusses the use of multispectral sensing as a means to collect additional information about a finger that significantly augments the information collected using a conventional fingerprint imager based on total internal reflectance. In the context of this paper, "multispectral sensing" is used broadly to denote a collection of images taken under different polarization conditions and illumination configurations, as well as using multiple wavelengths. Background information is provided on conventional fingerprint imaging. A multispectral imager for fingerprint imaging is then described and a means to combine the two imaging systems into a single unit is discussed. Results from an early-stage prototype of such a system are shown.
NASA Astrophysics Data System (ADS)
Saager, Rolf B.; Baldado, Melissa L.; Rowland, Rebecca A.; Kelly, Kristen M.; Durkin, Anthony J.
2018-04-01
With recent proliferation in compact and/or low-cost clinical multispectral imaging approaches and commercially available components, questions remain whether they adequately capture the requisite spectral content of their applications. We present a method to emulate the spectral range and resolution of a variety of multispectral imagers, based on in-vivo data acquired from spatial frequency domain spectroscopy (SFDS). This approach simulates spectral responses over 400 to 1100 nm. Comparing emulated data with full SFDS spectra of in-vivo tissue affords the opportunity to evaluate whether the sparse spectral content of these imagers can (1) account for all sources of optical contrast present (completeness) and (2) robustly separate and quantify sources of optical contrast (crosstalk). We validate the approach over a range of tissue-simulating phantoms, comparing the SFDS-based emulated spectra against measurements from an independently characterized multispectral imager. Emulated results match the imager across all phantoms (<3 % absorption, <1 % reduced scattering). In-vivo test cases (burn wounds and photoaging) illustrate how SFDS can be used to evaluate different multispectral imagers. This approach provides an in-vivo measurement method to evaluate the performance of multispectral imagers specific to their targeted clinical applications and can assist in the design and optimization of new spectral imaging devices.
NASA Technical Reports Server (NTRS)
Blonski, Slawomir; Gasser, Gerald; Russell, Jeffrey; Ryan, Robert; Terrie, Greg; Zanoni, Vicki
2001-01-01
Multispectral data requirements for Earth science applications are not always rigorously studied before a new remote sensing system is designed. A study of the spatial resolution, spectral bandpasses, and radiometric sensitivity requirements of real-world applications would focus the design on providing maximum benefits to the end-user community. To support systematic studies of multispectral data requirements, the Applications Research Toolbox (ART) has been developed at NASA's Stennis Space Center. The ART software allows users to create and assess simulated datasets while varying a wide range of system parameters. The simulations are based on data acquired by existing multispectral and hyperspectral instruments. The produced datasets can be further evaluated for specific end-user applications. Spectral synthesis of multispectral images from hyperspectral data is a key part of the ART software. In this process, hyperspectral image cubes are transformed into multispectral imagery without changes in spatial sampling and resolution. The transformation algorithm takes into account the spectral responses of both the synthesized, broad, multispectral bands and the utilized, narrow, hyperspectral bands. To validate the spectral synthesis algorithm, simulated multispectral images are compared with images collected near-coincidentally by the Landsat 7 ETM+ and the EO-1 ALI instruments. Hyperspectral images acquired with the airborne AVIRIS instrument and with the Hyperion instrument onboard the EO-1 satellite were used as input data for the presented simulations.
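The spectral synthesis step can be sketched as a response-weighted average of the narrow hyperspectral bands; the simplified version below ignores the hyperspectral bands' own response shapes and uses an idealized rectangular broad-band response, so it is only a rough approximation of the ART algorithm.

```python
import numpy as np

def synthesize_band(hyper_cube, hyper_centers, msi_response):
    """Synthesize one broad multispectral band from a hyperspectral cube.
    hyper_cube:    (rows, cols, n_hyper) radiance cube
    hyper_centers: (n_hyper,) band-center wavelengths in nm
    msi_response:  callable giving the broad band's relative response at a wavelength
    A weighted average over the narrow bands approximates integration over
    the broad band's spectral response."""
    w = np.array([msi_response(lam) for lam in hyper_centers])
    w = w / w.sum()
    return np.tensordot(hyper_cube, w, axes=([2], [0]))

# Example: an idealized rectangular red band (630-690 nm) applied to a synthetic cube.
red_response = lambda lam: 1.0 if 630.0 <= lam <= 690.0 else 0.0
cube = np.random.rand(100, 100, 224)
centers = np.linspace(400.0, 2500.0, 224)
red_band = synthesize_band(cube, centers, red_response)
```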
3D tensor-based blind multispectral image decomposition for tumor demarcation
NASA Astrophysics Data System (ADS)
Kopriva, Ivica; Peršin, Antun
2010-03-01
Blind decomposition of a multi-spectral fluorescent image for tumor demarcation is formulated by exploiting the tensorial structure of the image. The first contribution of the paper is the identification of the matrix of spectral responses and the 3D tensor of spatial distributions of the materials present in the image from Tucker3 or PARAFAC models of the 3D image tensor. The second contribution of the paper is a clustering-based estimation of the number of materials present in the image as well as the matrix of their spectral profiles. The 3D tensor of the spatial distributions of the materials is recovered through 3-mode multiplication of the multi-spectral image tensor and the inverse of the matrix of spectral profiles. Tensor representation of the multi-spectral image preserves its local spatial structure, which is lost, due to the vectorization process, when matrix factorization-based decomposition methods (such as non-negative matrix factorization and independent component analysis) are used. The superior performance of the tensor-based image decomposition over matrix factorization-based decompositions is demonstrated on an experimental red-green-blue (RGB) image with known ground truth as well as on RGB fluorescent images of skin tumors (basal cell carcinoma).
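The recovery of the spatial-distribution tensor by 3-mode multiplication with the inverse of the spectral-profile matrix can be written compactly in numpy; the sketch below assumes the spectral-profile matrix has already been estimated (e.g. by the clustering step) and uses random placeholder data.

```python
import numpy as np

def unmix_tensor(X, S):
    """Recover the spatial-distribution tensor by a 3-mode product of the
    image tensor X (rows x cols x n_channels) with the pseudo-inverse of the
    spectral-profile matrix S (n_channels x n_materials)."""
    S_inv = np.linalg.pinv(S)                  # (n_materials x n_channels)
    # mode-3 product: contract the channel axis of X with S_inv
    return np.einsum('ijc,mc->ijm', X, S_inv)  # rows x cols x n_materials

X = np.random.rand(128, 128, 3)                # e.g. an RGB fluorescence image
S = np.random.rand(3, 3)                       # hypothetical spectral profiles
A = unmix_tensor(X, S)                         # per-material spatial distributions
```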
Li, Hanlun; Zhang, Aiwu; Hu, Shaoxing
2015-01-01
This paper describes an airborne high resolution four-camera multispectral system which mainly consists of four identical monochrome cameras equipped with four interchangeable bandpass filters. For this multispectral system, an automatic multispectral data composing method was proposed. The homography registration model was chosen, and the scale-invariant feature transform (SIFT) and random sample consensus (RANSAC) were used to generate matching points. For the difficult registration problem between visible band images and near-infrared band images in cases lacking manmade objects, we presented an effective method based on the structural characteristics of the system. Experiments show that our method can acquire high quality multispectral images and the band-to-band alignment error of the composed multiple spectral images is less than 2.5 pixels. PMID:26205264
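A minimal OpenCV sketch of the SIFT + RANSAC homography registration used for band-to-band alignment is shown below; the function and threshold choices (ratio test, reprojection error) are illustrative defaults, not the authors' exact settings.

```python
import cv2
import numpy as np

def register_band(ref_img, moving_img):
    """Estimate a homography between two band images with SIFT + RANSAC
    and warp the moving band onto the reference band's grid."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(ref_img, None)
    k2, d2 = sift.detectAndCompute(moving_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(d2, d1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = ref_img.shape[:2]
    return cv2.warpPerspective(moving_img, H, (w, h))
```

As the abstract notes, plain SIFT matching can fail between visible and near-infrared bands without manmade structure, which is why the paper adds a method exploiting the rig's structural characteristics.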
Bhateja, Vikrant; Moin, Aisha; Srivastava, Anuja; Bao, Le Nguyen; Lay-Ekuakille, Aimé; Le, Dac-Nhuong
2016-07-01
Computer based diagnosis of Alzheimer's disease can be performed by dint of the analysis of the functional and structural changes in the brain. Multispectral image fusion deliberates upon fusion of the complementary information while discarding the surplus information to achieve a solitary image which encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer's disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT followed by dimensionality reduction using modified Principal Component Analysis algorithm on the low frequency coefficients. Further, the high frequency coefficients are enhanced using non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: Phase congruency is applied to low frequency coefficients and a combination of directive contrast and normalized Shannon entropy is applied to high frequency coefficients. The superiority of the fusion response is depicted by the comparisons made with the other state-of-the-art fusion approaches (in terms of various fusion metrics).
Multispectral image fusion for target detection
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-09-01
Various different methods to perform multi-spectral image fusion have been suggested, mostly on the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in an experiment using MSSF against two established methods: averaging and principal component analysis (PCA), and against its two source bands, visible and infrared. The task that we studied was target detection in a cluttered environment. MSSF proved superior to the other fusion methods. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.
Intelligent multi-spectral IR image segmentation
NASA Astrophysics Data System (ADS)
Lu, Thomas; Luong, Andrew; Heim, Stephen; Patel, Maharshi; Chen, Kang; Chao, Tien-Hsin; Chow, Edward; Torres, Gilbert
2017-05-01
This article presents a neural network based multi-spectral image segmentation method. A neural network is trained on selected features of both the objects and the background in longwave (LW) infrared (IR) images. Multiple iterations of training are performed until the accuracy of the segmentation reaches a satisfactory level. The segmentation boundary of the LW image is used to segment the midwave (MW) and shortwave (SW) IR images. A second neural network detects local discontinuities and refines the accuracy of the local boundaries. This article compares the neural network based segmentation method to the Wavelet-threshold and Grab-Cut methods. Test results have shown increased accuracy and robustness of this segmentation scheme for multi-spectral IR images.
An improved feature extraction algorithm based on KAZE for multi-spectral image
NASA Astrophysics Data System (ADS)
Yang, Jianping; Li, Jun
2018-02-01
Multi-spectral images contain abundant spectral information, which is widely used in fields such as resource exploration, meteorological observation and modern military applications. Image preprocessing, such as image feature extraction and matching, is indispensable when dealing with multi-spectral remote sensing images. Although feature matching algorithms based on a linear scale space, such as SIFT and SURF, are robust, their local accuracy cannot be guaranteed. Therefore, this paper proposes an improved KAZE algorithm, which is based on a nonlinear scale space, to raise the number of features and to enhance the matching rate by using an adjusted-cosine vector. The experimental results show that the number of features and the matching rate of the improved KAZE are remarkably higher than those of the original KAZE algorithm.
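A baseline KAZE detection and matching sketch with OpenCV is given below for orientation; the adjusted-cosine similarity proposed in the paper is not part of OpenCV, so plain L2 brute-force matching stands in for it here.

```python
import cv2

def kaze_matches(img1, img2):
    """Baseline KAZE detection/description with brute-force matching.
    (The adjusted-cosine similarity proposed in the paper would replace the
    plain L2 distance used here.)"""
    kaze = cv2.KAZE_create()
    k1, d1 = kaze.detectAndCompute(img1, None)
    k2, d2 = kaze.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
    return k1, k2, matches
```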
Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking
Monno, Yusuke; Kiku, Daisuke; Tanaka, Masayuki; Okutomi, Masatoshi
2017-01-01
Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking. PMID:29194407
[A spatial adaptive algorithm for endmember extraction on multispectral remote sensing image].
Zhu, Chang-Ming; Luo, Jian-Cheng; Shen, Zhan-Feng; Li, Jun-Li; Hu, Xiao-Dong
2011-10-01
Due to the problem that the convex cone analysis (CCA) method can only extract a limited number of endmembers from multispectral imagery, this paper proposed a new endmember extraction method using spatially adaptive spectral feature analysis of multispectral remote sensing images based on spatial clustering and image slicing. Firstly, in order to remove spatial and spectral redundancies, the principal component analysis (PCA) algorithm was used to lower the dimensionality of the multispectral data. Secondly, the iterative self-organizing data analysis technique algorithm (ISODATA) was used for image clustering based on the similarity of the pixel spectra. Then, through clustering post-processing and the merging of small clusters, the whole image was divided into several blocks (tiles). Lastly, according to the landscape complexity of the image blocks and analysis of the scatter diagrams, the number of endmembers can be determined, and the hourglass algorithm is then used to extract the endmembers. An endmember extraction experiment on TM multispectral imagery showed that the method can extract endmember spectra from multispectral imagery effectively. Moreover, the method resolved the limitation on the number of endmembers and improved the accuracy of endmember extraction. The method provides a new way for multispectral image endmember extraction.
Hu, J H; Wang, Y; Cahill, P T
1997-01-01
This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further specified using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
Spatial arrangement of color filter array for multispectral image acquisition
NASA Astrophysics Data System (ADS)
Shrestha, Raju; Hardeberg, Jon Y.; Khan, Rahat
2011-03-01
In the past few years there has been a significant volume of research work carried out in the field of multispectral image acquisition. The focus of most of these works has been to facilitate a type of multispectral image acquisition system that usually requires multiple subsequent shots (e.g. systems based on filter wheels, liquid crystal tunable filters, or active lighting). Recently, an alternative approach for one-shot multispectral image acquisition has been proposed, based on an extension of the color filter array (CFA) standard to produce more than three channels. We can thus introduce the concept of the multispectral color filter array (MCFA). But this field has not been much explored; in particular, little focus has been given to developing systems aimed at the reconstruction of scene spectral reflectance. In this paper, we have explored how the spatial arrangement of a multispectral color filter array affects the acquisition accuracy with the construction of MCFAs of different sizes. We have simulated acquisitions of several spectral scenes using different numbers of filters/channels, and compared the results with those obtained by the conventional regular MCFA arrangement, evaluating the precision of the reconstructed scene spectral reflectance in terms of spectral RMS error and colorimetric ΔE*ab color differences. It has been found that the precision and the quality of the reconstructed images are significantly influenced by the spatial arrangement of the MCFA, and that the effect becomes more and more prominent with the increase in the number of channels. We believe that MCFA-based systems can be a viable alternative for affordable acquisition of multispectral color images, in particular for applications where spatial resolution can be traded off for spectral resolution. We have shown that the spatial arrangement of the array is an important design issue.
Multispectral image fusion for illumination-invariant palmprint recognition
Lu, Longbin; Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng
2017-01-01
Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is completed based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is to construct the fusion coefficients at the decomposition level, making the images be separated correctly in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfying recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For the testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% when the lighting condition is unsatisfied. PMID:28558064
Tissues segmentation based on multi spectral medical images
NASA Astrophysics Data System (ADS)
Li, Ya; Wang, Ying
2017-11-01
In multispectral medical images, each band image contains the most distinct features of certain tissues, according to the optical characteristics of different tissues in specific bands. In this paper, tissues were segmented using their spectral information in each band of the multispectral medical images. Four Local Binary Pattern descriptors were constructed to extract blood vessels based on the gray-level difference between the blood vessels and their neighbors. The segmented tissue in each band image was then merged into a single clear image.
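Local Binary Pattern texture features of the kind used for the vessel extraction can be computed with scikit-image; the sketch below builds a single uniform-LBP histogram per band image and is only a generic illustration, not the four descriptors constructed in the paper.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(band_image, radius=1, n_points=8):
    """Compute a local-binary-pattern histogram for one spectral band image,
    usable as a texture feature for separating vessels from background."""
    lbp = local_binary_pattern(band_image, n_points, radius, method='uniform')
    n_bins = n_points + 2                      # 'uniform' LBP has P + 2 codes
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist
```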
Acquisition performance of LAPAN-A3/IPB multispectral imager in real-time mode of operation
NASA Astrophysics Data System (ADS)
Hakim, P. R.; Permala, R.; Jayani, A. P. S.
2018-05-01
The LAPAN-A3/IPB satellite was launched in June 2016 and its multispectral imager has been producing Indonesian coverage images. In order to improve its support for remote sensing applications, the imager should produce images with high quality and quantity. To increase the quantity of LAPAN-A3/IPB multispectral images captured, image acquisition can be executed in real-time mode from the LAPAN ground station in Bogor when the satellite passes over western Indonesia. This research analyses the performance of LAPAN-A3/IPB multispectral imager acquisition in real-time mode, in terms of image quality and quantity, under the assumption of several on-board and ground segment limitations. Results show that with the real-time operation mode, the LAPAN-A3/IPB multispectral imager can produce twice as much image coverage as in recorded mode. However, the images produced in real-time mode will have slightly degraded quality due to the image compression process involved. Based on several analyses carried out in this research, it is recommended to use the real-time acquisition mode whenever possible, except in circumstances that strictly do not allow any quality degradation of the produced images.
Multispectral Palmprint Recognition Using a Quaternion Matrix
Xu, Xingpeng; Guo, Zhenhua; Song, Changjiang; Li, Yafeng
2012-01-01
Palmprints have been widely studied for biometric recognition for many years. Traditionally, a white light source is used for illumination. Recently, multispectral imaging has drawn attention because of its high recognition accuracy. Multispectral palmprint systems can provide more discriminant information under different illuminations in a short time, thus they can achieve better recognition accuracy. Previously, multispectral palmprint images were taken as a kind of multi-modal biometrics, and the fusion scheme on the image level or matching score level was used. However, some spectral information will be lost during image level or matching score level fusion. In this study, we propose a new method for multispectral images based on a quaternion model which could fully utilize the multispectral information. Firstly, multispectral palmprint images captured under red, green, blue and near-infrared (NIR) illuminations were represented by a quaternion matrix, then principal component analysis (PCA) and discrete wavelet transform (DWT) were applied respectively on the matrix to extract palmprint features. After that, Euclidean distance was used to measure the dissimilarity between different features. Finally, the sum of two distances and the nearest neighborhood classifier were employed for recognition decision. Experimental results showed that using the quaternion matrix can achieve a higher recognition rate. Given 3000 test samples from 500 palms, the recognition rate can be as high as 98.83%. PMID:22666049
Novel approach to multispectral image compression on the Internet
NASA Astrophysics Data System (ADS)
Zhu, Yanqiu; Jin, Jesse S.
2000-10-01
Still image coding techniques such as JPEG have always been applied to intra-plane images, and coding fidelity is always utilized in measuring the performance of intra-plane coding methods. In many imaging applications, it is increasingly necessary to deal with multi-spectral images, such as color images. In this paper, a novel approach to multi-spectral image compression is proposed that uses transformations among planes for further compression of the spectral planes. Moreover, a mechanism for introducing the human visual system into the transformation is provided to exploit psychovisual redundancy. The new technique for multi-spectral image compression, which is designed to be compatible with the JPEG standard, is demonstrated by extracting correlation among planes based on the human visual system. The scheme achieves a high degree of compactness in the data representation and compression.
Fusion of multi-spectral and panchromatic images based on 2D-PWVD and SSIM
NASA Astrophysics Data System (ADS)
Tan, Dongjie; Liu, Yi; Hou, Ruonan; Xue, Bindang
2016-03-01
A combined method using the 2D pseudo Wigner-Ville distribution (2D-PWVD) and the structural similarity (SSIM) index is proposed for the fusion of a low resolution multi-spectral (MS) image and a high resolution panchromatic (PAN) image. First, the intensity component of the multi-spectral image is extracted with the generalized IHS transform. Then, the spectrum diagrams of the intensity component of the multi-spectral image and of the panchromatic image are obtained with the 2D-PWVD. Different fusion rules are designed for different frequency information in the spectrum diagrams. The SSIM index is used to evaluate the high frequency information of the spectrum diagrams so as to assign the weights in the fusion processing adaptively. After the new spectrum diagram is obtained according to the fusion rule, the final fused image is produced by the inverse 2D-PWVD and the inverse GIHS transform. Experimental results show that the proposed method can obtain high quality fusion images.
A new multi-spectral feature level image fusion method for human interpretation
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-03-01
Various different methods to perform multi-spectral image fusion have been suggested, mostly on the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in a three-task experiment using MSSF against two established methods: averaging and principal component analysis (PCA), and against its two source bands, visible and infrared. The three tasks that we studied were: (1) simple target detection, (2) spatial orientation, and (3) camouflaged target detection. MSSF proved superior to the other fusion methods in all three tests; MSSF also outperformed the source images in the spatial orientation and camouflaged target detection tasks. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.
Fourier Spectral Filter Array for Optimal Multispectral Imaging.
Jia, Jie; Barnard, Kenneth J; Hirakawa, Keigo
2016-04-01
Limitations to existing multispectral imaging modalities include speed, cost, range, spatial resolution, and application-specific system designs that lack versatility of the hyperspectral imaging modalities. In this paper, we propose a novel general-purpose single-shot passive multispectral imaging modality. Central to this design is a new type of spectral filter array (SFA) based not on the notion of spatially multiplexing narrowband filters, but instead aimed at enabling single-shot Fourier transform spectroscopy. We refer to this new SFA pattern as Fourier SFA, and we prove that this design solves the problem of optimally sampling the hyperspectral image data.
Multi-spectral endogenous fluorescence imaging for bacterial differentiation
NASA Astrophysics Data System (ADS)
Chernomyrdin, Nikita V.; Babayants, Margarita V.; Korotkov, Oleg V.; Kudrin, Konstantin G.; Rimskaya, Elena N.; Shikunova, Irina A.; Kurlov, Vladimir N.; Cherkasova, Olga P.; Komandin, Gennady A.; Reshetov, Igor V.; Zaytsev, Kirill I.
2017-07-01
In this paper, the multi-spectral endogenous fluorescence imaging was implemented for bacterial differentiation. The fluorescence imaging was performed using a digital camera equipped with a set of visual bandpass filters. Narrowband 365 nm ultraviolet radiation passed through a beam homogenizer was used to excite the sample fluorescence. In order to increase a signal-to-noise ratio and suppress a non-fluorescence background in images, the intensity of the UV excitation was modulated using a mechanical chopper. The principal components were introduced for differentiating the samples of bacteria based on the multi-spectral endogenous fluorescence images.
Optimal wavelength band clustering for multispectral iris recognition.
Gong, Yazhuo; Zhang, David; Shi, Pengfei; Yan, Jingqi
2012-07-01
This work explores the possibility of clustering spectral wavelengths based on the maximum dissimilarity of iris textures. The eventual goal is to determine how many bands of spectral wavelengths will be enough for iris multispectral fusion and to find these bands that will provide higher performance of iris multispectral recognition. A multispectral acquisition system was first designed for imaging the iris at narrow spectral bands in the range of 420 to 940 nm. Next, a set of 60 human iris images that correspond to the right and left eyes of 30 different subjects were acquired for an analysis. Finally, we determined that 3 clusters were enough to represent the 10 feature bands of spectral wavelengths using the agglomerative clustering based on two-dimensional principal component analysis. The experimental results suggest (1) the number, center, and composition of clusters of spectral wavelengths and (2) the higher performance of iris multispectral recognition based on a three wavelengths-bands fusion.
Feng, Lei; Fang, Hui; Zhou, Wei-Jun; Huang, Min; He, Yong
2006-09-01
Site-specific variable nitrogen application is one of the major precision crop production management operations. Obtaining sufficient crop nitrogen stress information is essential for achieving effective site-specific nitrogen applications. The present paper describes the development of a multi-spectral nitrogen deficiency sensor, which uses three channels (green, red, near-infrared) of crop images to determine the nitrogen level of canola. This sensor assesses nitrogen stress by means of the estimated SPAD value of the canola based on canopy reflectance sensed using the three channels (green, red, near-infrared) of the multi-spectral camera. The core of this investigation is the calibration method relating the multi-spectral measurements to the nitrogen levels in crops measured using a SPAD 502 chlorophyll meter. Based on the results obtained from this study, it can be concluded that a multi-spectral CCD camera can provide sufficient information to perform reasonable SPAD value estimation during field operations.
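The calibration between the three camera channels and the SPAD readings can be sketched as an ordinary least-squares regression; the data below are synthetic placeholders used only to show the fitting and prediction steps, not the paper's calibration model.

```python
import numpy as np

# Hypothetical calibration data: per-plot mean reflectance in the green, red and
# near-infrared channels of the camera, with SPAD-502 readings as ground truth.
X = np.random.rand(40, 3)              # columns: green, red, NIR reflectance
spad = 20 + 25 * X[:, 2] - 10 * X[:, 1] + np.random.randn(40)

# Ordinary least-squares calibration of SPAD against the three channels.
A = np.hstack([X, np.ones((40, 1))])   # add an intercept term
coeffs, *_ = np.linalg.lstsq(A, spad, rcond=None)

def estimate_spad(green, red, nir):
    """Predict a SPAD value for new per-plot channel reflectances."""
    return coeffs @ np.array([green, red, nir, 1.0])
```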
Detection of sudden death syndrome using a multispectral imaging sensor
USDA-ARS?s Scientific Manuscript database
Sudden death syndrome (SDS), caused by the fungus Fusarium solani f. sp. glycines, is a widespread mid- to late-season disease with distinctive foliar symptoms. This paper reported the development of an image analysis based method to detect SDS using a multispectral image sensor. A hue, saturation a...
Computational multispectral video imaging [Invited].
Wang, Peng; Menon, Rajesh
2018-01-01
Multispectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information to a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrated spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
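The regularization-based linear inversion mentioned above can be illustrated with a simple Tikhonov solver; the system matrix, its dimensions and the regularization weight below are placeholders standing in for the calibrated spatial-spectral code of the actual sensor.

```python
import numpy as np

def reconstruct_spectrum(A, b, lam=1e-3):
    """Tikhonov-regularized inversion of the sensor's spatial-spectral code:
    solve min ||A x - b||^2 + lam ||x||^2 for the multispectral image x.
    A: calibrated system matrix (n_measurements x n_unknowns)
    b: raw sensor readings (n_measurements,)"""
    AtA = A.T @ A
    return np.linalg.solve(AtA + lam * np.eye(AtA.shape[0]), A.T @ b)

# Toy dimensions: 256 sensor pixels encoding a 256-element spatial-spectral patch.
A = np.random.rand(256, 256)
b = A @ np.random.rand(256)
x_hat = reconstruct_spectrum(A, b)
```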
The fusion of satellite and UAV data: simulation of high spatial resolution band
NASA Astrophysics Data System (ADS)
Jenerowicz, Agnieszka; Siok, Katarzyna; Woroszkiewicz, Malgorzata; Orych, Agata
2017-10-01
Remote sensing techniques used in precision agriculture and farming that apply imagery data obtained with sensors mounted on UAV platforms have become more popular in the last few years due to the availability of low-cost UAV platforms and low-cost sensors. Data obtained from low altitudes with low-cost sensors can be characterised by high spatial and radiometric resolution but quite low spectral resolution; therefore the application of imagery data obtained with such technology is quite limited and can be used only for basic land cover classification. To enrich the spectral resolution of imagery data acquired with low-cost sensors from low altitudes, the authors proposed the fusion of RGB data obtained with a UAV platform with multispectral satellite imagery. The fusion is based on the pansharpening process, which aims to integrate the spatial details of the high-resolution panchromatic image with the spectral information of lower resolution multispectral or hyperspectral imagery to obtain multispectral or hyperspectral images with high spatial resolution. The key to pansharpening is to properly estimate the missing spatial details of the multispectral images while preserving their spectral properties. In the research, the authors presented the fusion of RGB images (with high spatial resolution) obtained with sensors mounted on low-cost UAV platforms and multispectral satellite imagery acquired with satellite sensors, i.e. Landsat 8 OLI. To perform the fusion of UAV data with satellite imagery, the simulation of panchromatic bands from RGB data, based on a linear combination of the spectral channels, was conducted. Next, for the simulated bands and multispectral satellite images, the Gram-Schmidt pansharpening method was applied. As a result of the fusion, the authors obtained several multispectral images with very high spatial resolution and then analysed the spatial and spectral accuracies of the processed images.
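The panchromatic-band simulation from RGB, and the general idea of component-substitution sharpening that Gram-Schmidt pansharpening refines, can be sketched as follows; the channel weights and the simple intensity-ratio substitution are illustrative and do not reproduce the Gram-Schmidt implementation used in the paper.

```python
import numpy as np

def simulate_pan(rgb, weights=(0.299, 0.587, 0.114)):
    """Simulate a high-resolution panchromatic band as a linear combination of
    the UAV RGB channels (the weights here are illustrative luminance weights)."""
    w = np.asarray(weights, dtype=float)
    return np.tensordot(rgb.astype(float), w / w.sum(), axes=([2], [0]))

def component_substitution(ms_up, pan):
    """Generic component-substitution sharpening of an upsampled multispectral
    stack (rows x cols x bands) with a co-registered pan band; Gram-Schmidt
    pansharpening is a refined variant of this substitution idea."""
    intensity = ms_up.mean(axis=2)
    gain = (pan + 1e-6) / (intensity + 1e-6)
    return ms_up * gain[..., None]
```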
Wachman, Elliot S; Geyer, Stanley J; Recht, Joel M; Ward, Jon; Zhang, Bill; Reed, Murray; Pannell, Chris
2014-05-01
An acousto-optic tunable filter (AOTF)-based multispectral imaging microscope system allows the combination of cellular morphology and multiple biomarker stainings on a single microscope slide. We describe advances in AOTF technology that have greatly improved spectral purity, field uniformity, and image quality. A multispectral imaging bright field microscope using these advances demonstrates pathology results that have great potential for clinical use.
USDA-ARS?s Scientific Manuscript database
This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One came...
A multispectral photon-counting double random phase encoding scheme for image authentication.
Yi, Faliu; Moon, Inkyu; Lee, Yeon H
2014-05-20
In this paper, we propose a new method for color image-based authentication that combines multispectral photon-counting imaging (MPCI) and double random phase encoding (DRPE) schemes. The sparsely distributed information from MPCI and the stationary white noise signal from DRPE make intruder attacks difficult. In this authentication method, the original multispectral RGB color image is down-sampled into a Bayer image. The three types of color samples (red, green and blue color) in the Bayer image are encrypted with DRPE and the amplitude part of the resulting image is photon counted. The corresponding phase information that has nonzero amplitude after photon counting is then kept for decryption. Experimental results show that the retrieved images from the proposed method do not visually resemble their original counterparts. Nevertheless, the original color image can be efficiently verified with statistical nonlinear correlations. Our experimental results also show that different interpolation algorithms applied to Bayer images result in different verification effects for multispectral RGB color images.
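The double random phase encoding part of the scheme follows the classical two-mask construction and can be sketched with FFTs as below; the photon-counting and Bayer-sampling steps of the proposed method are not included.

```python
import numpy as np

def drpe_encrypt(img, phase1, phase2):
    """Classical double random phase encoding: random phase masks applied in
    the spatial and Fourier domains (photon counting on the amplitude, as in
    the paper, would follow this step)."""
    field = img * np.exp(1j * phase1)
    spectrum = np.fft.fft2(field) * np.exp(1j * phase2)
    return np.fft.ifft2(spectrum)

def drpe_decrypt(cipher, phase1, phase2):
    spectrum = np.fft.fft2(cipher) * np.exp(-1j * phase2)
    return np.fft.ifft2(spectrum) * np.exp(-1j * phase1)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
p1, p2 = 2 * np.pi * rng.random((2, 64, 64))
cipher = drpe_encrypt(img, p1, p2)
recovered = np.real(drpe_decrypt(cipher, p1, p2))
assert np.allclose(recovered, img)
```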
NASA Astrophysics Data System (ADS)
Matikainen, Leena; Karila, Kirsi; Hyyppä, Juha; Litkey, Paula; Puttonen, Eetu; Ahokas, Eero
2017-06-01
During the last 20 years, airborne laser scanning (ALS), often combined with passive multispectral information from aerial images, has shown its high feasibility for automated mapping processes. The main benefits have been achieved in the mapping of elevated objects such as buildings and trees. Recently, the first multispectral airborne laser scanners have been launched, and active multispectral information is for the first time available for 3D ALS point clouds from a single sensor. This article discusses the potential of this new technology in map updating, especially in automated object-based land cover classification and change detection in a suburban area. For our study, Optech Titan multispectral ALS data over a suburban area in Finland were acquired. Results from an object-based random forests analysis suggest that the multispectral ALS data are very useful for land cover classification, considering both elevated classes and ground-level classes. The overall accuracy of the land cover classification results with six classes was 96% compared with validation points. The classes under study included building, tree, asphalt, gravel, rocky area and low vegetation. Compared to classification of single-channel data, the main improvements were achieved for ground-level classes. According to feature importance analyses, multispectral intensity features based on several channels were more useful than those based on one channel. Automatic change detection for buildings and roads was also demonstrated by utilising the new multispectral ALS data in combination with old map vectors. In change detection of buildings, an old digital surface model (DSM) based on single-channel ALS data was also used. Overall, our analyses suggest that the new data have high potential for further increasing the automation level in mapping. Unlike passive aerial imaging commonly used in mapping, the multispectral ALS technology is independent of external illumination conditions, and there are no shadows on intensity images produced from the data. These are significant advantages in developing automated classification and change detection procedures.
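The object-based random forests step can be sketched with scikit-learn; the feature set and labels below are synthetic stand-ins for the per-segment height and multispectral intensity features described in the article.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical object-level features: per-segment mean height above ground plus
# mean and std of intensity in each of three ALS channels (7 features total).
X = np.random.rand(5000, 7)
y = np.random.randint(0, 6, size=5000)   # building, tree, asphalt, gravel, rock, low veg

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
print("feature importances:", clf.feature_importances_)
```

The feature-importance output mirrors the kind of analysis the article uses to compare single-channel and multi-channel intensity features.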
Efficient single-pixel multispectral imaging via non-mechanical spatio-spectral modulation.
Li, Ziwei; Suo, Jinli; Hu, Xuemei; Deng, Chao; Fan, Jingtao; Dai, Qionghai
2017-01-27
Combining spectral imaging with compressive sensing (CS) enables efficient data acquisition by fully utilizing the intrinsic redundancies in natural images. Current compressive multispectral imagers, which are mostly based on array sensors (e.g., CCD or CMOS), suffer from limited spectral range and relatively low photon efficiency. To address these issues, this paper reports a multispectral imaging scheme with a single-pixel detector. Inspired by the spatial resolution redundancy of current spatial light modulators (SLMs) relative to the target reconstruction, we design an all-optical spectral splitting device to spatially split the light emitted from the object into several counterparts with different spectra. Separated spectral channels are spatially modulated simultaneously with individual codes by an SLM. This no-moving-part modulation ensures a stable and fast system, and the spatial multiplexing ensures an efficient acquisition. A proof-of-concept setup is built and validated for 8-channel multispectral imaging within the 420-720 nm wavelength range on both macro and micro objects, showing its potential as an efficient multispectral imager for macroscopic and biomedical applications.
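A toy sketch of the per-channel single-pixel reconstruction is shown below: random binary SLM codes produce the measurements, and a sparse regression recovers the channel image. The pixel-basis sparsity and the solver choice are simplifications; a real system would more likely reconstruct in a transform basis suited to natural scenes.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy sketch: one spectral channel of the scene is modulated by random binary SLM codes
# and summed onto a single-pixel detector; the channel image is recovered by sparse regression.
rng = np.random.default_rng(2)
n_pix, n_meas = 32 * 32, 400
scene = np.zeros(n_pix)
scene[rng.choice(n_pix, 40, replace=False)] = rng.random(40)   # sparse toy scene

A = rng.integers(0, 2, size=(n_meas, n_pix)).astype(float)     # SLM modulation patterns
y = A @ scene                                                   # single-pixel measurements

solver = Lasso(alpha=0.01, max_iter=20000)
solver.fit(A, y)
recovered = solver.coef_.reshape(32, 32)
```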
Atmospheric correction for remote sensing image based on multi-spectral information
NASA Astrophysics Data System (ADS)
Wang, Yu; He, Hongyan; Tan, Wei; Qi, Wenwen
2018-03-01
The light collected by remote sensors in space must transit through the Earth's atmosphere. All satellite images are affected at some level by lightwave scattering and absorption from aerosols, water vapor and particulates in the atmosphere. For generating high-quality scientific data, atmospheric correction is required to remove atmospheric effects and to convert digital number (DN) values to surface reflectance (SR). Every optical satellite in orbit observes the Earth through the same atmosphere, but each satellite image is affected differently because atmospheric conditions are constantly changing. A detailed physics-based radiative transfer model such as 6SV requires substantial ancillary information about the atmospheric conditions at the acquisition time. This paper investigates the simultaneous retrieval of atmospheric radiation parameters from the multi-spectral information itself, in order to improve the estimates of surface reflectance through physics-based atmospheric correction. Ancillary information on the aerosol optical depth (AOD) and total water vapor (TWV), derived from the multi-spectral information based on specific spectral properties, was used as input to the 6SV model. The experiments were carried out on images from Sentinel-2, which carries a Multispectral Instrument (MSI) recording in 13 spectral bands and covering a wide range of wavelengths from 440 up to 2200 nm. The results suggest that per-pixel atmospheric correction through the 6SV model, integrating AOD and TWV derived from multispectral information, is better suited for accurate analysis of satellite images and quantitative remote sensing applications.
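The final correction step can be sketched as below, assuming a 6S-style set of per-band coefficients (often labelled xa, xb, xc) obtained from a 6SV run fed with the AOD and TWV retrieved from the imagery. The coefficient values and the gain/offset are placeholders, and the exact formulation used in the paper may differ.

```python
import numpy as np

def dn_to_surface_reflectance(dn, gain, offset, xa, xb, xc):
    """Sketch of per-band, per-pixel atmospheric correction with 6S-style coefficients.

    gain/offset convert digital numbers to at-sensor radiance; xa, xb, xc are the
    correction coefficients assumed to come from a 6S/6SV run parameterized with the
    AOD and total water vapor estimated from the multispectral data.
    """
    radiance = gain * dn + offset
    y = xa * radiance - xb
    return y / (1.0 + xc * y)          # surface reflectance

# Hypothetical numbers for one band; real values come from sensor metadata and the 6SV run.
dn = np.array([[1200, 1350], [1100, 1500]], dtype=float)
sr = dn_to_surface_reflectance(dn, gain=0.01, offset=-0.1, xa=0.0025, xb=0.12, xc=0.18)
```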
Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images
Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki
2015-01-01
In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has become a chronic problem and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures. PMID:26007744
Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images.
Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki
2015-05-22
In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has become a chronic problem and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures.
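The intensity-substitution fusion idea can be sketched as follows. HSV is used here as a convenient stand-in for the IHS transform, and the histogram matching is a common simplification; the paper's method additionally applies directionally-adaptive regularization and non-local means filtering, which are not reproduced here.

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb
from skimage.transform import resize

def ihs_like_fusion(ms_lr, pan_hr):
    """Component-substitution fusion sketch (HSV as a stand-in for the IHS transform).

    ms_lr : low-resolution RGB multispectral image, floats in [0, 1]
    pan_hr: high-resolution single-band image, floats in [0, 1]
    """
    ms_up = resize(ms_lr, pan_hr.shape + (3,), anti_aliasing=True)
    hsv = rgb2hsv(ms_up)
    v = hsv[..., 2]
    # Match the pan image to the intensity component before substitution.
    pan_matched = (pan_hr - pan_hr.mean()) * (v.std() / (pan_hr.std() + 1e-12)) + v.mean()
    hsv[..., 2] = np.clip(pan_matched, 0.0, 1.0)
    return hsv2rgb(hsv)

rng = np.random.default_rng(3)
fused = ihs_like_fusion(rng.random((32, 32, 3)), rng.random((128, 128)))
```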
3D Land Cover Classification Based on Multispectral LIDAR Point Clouds
NASA Astrophysics Data System (ADS)
Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong
2016-06-01
A multispectral lidar system can emit simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured by the receiver of the sensor, and the return signal, together with the position and orientation information of the sensor, is recorded. These recorded data are processed together with GNSS/IMU data in post-processing, forming high-density multispectral 3D point clouds. As the first commercial multispectral airborne lidar sensor, the Optech Titan system is capable of collecting point cloud data in three channels: 532 nm visible (green), 1064 nm near-infrared (NIR) and 1550 nm intermediate infrared (IR). It has become a new source of data for 3D land cover classification. The paper presents an Object-Based Image Analysis (OBIA) approach that uses only multispectral lidar point cloud datasets for 3D land cover classification. The approach consists of three steps. Firstly, multispectral intensity images are segmented into image objects on the basis of multi-resolution segmentation integrating different scale parameters. Secondly, intensity objects are classified into nine categories by using customized classification indexes and a combination of the multispectral reflectance with the vertical distribution of object features. Finally, accuracy assessment is conducted by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show high overall accuracy for most of the land cover types; an overall accuracy of over 90% is achieved using multispectral lidar point clouds for 3D land cover classification.
USDA-ARS?s Scientific Manuscript database
Structured-illumination reflectance imaging (SIRI) is a new, promising imaging modality for enhancing quality detection of food. A liquid-crystal tunable filter (LCTF)-based multispectral SIRI system was developed and used for selecting optimal wavebands to detect bruising in apples. Immediately aft...
NASA Astrophysics Data System (ADS)
Wu, Yu; Zheng, Lijuan; Xie, Donghai; Zhong, Ruofei
2017-07-01
In this study, extended morphological attribute profiles (EAPs) and independent component analysis (ICA) were combined for feature extraction from high-resolution multispectral satellite remote sensing images, and the regularized least squares (RLS) approach with the radial basis function (RBF) kernel was then applied for classification. Based on the two major independent components, geometrical features were extracted using the EAPs method. Three morphological attributes were calculated for each independent component: area, standard deviation, and moment of inertia. The extracted geometrical features were then classified using the RLS approach and the commonly used LIB-SVM support vector machine library. Worldview-3 and Chinese GF-2 multispectral images were tested, and the results showed that the features extracted by EAPs and ICA can effectively improve the accuracy of high-resolution multispectral image classification, by 2% compared with the EAPs and principal component analysis (PCA) method, and by 6% compared with APs applied to the original high-resolution multispectral data. Moreover, the results suggest that both the GURLS and LIB-SVM libraries are well suited for multispectral remote sensing image classification. The GURLS library is easy to use thanks to automatic parameter selection, but its computation time may be larger than that of the LIB-SVM library. This study should be helpful for classification applications of high-resolution multispectral satellite remote sensing images.
Wang, Guizhou; Liu, Jianbo; He, Guojin
2013-01-01
This paper presents a new classification method for high-spatial-resolution remote sensing images based on a strategic mechanism of spatial mapping and reclassification. The proposed method includes four steps. First, the multispectral image is classified by a traditional pixel-based classification method (support vector machine). Second, the panchromatic image is subdivided by watershed segmentation. Third, the pixel-based multispectral image classification result is mapped to the panchromatic segmentation result based on a spatial mapping mechanism and the area dominant principle. During the mapping process, an area proportion threshold is set, and the regional property is defined as unclassified if the maximum area proportion does not surpass the threshold. Finally, unclassified regions are reclassified based on spectral information using the minimum distance to mean algorithm. Experimental results show that the classification method for high-spatial-resolution remote sensing images based on the spatial mapping mechanism and reclassification strategy can make use of both panchromatic and multispectral information, integrate the pixel- and object-based classification methods, and improve classification accuracy. PMID:24453808
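The spatial mapping step with the area-proportion threshold can be sketched directly, as below; the per-pixel labels and watershed segment ids are synthetic placeholders standing in for the SVM and segmentation outputs.

```python
import numpy as np

def map_labels_to_segments(pixel_labels, segments, area_threshold=0.5, unclassified=-1):
    """Assign each segment the dominant pixel-based class label (area dominant principle).

    If the dominant class covers less than `area_threshold` of the segment area,
    the segment is marked unclassified, to be reclassified later by minimum
    distance to class means as described in the paper.
    """
    out = np.full_like(segments, unclassified)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        classes, counts = np.unique(pixel_labels[mask], return_counts=True)
        best = counts.argmax()
        if counts[best] / mask.sum() >= area_threshold:
            out[mask] = classes[best]
    return out

rng = np.random.default_rng(4)
pixel_labels = rng.integers(0, 4, size=(64, 64))       # e.g. per-pixel SVM classes
segments = rng.integers(0, 30, size=(64, 64))          # e.g. watershed segment ids
region_labels = map_labels_to_segments(pixel_labels, segments)
```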
Development and bench testing of a multi-spectral imaging technology built on a smartphone platform
NASA Astrophysics Data System (ADS)
Bolton, Frank J.; Weiser, Reuven; Kass, Alex J.; Rose, Donny; Safir, Amit; Levitz, David
2016-03-01
Cervical cancer screening presents a great challenge for clinicians across the developing world. In many countries, cervical cancer screening is done by visualization with the naked eye. Simple brightfield white light imaging with photo documentation has been shown to make a significant impact on cervical cancer care. Adoption of smartphone-based cervical imaging devices is increasing across Africa. However, advanced imaging technologies, such as multispectral imaging systems, are seldom deployed in low-resource settings, where they are needed most. To address this challenge, the optical system of a smartphone-based mobile colposcopy imaging system was refined, integrating components required for low-cost, portable multi-spectral imaging of the cervix. This paper describes the refinement of the mobile colposcope to enable it to acquire images of the cervix at multiple illumination wavelengths, including modeling and laboratory testing. Wavelengths were selected to enable quantification of the main absorbers in tissue (oxy- and deoxy-hemoglobin, and water), as well as scattering parameters that describe the size distribution of scatterers. The necessary hardware and software modifications are reviewed. Initial testing suggests the multi-spectral mobile device holds promise for use in low-resource settings.
Zhao, Yong-guang; Ma, Ling-ling; Li, Chuan-rong; Zhu, Xiao-hua; Tang, Ling-li
2015-07-01
Due to the lack of enough spectral bands in a multi-spectral sensor, it is difficult to reconstruct a surface reflectance spectrum from the finite spectral information acquired by a multi-spectral instrument. Here, taking full account of pixel heterogeneity in remote sensing images, a method is proposed to simulate hyperspectral data from multispectral data based on a canopy radiative transfer model. This method first assumes the mixed pixels contain two types of land cover, i.e., vegetation and soil. The sensitive parameters of the Soil-Leaf-Canopy (SLC) model and a soil ratio factor were retrieved from multi-spectral data based on Look-Up Table (LUT) technology. Then, combined with the soil ratio factor, all the parameters were input into the SLC model to simulate the surface reflectance spectrum from 400 to 2400 nm. Taking a Landsat Enhanced Thematic Mapper Plus (ETM+) image as the reference image, the surface reflectance spectrum was simulated. The simulated reflectance spectrum revealed different feature information for different surface types. To test the performance of this method, the simulated reflectance spectrum was convolved with the Landsat ETM+ spectral response curves and the Moderate Resolution Imaging Spectrometer (MODIS) spectral response curves to obtain simulated Landsat ETM+ and MODIS images. Finally, the simulated Landsat ETM+ and MODIS images were compared with the observed Landsat ETM+ and MODIS images. The results generally showed high correlation coefficients (Landsat: 0.90-0.99, MODIS: 0.74-0.85) between most simulated bands and observed bands, indicating that the simulated reflectance spectrum was reliable.
NASA Astrophysics Data System (ADS)
Kim, Manjae; Kim, Sewoong; Hwang, Minjoo; Kim, Jihun; Je, Minkyu; Jang, Jae Eun; Lee, Dong Hun; Hwang, Jae Youn
2017-02-01
To date, the incidence rates of various skin diseases have increased due to hereditary and environmental factors including stress, irregular diet, pollution, etc. Among these skin diseases, seborrheic dermatitis and psoriasis are chronic, relapsing conditions involving infection and temporary alopecia. However, they typically exhibit similar symptoms, making discrimination between them difficult. To prevent associated complications and to choose appropriate treatments, it is crucial to discriminate between seborrheic dermatitis and psoriasis with high specificity and sensitivity, and further to monitor the skin lesions continuously and quantitatively during treatment at locations other than a hospital. Thus, we demonstrate here a mobile multispectral imaging system connected to a smartphone for self-diagnosis of seborrheic dermatitis and further discrimination between seborrheic dermatitis and psoriasis on the scalp, which is the more challenging case. Using the system developed, multispectral imaging and analysis of seborrheic dermatitis and psoriasis on the scalp was carried out. It was found that the spectral signatures of seborrheic dermatitis and psoriasis were discernible, and thus seborrheic dermatitis on the scalp could be distinguished from psoriasis using the system. In particular, smartphone-based multispectral imaging and analysis offered better discrimination between seborrheic dermatitis and psoriasis than RGB imaging and analysis. These results suggest that the multispectral imaging system based on a smartphone has the potential for self-diagnosis of seborrheic dermatitis with high portability and specificity.
A New Pansharpening Method Based on Spatial and Spectral Sparsity Priors.
He, Xiyan; Condat, Laurent; Bioucas-Diaz, Jose; Chanussot, Jocelyn; Xia, Junshi
2014-06-27
The development of multisensor systems in recent years has led to great increase in the amount of available remote sensing data. Image fusion techniques aim at inferring high quality images of a given area from degraded versions of the same area obtained by multiple sensors. This paper focuses on pansharpening, which is the inference of a high spatial resolution multispectral image from two degraded versions with complementary spectral and spatial resolution characteristics: a) a low spatial resolution multispectral image; and b) a high spatial resolution panchromatic image. We introduce a new variational model based on spatial and spectral sparsity priors for the fusion. In the spectral domain we encourage low-rank structure, whereas in the spatial domain we promote sparsity on the local differences. Given the fact that both panchromatic and multispectral images are integrations of the underlying continuous spectra using different channel responses, we propose to exploit appropriate regularizations based on both spatial and spectral links between panchromatic and the fused multispectral images. A weighted version of the vector Total Variation (TV) norm of the data matrix is employed to align the spatial information of the fused image with that of the panchromatic image. With regard to spectral information, two different types of regularization are proposed to promote a soft constraint on the linear dependence between the panchromatic and the fused multispectral images. The first one estimates directly the linear coefficients from the observed panchromatic and low resolution multispectral images by Linear Regression (LR) while the second one employs the Principal Component Pursuit (PCP) to obtain a robust recovery of the underlying low-rank structure. We also show that the two regularizers are strongly related. The basic idea of both regularizers is that the fused image should have low-rank and preserve edge locations. We use a variation of the recently proposed Split Augmented Lagrangian Shrinkage (SALSA) algorithm to effectively solve the proposed variational formulations. Experimental results on simulated and real remote sensing images show the effectiveness of the proposed pansharpening method compared to the state-of-the-art.
Automated road network extraction from high spatial resolution multi-spectral imagery
NASA Astrophysics Data System (ADS)
Zhang, Qiaoping
For the last three decades, the Geomatics Engineering and Computer Science communities have considered automated road network extraction from remotely-sensed imagery to be a challenging and important research topic. The main objective of this research is to investigate the theory and methodology of automated feature extraction for image-based road database creation, refinement or updating, and to develop a series of algorithms for road network extraction from high resolution multi-spectral imagery. The proposed framework for road network extraction from multi-spectral imagery begins with an image segmentation using the k-means algorithm. This step mainly concerns the exploitation of the spectral information for feature extraction. The road cluster is automatically identified using a fuzzy classifier based on a set of predefined road surface membership functions. These membership functions are established based on the general spectral signature of road pavement materials and the corresponding normalized digital numbers on each multi-spectral band. Shape descriptors of the Angular Texture Signature are defined and used to reduce the misclassifications between roads and other spectrally similar objects (e.g., crop fields, parking lots, and buildings). An iterative and localized Radon transform is developed for the extraction of road centerlines from the classified images. The purpose of the transform is to accurately and completely detect the road centerlines. It is able to find short, long, and even curvilinear lines. The input image is partitioned into a set of subset images called road component images. An iterative Radon transform is locally applied to each road component image. At each iteration, road centerline segments are detected based on an accurate estimation of the line parameters and line widths. Three localization approaches are implemented and compared using qualitative and quantitative methods. Finally, the road centerline segments are grouped into a road network. The extracted road network is evaluated against a reference dataset using a line segment matching algorithm. The entire process is unsupervised and fully automated. Based on extensive experimentation on a variety of remotely-sensed multi-spectral images, the proposed methodology achieves a moderate success in automating road network extraction from high spatial resolution multi-spectral imagery.
Estimating atmospheric parameters and reducing noise for multispectral imaging
Conger, James Lynn
2014-02-25
A method and system for estimating atmospheric radiance and transmittance. An atmospheric estimation system is divided into a first phase and a second phase. The first phase inputs an observed multispectral image and an initial estimate of the atmospheric radiance and transmittance for each spectral band and calculates the atmospheric radiance and transmittance for each spectral band, which can be used to generate a "corrected" multispectral image that is an estimate of the surface multispectral image. The second phase inputs the observed multispectral image and the surface multispectral image that was generated by the first phase and removes noise from the surface multispectral image by smoothing out change in average deviations of temperatures.
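The correction producing the "surface" image amounts to inverting a simple per-band at-sensor model once the radiance and transmittance estimates are available; a sketch under that assumption follows, with made-up radiance and transmittance values.

```python
import numpy as np

def correct_band(observed, path_radiance, transmittance):
    """Invert the simple at-sensor model: observed = transmittance * surface + path_radiance.

    The patent iteratively refines the per-band radiance/transmittance estimates; here the
    estimates are assumed given and only the inversion producing the corrected image is shown.
    """
    return (observed - path_radiance) / transmittance

rng = np.random.default_rng(5)
observed = rng.random((4, 64, 64))                 # 4 spectral bands
path_radiance = np.array([0.10, 0.08, 0.05, 0.03])
transmittance = np.array([0.80, 0.85, 0.90, 0.95])
surface = correct_band(observed, path_radiance[:, None, None], transmittance[:, None, None])
```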
FRIT characterized hierarchical kernel memory arrangement for multiband palmprint recognition
NASA Astrophysics Data System (ADS)
Kisku, Dakshina R.; Gupta, Phalguni; Sing, Jamuna K.
2015-10-01
In this paper, we present a hierarchical kernel associative memory (H-KAM) based computational model with Finite Ridgelet Transform (FRIT) representation for multispectral palmprint recognition. To characterize a multispectral palmprint image, the Finite Ridgelet Transform is used to achieve a very compact and distinctive representation of linear singularities while it also captures the singularities along lines and edges. The proposed system makes use of Finite Ridgelet Transform to represent multispectral palmprint image and it is then modeled by Kernel Associative Memories. Finally, the recognition scheme is thoroughly tested with a benchmarking multispectral palmprint database CASIA. For recognition purpose a Bayesian classifier is used. The experimental results exhibit robustness of the proposed system under different wavelengths of palm image.
NASA Astrophysics Data System (ADS)
Nishidate, Izumi; Ooe, Shintaro; Todoroki, Shinsuke; Asamizu, Erika
2013-05-01
To evaluate the functional pigments in the tomato fruits nondestructively, we propose a method based on the multispectral diffuse reflectance images estimated by the Wiener estimation for a digital RGB image. Each pixel of the multispectral image is converted to the absorbance spectrum and then analyzed by the multiple regression analysis to visualize the contents of chlorophyll a, lycopene and β-carotene. The result confirms the feasibility of the method for in situ imaging of chlorophyll a, β-carotene and lycopene in the tomato fruits.
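The Wiener-estimation step that maps an RGB value to a multispectral reflectance can be sketched as a learned linear operator. The noise-free least-squares form is used here, and the training spectra and camera sensitivities are random placeholders rather than measured data.

```python
import numpy as np

# Sketch of Wiener estimation: learn a linear map from camera RGB to multispectral
# reflectance from a training set of spectra and assumed camera spectral sensitivities.
rng = np.random.default_rng(6)
n_train, n_bands = 200, 16
spectra = rng.random((n_train, n_bands))            # training reflectance spectra
camera = rng.random((3, n_bands))                   # RGB spectral sensitivities (assumed)
rgb = spectra @ camera.T                            # simulated camera responses

W = (spectra.T @ rgb) @ np.linalg.inv(rgb.T @ rgb)  # Wiener/least-squares estimation matrix

rgb_pixel = rgb[0]                                  # one measured RGB value
est_spectrum = W @ rgb_pixel                        # estimated multispectral reflectance
```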
Multispectral Filter Arrays: Recent Advances and Practical Implementation
Lapray, Pierre-Jean; Wang, Xingbo; Thomas, Jean-Baptiste; Gouton, Pierre
2014-01-01
Thanks to technical progress in interference-filter design based on different technologies, we can finally successfully implement the concept of multispectral filter array-based sensors. This article provides the relevant state of the art for multispectral imaging systems and presents the characteristics of the elements of our multispectral sensor as a case study. The spectral characteristics are based on two different spatial arrangements that distribute eight different bandpass filters in the visible and near-infrared area of the spectrum. We demonstrate that the system is viable and evaluate its performance through sensor spectral simulation. PMID:25407904
Combined use of LiDAR data and multispectral earth observation imagery for wetland habitat mapping
NASA Astrophysics Data System (ADS)
Rapinel, Sébastien; Hubert-Moy, Laurence; Clément, Bernard
2015-05-01
Although wetlands play a key role in controlling flooding and nonpoint source pollution, sequestering carbon and providing an abundance of ecological services, the inventory and characterization of wetland habitats are most often limited to small areas. This explains why the understanding of their ecological functioning is still insufficient for a reliable functional assessment on areas larger than a few hectares. While LiDAR data and multispectral Earth Observation (EO) images are often used separately to map wetland habitats, their combined use is currently being assessed for different habitat types. The aim of this study is to evaluate the combination of multispectral and multiseasonal imagery and LiDAR data to precisely map the distribution of wetland habitats. The image classification was performed combining an object-based approach and decision-tree modeling. Four multispectral images with high (SPOT-5) and very high spatial resolution (Quickbird, KOMPSAT-2, aerial photographs) were classified separately. Another classification was then applied integrating summer and winter multispectral image data and three layers derived from LiDAR data: vegetation height, microtopography and intensity return. The comparison of classification results shows that some habitats are better identified on the winter image and others on the summer image (overall accuracies = 58.5 and 57.6%). They also point out that classification accuracy is highly improved (overall accuracy = 86.5%) when combining LiDAR data and multispectral images. Moreover, this study highlights the advantage of integrating vegetation height, microtopography and intensity parameters in the classification process. This article demonstrates that information provided by the synergetic use of multispectral images and LiDAR data can help in wetland functional assessment
Image denoising and deblurring using multispectral data
NASA Astrophysics Data System (ADS)
Semenishchev, E. A.; Voronin, V. V.; Marchuk, V. I.
2017-05-01
Decision-making systems are currently becoming widespread. These systems are based on the analysis of video sequences and also on additional data, such as volume, size change, the behavior of one or a group of objects, temperature gradient, the presence of local areas with strong differences, and others. Security and control systems are the main areas of application. Noise in the images strongly influences the subsequent processing and decision making. This paper considers the problem of primary signal processing for solving the tasks of image denoising and deblurring of multispectral data. The additional information from multispectral channels can improve the efficiency of object classification. In this paper we use a method that combines information about the objects obtained by cameras in different frequency bands. We apply a method based on simultaneous minimization of the L2 norm and the first-order squared differences of the sequence of estimates to denoise the image and restore blur at the edges. In case of information loss, an approach is applied that is based on the interpolation of data taken from the analysis of objects located in other areas and on information obtained from the multispectral camera. The effectiveness of the proposed approach is shown on a set of test images.
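A one-dimensional sketch of the "simultaneous minimization of the L2 norm and first-order squared differences" idea is given below; it is the closed-form quadratic-smoothing special case, without the edge handling or multispectral fusion described in the paper.

```python
import numpy as np

def smooth_l2_firstdiff(y, lam=5.0):
    """Minimize ||y - x||^2 + lam * ||D x||^2 for a 1-D signal (closed-form sketch).

    D is the first-order difference operator; applying the same idea per row/column,
    with lam reduced near detected edges, gives an edge-aware variant.
    """
    n = len(y)
    D = np.diff(np.eye(n), axis=0)                  # (n-1, n) first-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

rng = np.random.default_rng(7)
clean = np.sin(np.linspace(0, 4 * np.pi, 200))
noisy = clean + 0.3 * rng.standard_normal(200)
denoised = smooth_l2_firstdiff(noisy, lam=10.0)
```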
High performance multi-spectral interrogation for surface plasmon resonance imaging sensors.
Sereda, A; Moreau, J; Canva, M; Maillart, E
2014-04-15
Surface plasmon resonance (SPR) sensing has proven to be a valuable tool in the field of surface interaction characterization, especially for biomedical applications where label-free techniques are of particular interest. In order to approach the theoretical resolution limit, most SPR-based systems have turned to either angular or spectral interrogation modes, which both offer very accurate real-time measurements, but at the expense of the 2-dimensional imaging capability, therefore decreasing the data throughput. In this article, we show numerically and experimentally how to combine the multi-spectral interrogation technique with 2D imaging, while finding an optimum in terms of resolution, accuracy, acquisition speed and reduction in data dispersion with respect to the classical reflectivity interrogation mode. This multi-spectral interrogation methodology is based on a robust five-parameter fit of the spectral reflectivity curve, which enables monitoring of the reflectivity spectral shift with a resolution on the order of ten picometers using only five wavelength measurements per point. Finally, such a multi-spectral plasmonic imaging system allows biomolecular interaction monitoring in a linear regime, independently of variations in the buffer optical index, as illustrated on a DNA-DNA model case.
Fluorescence multispectral imaging-based diagnostic system for atherosclerosis.
Ho, Cassandra Su Lyn; Horiuchi, Toshikatsu; Taniguchi, Hiroaki; Umetsu, Araya; Hagisawa, Kohsuke; Iwaya, Keiichi; Nakai, Kanji; Azmi, Amalina; Zulaziz, Natasha; Azhim, Azran; Shinomiya, Nariyoshi; Morimoto, Yuji
2016-08-20
The composition of atherosclerotic arterial walls is rich in lipids such as cholesterol, unlike that of normal arterial walls. In this study, we aimed to utilize this difference to diagnose atherosclerosis via multispectral fluorescence imaging, which allows identification of fluorescence originating from substances in the arterial wall. The inner surface of extracted arteries (rabbit abdominal aorta, human coronary artery) was illuminated by 405 nm excitation light and multispectral fluorescence images were obtained. Pathological examination of human coronary artery samples was carried out, and arterial thickness was calculated by measuring the combined media and intima thickness. The fluorescence spectra at atherosclerotic sites were different from those at normal sites. Multiple regions of interest (ROI) were selected within each sample and a ratio between two fluorescence intensity differences (where each intensity difference is calculated between an identifier wavelength and a base wavelength) was determined for each ROI, allowing discrimination of atherosclerotic sites. Fluorescence intensity and arterial thickness were found to be significantly correlated. These results indicate that multispectral fluorescence imaging provides qualitative and quantitative evaluation of atherosclerosis and is therefore a viable method for diagnosing the disease.
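The discrimination ratio described above can be computed per region of interest as sketched below; the wavelengths are hypothetical, since the paper's identifier and base wavelengths are not restated here.

```python
import numpy as np

def discrimination_ratio(stack, wavelengths, id1, id2, base):
    """Ratio of two fluorescence intensity differences for one ROI.

    stack: (n_wavelengths, h, w) multispectral fluorescence cube for the ROI
    id1, id2, base: identifier and base wavelengths (values used below are hypothetical).
    """
    idx = {w: i for i, w in enumerate(wavelengths)}
    mean = stack.reshape(stack.shape[0], -1).mean(axis=1)       # mean intensity per wavelength
    return (mean[idx[id1]] - mean[idx[base]]) / (mean[idx[id2]] - mean[idx[base]])

rng = np.random.default_rng(8)
wavelengths = [500, 550, 600, 650]                     # nm, illustrative
roi_stack = rng.random((4, 20, 20))
r = discrimination_ratio(roi_stack, wavelengths, id1=550, id2=650, base=500)
```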
NASA Astrophysics Data System (ADS)
Klaessens, John H. G. M.; Nelisse, Martin; Verdaasdonk, Rudolf M.; Noordmans, Herke Jan
2013-03-01
Clinical interventions can cause changes in tissue perfusion, oxygenation or temperature. Real-time imaging of these phenomena could be useful for surgical strategy or for understanding physiological regulation mechanisms. Two noncontact imaging techniques were applied for imaging of large tissue areas: LED-based multispectral imaging (MSI, 17 different wavelengths, 370-880 nm) and thermal imaging (7.5 to 13.5 μm). Oxygenation concentration changes were calculated using different analysis methods. The advantages of these methods are presented for stationary and dynamic applications. Concentration calculations of chromophores in tissue require the right choice of wavelengths. The effects of different wavelength choices on hemoglobin concentration calculations were studied under laboratory conditions and subsequently applied in clinical studies. Corrections for interferences during the clinical registrations (ambient light fluctuations, tissue movements) were performed. The wavelength dependency of the algorithms was studied, and the wavelength sets with the best results are presented. The multispectral and thermal imaging systems were applied during clinical intervention studies: reperfusion after tissue flap transplantation (ENT), effectiveness of local anesthetic blocks, and open brain surgery in patients with epileptic seizures. The LED multispectral imaging system successfully imaged the perfusion and oxygenation changes during clinical interventions. The thermal images show local heat distributions over tissue areas as a result of changes in tissue perfusion. Multispectral imaging and thermal imaging provide complementary information and are promising techniques for real-time diagnostics of physiological processes in medicine.
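The usual way such oxygenation changes are computed is a modified Beer-Lambert least-squares step across the LED wavelengths; the sketch below assumes that formulation, and the extinction coefficients and attenuation changes are placeholders, not tabulated values.

```python
import numpy as np

# Sketch of the modified Beer-Lambert step: changes in oxy-/deoxy-hemoglobin are obtained
# from attenuation changes at several wavelengths by least squares. The extinction
# coefficients below are placeholders, not tabulated values.
wavelengths = np.array([660, 730, 810, 880])           # nm, illustrative LED set
E = np.array([[0.08, 0.35],                            # columns: [HbO2, Hb] extinction
              [0.12, 0.28],
              [0.20, 0.20],
              [0.27, 0.16]])

delta_A = np.array([0.010, 0.008, 0.004, 0.002])       # measured attenuation changes
delta_c, *_ = np.linalg.lstsq(E, delta_A, rcond=None)  # [dHbO2, dHb] per unit path length
print("dHbO2, dHb =", delta_c)
```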
Acousto-optic tunable filter chromatic aberration analysis and reduction with auto-focus system
NASA Astrophysics Data System (ADS)
Wang, Yaoli; Chen, Yuanyuan
2018-07-01
An acousto-optic tunable filter (AOTF) displays optical band broadening and sidelobes as a result of the coupling between the acoustic wave and optical waves of different wavelengths. These features were analysed by wave-vector phase matching between the optical and acoustic waves. A crossed-line test board was imaged by an AOTF multi-spectral imaging system, showing image blurring in the direction of diffraction and sharpness in the orthogonal direction, produced by the greater bandwidth and sidelobes in the former direction. Applying the secondary-imaging principle and considering the wavelength-dependent refractive index, the focal length varies over the broad wavelength range. An automatic focusing method is therefore proposed for use in AOTF multi-spectral imaging systems. A new method for image-sharpness evaluation, based on an improved Structural Similarity Index Measure (SSIM) and on the characteristics of the AOTF imaging system, is also proposed. Compared with the traditional gradient operator, the new evaluation function discriminates image quality equally well and can thus achieve automatic focusing for different multispectral images.
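One plausible reading of an SSIM-based sharpness criterion for autofocus is sketched below: each frame in a focus sweep is compared with a blurred copy of itself, and the frame that loses the most structure under blurring is taken as best focused. This is an assumption about the "improved SSIM" function, not the paper's exact definition.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.metrics import structural_similarity as ssim

def sharpness_score(img):
    """No-reference sharpness proxy: SSIM between a frame and a blurred copy of itself.

    A sharp frame loses more structure under blurring, so 1 - SSIM is larger at best focus.
    """
    blurred = gaussian(img, sigma=2)
    return 1.0 - ssim(img, blurred, data_range=img.max() - img.min())

def autofocus(frames):
    """Return the index of the sharpest frame in a focus sweep."""
    return int(np.argmax([sharpness_score(f) for f in frames]))

rng = np.random.default_rng(9)
frames = [gaussian(rng.random((64, 64)), sigma=s) for s in (3.0, 1.5, 0.5, 2.0)]
best = autofocus(frames)   # expected: index 2 (least blurred)
```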
NASA Technical Reports Server (NTRS)
Blonski, Slawomir; Glasser, Gerald; Russell, Jeffrey; Ryan, Robert; Terrie, Greg; Zanoni, Vicki
2003-01-01
Spectral band synthesis is a key step in the process of creating a simulated multispectral image from hyperspectral data. In this step, narrow hyperspectral bands are combined into broader multispectral bands. Such an approach has been used quite often, but to the best of our knowledge accuracy of the band synthesis simulations has not been evaluated thus far. Therefore, the main goal of this paper is to provide validation of the spectral band synthesis algorithm used in the ART software. The next section contains a description of the algorithm and an example of its application. Using spectral responses of AVIRIS, Hyperion, ALI, and ETM+, the following section shows how the synthesized spectral bands compare with actual bands, and it presents an evaluation of the simulation accuracy based on results of MODTRAN modeling. In the final sections of the paper, simulated images are compared with data acquired by actual satellite sensors. First, a Landsat 7 ETM+ image is simulated using an AVIRIS hyperspectral data cube. Then, two datasets collected with the Hyperion instrument from the EO-1 satellite are used to simulate multispectral images from the ALI and ETM+ sensors.
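The band synthesis step itself is a weighted sum of narrow hyperspectral bands, with weights taken from the target sensor's relative spectral response; a minimal sketch with an illustrative red-band response follows.

```python
import numpy as np

def synthesize_band(hyper_cube, hyper_wl, rsr_wl, rsr):
    """Synthesize one broad multispectral band from narrow hyperspectral bands.

    hyper_cube : (n_bands, h, w) hyperspectral radiance cube
    hyper_wl   : center wavelengths of the hyperspectral bands
    rsr_wl, rsr: relative spectral response curve of the target multispectral band
    """
    weights = np.interp(hyper_wl, rsr_wl, rsr, left=0.0, right=0.0)
    weights /= weights.sum()
    return np.tensordot(weights, hyper_cube, axes=1)

rng = np.random.default_rng(10)
hyper_wl = np.arange(400, 1000, 10)                    # 10 nm hyperspectral sampling
cube = rng.random((hyper_wl.size, 32, 32))
rsr_wl = np.array([630, 640, 650, 660, 670, 680, 690]) # illustrative red-band response
rsr = np.array([0.05, 0.5, 1.0, 1.0, 1.0, 0.5, 0.05])
red_band = synthesize_band(cube, hyper_wl, rsr_wl, rsr)
```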
Lu, Dengsheng; Batistella, Mateus; de Miranda, Evaristo E; Moran, Emilio
2008-01-01
Complex forest structure and abundant tree species in the moist tropical regions often cause difficulties in classifying vegetation classes with remotely sensed data. This paper explores improvement in vegetation classification accuracies through a comparative study of different image combinations based on the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data, as well as the combination of spectral signatures and textures. A maximum likelihood classifier was used to classify the different image combinations into thematic maps. This research indicated that data fusion based on HRG multispectral and panchromatic data slightly improved vegetation classification accuracies: a 3.1 to 4.6 percent increase in the kappa coefficient compared with the classification results based on original HRG or TM multispectral images. A combination of HRG spectral signatures and two textural images improved the kappa coefficient by 6.3 percent compared with pure HRG multispectral images. The textural images based on entropy or second-moment texture measures with a window size of 9 pixels × 9 pixels played an important role in improving vegetation classification accuracy. Overall, optical remote-sensing data are still insufficient for accurate vegetation classifications in the Amazon basin.
Lu, Dengsheng; Batistella, Mateus; de Miranda, Evaristo E.; Moran, Emilio
2009-01-01
Complex forest structure and abundant tree species in the moist tropical regions often cause difficulties in classifying vegetation classes with remotely sensed data. This paper explores improvement in vegetation classification accuracies through a comparative study of different image combinations based on the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data, as well as the combination of spectral signatures and textures. A maximum likelihood classifier was used to classify the different image combinations into thematic maps. This research indicated that data fusion based on HRG multispectral and panchromatic data slightly improved vegetation classification accuracies: a 3.1 to 4.6 percent increase in the kappa coefficient compared with the classification results based on original HRG or TM multispectral images. A combination of HRG spectral signatures and two textural images improved the kappa coefficient by 6.3 percent compared with pure HRG multispectral images. The textural images based on entropy or second-moment texture measures with a window size of 9 pixels × 9 pixels played an important role in improving vegetation classification accuracy. Overall, optical remote-sensing data are still insufficient for accurate vegetation classifications in the Amazon basin. PMID:19789716
Quality evaluation of pansharpened hyperspectral images generated using multispectral images
NASA Astrophysics Data System (ADS)
Matsuoka, Masayuki; Yoshioka, Hiroki
2012-11-01
Hyperspectral remote sensing can provide a smooth spectral curve of a target by using a set of higher-spectral-resolution detectors. The spatial resolution of hyperspectral images, however, is generally much lower than that of multispectral images due to the lower energy of incident radiation. Pansharpening is an image-fusion technique that generates higher-spatial-resolution multispectral images by combining lower-resolution multispectral images with higher-resolution panchromatic images. In this study, higher-resolution hyperspectral images were generated by pansharpening simulated lower-resolution hyperspectral data with higher-resolution multispectral data. The spectral and spatial qualities of the pansharpened images were then assessed in relation to the spectral bands of the multispectral images. Airborne AVIRIS hyperspectral data were used in this study and pansharpened using six methods. Quantitative evaluation of the pansharpened images was carried out using two frequently used indices: ERGAS and the Q index.
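For reference, global versions of the two indices can be computed as sketched below; the Q index is usually evaluated over sliding windows and averaged, so only the simpler global form is shown here.

```python
import numpy as np

def ergas(reference, fused, ratio):
    """ERGAS for multiband images; `ratio` is the spatial resolution ratio (high/low)."""
    terms = []
    for k in range(reference.shape[0]):
        rmse = np.sqrt(np.mean((reference[k] - fused[k]) ** 2))
        terms.append((rmse / reference[k].mean()) ** 2)
    return 100.0 * ratio * np.sqrt(np.mean(terms))

def q_index(x, y):
    """Universal image quality index (Wang-Bovik Q) for one band, computed globally."""
    x, y = x.ravel(), y.ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

rng = np.random.default_rng(11)
ref = rng.random((4, 64, 64)) + 0.5
fus = ref + 0.05 * rng.standard_normal(ref.shape)
print("ERGAS:", ergas(ref, fus, ratio=1 / 4))
print("Q (band 0):", q_index(ref[0], fus[0]))
```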
Online quantitative analysis of multispectral images of human body tissues
NASA Astrophysics Data System (ADS)
Lisenko, S. A.
2013-08-01
A method is developed for online monitoring of structural and morphological parameters of biological tissues (haemoglobin concentration, degree of blood oxygenation, average diameter of capillaries and the parameter characterising the average size of tissue scatterers), which involves multispectral tissue imaging, image normalisation to one of its spectral layers and determination of unknown parameters based on their stable regression relation with the spectral characteristics of the normalised image. Regression is obtained by simulating numerically the diffuse reflectance spectrum of the tissue by the Monte Carlo method at a wide variation of model parameters. The correctness of the model calculations is confirmed by the good agreement with the experimental data. The error of the method is estimated under conditions of general variability of structural and morphological parameters of the tissue. The method developed is compared with the traditional methods of interpretation of multispectral images of biological tissues, based on the solution of the inverse problem for each pixel of the image in the approximation of different analytical models.
Black, Robert W.; Haggland, Alan; Crosby, Greg
2003-01-01
Instream hydraulic and riparian habitat conditions and stream temperatures were characterized for selected stream segments in the Upper White River Basin, Washington. An aerial multispectral imaging system used digital cameras to photograph the stream segments across multiple wavelengths to characterize fish habitat and temperature conditions. All imageries were georeferenced. Fish habitat features were photographed at a resolution of 0.5 meter and temperature imageries were photographed at a 1.0-meter resolution. The digital multispectral imageries were classified using commercially available software. Aerial photographs were taken on September 21, 1999. Field habitat data were collected from August 23 to October 12, 1999, to evaluate the measurement accuracy and effectiveness of the multispectral imaging in determining the extent of the instream habitat variables. Fish habitat types assessed by this method were the abundance of instream hydraulic features such as pool and riffle habitats, turbulent and non-turbulent habitats, riparian composition, the abundance of large woody debris in the stream and riparian zone, and stream temperatures. Factors such as the abundance of instream woody debris, the location and frequency of pools, and stream temperatures generally are known to have a significant impact on salmon. Instream woody debris creates the habitat complexity necessary to maintain a diverse and healthy salmon population. The abundance of pools is indicative of a stream's ability to support fish and other aquatic organisms. Changes in water temperature can affect aquatic organisms by altering metabolic rates and oxygen requirements, altering their sensitivity to toxic materials and affecting their ability to avoid predators. The specific objectives of this project were to evaluate the use of an aerial multispectral imaging system to accurately identify instream hydraulic features and surface-water temperatures in the Upper White River Basin, to use the multispectral system to help establish baseline instream/riparian habitat conditions in the study area, and to qualitatively assess the imaging system for possible use in other Puget Sound rivers. For the most part, all multispectral imagery-based estimates of total instream riffle and pool area were less than field measurements. The imagery-based estimates for riffle habitat area ranged from 35.5 to 83.3 percent less than field measurements. Pool habitat estimates ranged from 139.3 percent greater than field measurements to 94.0 percent less than field measurements. Multispectral imagery-based estimates of turbulent habitat conditions ranged from 9.3 percent greater than field measurements to 81.6 percent less than field measurements. Multispectral imagery-based estimates of non-turbulent habitat conditions ranged from 27.7 to 74.1 percent less than field measurements. The absolute average percentage of difference between field and imagery-based habitat type areas was less for the turbulent and non-turbulent habitat type categories than for pools and riffles. The estimate of woody debris by multispectral imaging was substantially different than field measurements; percentage of differences ranged from +373.1 to -100 percent. 
Although the total area of riffles, pools, and turbulent and non-turbulent habitat types measured in the field were all substantially higher than those estimated from the multispectral imagery, the percentage of composition of each habitat type was not substantially different between the imagery-based estimates and field measurements.
Quality assessment of butter cookies applying multispectral imaging
Andresen, Mette S; Dissing, Bjørn S; Løje, Hanne
2013-01-01
A method for characterization of butter cookie quality by assessing the surface browning and water content using multispectral images is presented. Based on evaluations of the browning of butter cookies, cookies were manually divided into groups. From this categorization, reference values were calculated for a statistical prediction model correlating multispectral images with a browning score. The browning score is calculated as a function of oven temperature and baking time. It is presented as a quadratic response surface. The investigated process window was the intervals 4–16 min and 160–200°C in a forced convection electrically heated oven. In addition to the browning score, a model for predicting the average water content based on the same images is presented. This shows how multispectral images of butter cookies may be used for the assessment of different quality parameters. Statistical analysis showed that the most significant wavelengths for browning predictions were in the interval 400–700 nm and the wavelengths significant for water prediction were primarily located in the near-infrared spectrum. The water prediction model was found to correctly estimate the average water content with an absolute error of 0.22%. From the images it was also possible to follow the browning and drying propagation from the cookie edge toward the center. PMID:24804036
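Fitting the quadratic response surface for the browning score can be sketched with ordinary least squares; the temperature/time/score data below are synthetic placeholders, not the study's measurements.

```python
import numpy as np

# Sketch: fit score ~ b0 + b1*T + b2*t + b3*T^2 + b4*t^2 + b5*T*t to
# (temperature, time, browning score) observations.
rng = np.random.default_rng(12)
T = rng.uniform(160, 200, 40)          # oven temperature [deg C]
t = rng.uniform(4, 16, 40)             # baking time [min]
score = 0.01 * (T - 160) + 0.3 * t + 0.02 * t ** 2 + rng.normal(0, 0.2, 40)  # synthetic scores

X = np.column_stack([np.ones_like(T), T, t, T ** 2, t ** 2, T * t])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)

def predict(temp, time):
    """Evaluate the fitted quadratic response surface at one (temperature, time) point."""
    return np.dot(coef, [1.0, temp, time, temp ** 2, time ** 2, temp * time])

print("predicted browning score at 180 C, 10 min:", predict(180, 10))
```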
Multi-spectral imaging with infrared sensitive organic light emitting diode
Kim, Do Young; Lai, Tzung-Han; Lee, Jae Woong; Manders, Jesse R.; So, Franky
2014-01-01
Commercially available near-infrared (IR) imagers are fabricated by integrating expensive epitaxial grown III-V compound semiconductor sensors with Si-based readout integrated circuits (ROIC) by indium bump bonding which significantly increases the fabrication costs of these image sensors. Furthermore, these typical III-V compound semiconductors are not sensitive to the visible region and thus cannot be used for multi-spectral (visible to near-IR) sensing. Here, a low cost infrared (IR) imaging camera is demonstrated with a commercially available digital single-lens reflex (DSLR) camera and an IR sensitive organic light emitting diode (IR-OLED). With an IR-OLED, IR images at a wavelength of 1.2 µm are directly converted to visible images which are then recorded in a Si-CMOS DSLR camera. This multi-spectral imaging system is capable of capturing images at wavelengths in the near-infrared as well as visible regions. PMID:25091589
Multi-spectral imaging with infrared sensitive organic light emitting diode
NASA Astrophysics Data System (ADS)
Kim, Do Young; Lai, Tzung-Han; Lee, Jae Woong; Manders, Jesse R.; So, Franky
2014-08-01
Commercially available near-infrared (IR) imagers are fabricated by integrating expensive epitaxial grown III-V compound semiconductor sensors with Si-based readout integrated circuits (ROIC) by indium bump bonding which significantly increases the fabrication costs of these image sensors. Furthermore, these typical III-V compound semiconductors are not sensitive to the visible region and thus cannot be used for multi-spectral (visible to near-IR) sensing. Here, a low cost infrared (IR) imaging camera is demonstrated with a commercially available digital single-lens reflex (DSLR) camera and an IR sensitive organic light emitting diode (IR-OLED). With an IR-OLED, IR images at a wavelength of 1.2 µm are directly converted to visible images which are then recorded in a Si-CMOS DSLR camera. This multi-spectral imaging system is capable of capturing images at wavelengths in the near-infrared as well as visible regions.
On-board multispectral classification study
NASA Technical Reports Server (NTRS)
Ewalt, D.
1979-01-01
The factors relating to onboard multispectral classification were investigated. The functions implemented in ground-based processing systems for current Earth observation sensors were reviewed. The Multispectral Scanner, Thematic Mapper, Return Beam Vidicon, and Heat Capacity Mapper were studied. The concept of classification was reviewed and extended from the ground-based image processing functions to an onboard system capable of multispectral classification. Eight different onboard configurations, each with varying amounts of ground-spacecraft interaction, were evaluated. Each configuration was evaluated in terms of turnaround time, onboard processing and storage requirements, geometric and classification accuracy, onboard complexity, and ancillary data required from the ground.
Kim, Min-Gab; Kim, Jin-Yong
2018-05-01
In this paper, we introduce a method to overcome the limitations of thickness measurement for a micro-patterned thin film. A spectroscopic imaging reflectometer system consisting of an acousto-optic tunable filter, a charge-coupled-device camera, and a high-magnification objective lens was proposed, and a stack of multispectral images was generated. To secure improved accuracy and lateral resolution in the reconstruction of the two-dimensional thin-film thickness, image restoration based on an iterative deconvolution algorithm was applied to compensate for image degradation caused by blurring, prior to the analysis of the spectral reflectance profiles from each pixel of the multispectral images.
Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.
2010-01-01
Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475
Visual enhancement of unmixed multispectral imagery using adaptive smoothing
Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.
2004-01-01
Adaptive smoothing (AS) has been previously proposed as a method to smooth uniform regions of an image, retain contrast edges, and enhance edge boundaries. The method is an implementation of the anisotropic diffusion process which results in a gray scale image. This paper discusses modifications to the AS method for application to multi-band data which results in a color segmented image. The process was used to visually enhance the three most distinct abundance fraction images produced by the Lagrange constraint neural network learning-based unmixing of Landsat 7 Enhanced Thematic Mapper Plus multispectral sensor data. A mutual information-based method was applied to select the three most distinct fraction images for subsequent visualization as a red, green, and blue composite. A reported image restoration technique (partial restoration) was applied to the multispectral data to reduce unmixing error, although evaluation of the performance of this technique was beyond the scope of this paper. The modified smoothing process resulted in a color segmented image with homogeneous regions separated by sharpened, coregistered multiband edges. There was improved class separation with the segmented image, which has importance to subsequent operations involving data classification.
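A per-band Perona-Malik-style diffusion step, which the adaptive smoothing method builds on, can be sketched as follows; the conductance function and parameters are generic choices rather than the exact AS formulation used for the multi-band fraction images.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, step=0.2):
    """Perona-Malik style diffusion: smooth uniform regions, preserve contrast edges."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping conductance: small where gradients are strong
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += step * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

rng = np.random.default_rng(13)
fraction = rng.random((64, 64))            # stand-in for one abundance-fraction band
smoothed = anisotropic_diffusion(fraction)
```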
Optimized Multi-Spectral Filter Array Based Imaging of Natural Scenes.
Li, Yuqi; Majumder, Aditi; Zhang, Hao; Gopi, M
2018-04-12
Multi-spectral imaging using a camera with more than three channels is an efficient method to acquire and reconstruct spectral data and is used extensively in tasks like object recognition, relighted rendering, and color constancy. Recently developed methods are used to only guide content-dependent filter selection where the set of spectral reflectances to be recovered are known a priori. We present the first content-independent spectral imaging pipeline that allows optimal selection of multiple channels. We also present algorithms for optimal placement of the channels in the color filter array yielding an efficient demosaicing order resulting in accurate spectral recovery of natural reflectance functions. These reflectance functions have the property that their power spectrum statistically exhibits a power-law behavior. Using this property, we propose power-law based error descriptors that are minimized to optimize the imaging pipeline. We extensively verify our models and optimizations using large sets of commercially available wide-band filters to demonstrate the greater accuracy and efficiency of our multi-spectral imaging pipeline over existing methods.
Optimized Multi-Spectral Filter Array Based Imaging of Natural Scenes
Li, Yuqi; Majumder, Aditi; Zhang, Hao; Gopi, M.
2018-01-01
Multi-spectral imaging using a camera with more than three channels is an efficient method to acquire and reconstruct spectral data and is used extensively in tasks like object recognition, relighted rendering, and color constancy. Recently developed methods are used to only guide content-dependent filter selection where the set of spectral reflectances to be recovered are known a priori. We present the first content-independent spectral imaging pipeline that allows optimal selection of multiple channels. We also present algorithms for optimal placement of the channels in the color filter array yielding an efficient demosaicing order resulting in accurate spectral recovery of natural reflectance functions. These reflectance functions have the property that their power spectrum statistically exhibits a power-law behavior. Using this property, we propose power-law based error descriptors that are minimized to optimize the imaging pipeline. We extensively verify our models and optimizations using large sets of commercially available wide-band filters to demonstrate the greater accuracy and efficiency of our multi-spectral imaging pipeline over existing methods. PMID:29649114
Multispectral image enhancement for H&E stained pathological tissue specimens
NASA Astrophysics Data System (ADS)
Bautista, Pinky A.; Abe, Tokiya; Yamaguchi, Masahiro; Ohyama, Nagaaki; Yagi, Yukako
2008-03-01
The presence of a liver disease such as cirrhosis can be determined by examining the proliferation of collagen fiber in a tissue slide stained with a special stain such as Masson's trichrome (MT) stain. Collagen fiber and smooth muscle, which are both stained the same in an H&E-stained slide, are stained blue and pink, respectively, in an MT-stained slide. In this paper we show that with multispectral imaging the difference between collagen fiber and smooth muscle can be visualized even from an H&E-stained image. In the method, M Karhunen-Loève (KL) bases are derived using the spectral data of those H&E-stained tissue components that can be easily differentiated from each other (i.e., nucleus, cytoplasm, red blood cells, etc.), and, based on the spectral residual error of fiber, weighting factors are determined to enhance spectral features at certain wavelengths. Results of our experiment demonstrate the capability of multispectral imaging and its advantage over conventional RGB imaging systems for delineating tissue structures with subtle colorimetric differences.
NASA Astrophysics Data System (ADS)
Volkov, Boris; Mathews, Marlon S.; Abookasis, David
2015-03-01
Multispectral imaging has received significant attention over the last decade as it concurrently integrates spectroscopy, imaging, and tomographic analysis to acquire both spatial and spectral information from biological tissue. In the present study, a multispectral setup based on projection of structured illumination at several near-infrared wavelengths and at different spatial frequencies is applied to quantitatively assess brain function before, during, and after the onset of traumatic brain injury in an intact mouse brain (n=5). For the production of head injury, we used the weight-drop method, in which a cylindrical metallic rod falling along a metal tube strikes the mouse's head. Structured light was projected onto the scalp surface, and diffusely reflected light was recorded by a CCD camera positioned perpendicular to the mouse head. Following data analysis, we were able to concurrently show a series of hemodynamic and morphologic changes over time, including higher deoxyhemoglobin, reduction in oxygen saturation, cell swelling, etc., in comparison with baseline measurements. Overall, the results demonstrate the capability of multispectral imaging based on structured illumination to detect and map brain tissue optical and physiological properties following brain injury in a simple, noninvasive, and noncontact manner.
Gimbaled multispectral imaging system and method
Brown, Kevin H.; Crollett, Seferino; Henson, Tammy D.; Napier, Matthew; Stromberg, Peter G.
2016-01-26
A gimbaled multispectral imaging system and method is described herein. In a general embodiment, the gimbaled multispectral imaging system has a cross support that defines a first gimbal axis and a second gimbal axis, wherein the cross support is rotatable about the first gimbal axis. The gimbaled multispectral imaging system comprises a telescope that is fixed to an upper end of the cross support, such that rotation of the cross support about the first gimbal axis alters the tilt of the telescope. The gimbaled multispectral imaging system includes optics that facilitate on-gimbal detection of visible light and off-gimbal detection of infrared light.
Núñez, Jorge I; Farmer, Jack D; Sellar, R Glenn; Swayze, Gregg A; Blaney, Diana L
2014-02-01
Future astrobiological missions to Mars are likely to emphasize the use of rovers with in situ petrologic capabilities for selecting the best samples at a site for in situ analysis with onboard lab instruments or for caching for potential return to Earth. Such observations are central to an understanding of the potential for past habitable conditions at a site and for identifying samples most likely to harbor fossil biosignatures. The Multispectral Microscopic Imager (MMI) provides multispectral reflectance images of geological samples at the microscale, where each image pixel is composed of a visible/shortwave infrared spectrum ranging from 0.46 to 1.73 μm. This spectral range enables the discrimination of a wide variety of rock-forming minerals, especially Fe-bearing phases, and the detection of hydrated minerals. The MMI advances beyond the capabilities of current microimagers on Mars by extending the spectral range into the infrared and increasing the number of spectral bands. The design employs multispectral light-emitting diodes and an uncooled indium gallium arsenide focal plane array to achieve a very low mass and high reliability. To better understand and demonstrate the capabilities of the MMI for future surface missions to Mars, we analyzed samples from Mars-relevant analog environments with the MMI. Results indicate that the MMI images faithfully resolve the fine-scale microtextural features of samples and provide important information to help constrain mineral composition. The use of spectral endmember mapping reveals the distribution of Fe-bearing minerals (including silicates and oxides) with high fidelity, along with the presence of hydrated minerals. MMI-based petrogenetic interpretations compare favorably with laboratory-based analyses, revealing the value of the MMI for future in situ rover-mediated astrobiological exploration of Mars. Mars-Microscopic imager-Multispectral imaging-Spectroscopy-Habitability-Arm instrument.
Classification of human carcinoma cells using multispectral imagery
NASA Astrophysics Data System (ADS)
Çınar, Umut; Y. Çetin, Yasemin; Çetin-Atalay, Rengül; Çetin, Enis
2016-03-01
In this paper, we present a technique for automatically classifying human carcinoma cell images using textural features. An image dataset containing microscopy biopsy images from different patients for 14 distinct cancer cell line types is studied. The images are captured using an RGB camera attached to an inverted microscopy device. Texture-based Gabor features are extracted from the multispectral input images. An SVM classifier is used to generate a descriptive model for the purpose of cell line classification. The experimental results show satisfactory performance, and the proposed method is versatile for various microscopy magnification options.
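A rough sketch of a Gabor-texture-plus-SVM pipeline of the kind described above, using scikit-image and scikit-learn; the filter frequencies, orientations, and mean/variance pooling are illustrative assumptions rather than the authors' settings.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_features(image, frequencies=(0.1, 0.2, 0.4),
                   thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Mean and variance of Gabor filter response magnitudes over a grayscale image."""
    feats = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(image, frequency=f, theta=t)
            mag = np.hypot(real, imag)
            feats.extend([mag.mean(), mag.var()])
    return np.array(feats)

# Hypothetical usage with lists of training/test cell images and integer labels:
# X_train = np.stack([gabor_features(img) for img in train_images])
# clf = SVC(kernel="rbf").fit(X_train, train_labels)
# predictions = clf.predict(np.stack([gabor_features(img) for img in test_images]))
```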
Target Detection over the Diurnal Cycle Using a Multispectral Infrared Sensor.
Zhao, Huijie; Ji, Zheng; Li, Na; Gu, Jianrong; Li, Yansong
2016-12-29
When detecting a target over the diurnal cycle, a conventional infrared thermal sensor might lose the target due to the thermal crossover, which can happen at any time throughout the day when the infrared image contrast between target and background in a scene becomes indistinguishable due to temperature variation. In this paper, the benefits of using a multispectral infrared sensor over the diurnal cycle are shown. Firstly, a brief theoretical analysis is presented of how the thermal crossover influences a conventional thermal sensor, the conditions under which the thermal crossover happens, and why mid-infrared (3~5 μm) multispectral technology is effective. Secondly, the prototype design is described, along with how the multispectral technology is employed to help solve the thermal crossover detection problem. Thirdly, several targets are set up outdoors and imaged in a field experiment over a 24-h period. The experimental results show that the multispectral infrared imaging system can enhance the contrast of the detected images and effectively overcome the failure of a conventional infrared sensor during the diurnal cycle, which is of great significance for infrared surveillance applications.
Target Detection over the Diurnal Cycle Using a Multispectral Infrared Sensor
Zhao, Huijie; Ji, Zheng; Li, Na; Gu, Jianrong; Li, Yansong
2016-01-01
When detecting a target over the diurnal cycle, a conventional infrared thermal sensor might lose the target due to the thermal crossover, which can happen at any time throughout the day when the infrared image contrast between target and background in a scene becomes indistinguishable due to temperature variation. In this paper, the benefits of using a multispectral infrared sensor over the diurnal cycle are shown. Firstly, a brief theoretical analysis is presented of how the thermal crossover influences a conventional thermal sensor, the conditions under which the thermal crossover happens, and why mid-infrared (3~5 μm) multispectral technology is effective. Secondly, the prototype design is described, along with how the multispectral technology is employed to help solve the thermal crossover detection problem. Thirdly, several targets are set up outdoors and imaged in a field experiment over a 24-h period. The experimental results show that the multispectral infrared imaging system can enhance the contrast of the detected images and effectively overcome the failure of a conventional infrared sensor during the diurnal cycle, which is of great significance for infrared surveillance applications. PMID:28036073
Liu, Changhong; Liu, Wei; Lu, Xuzhong; Ma, Fei; Chen, Wei; Yang, Jianbo; Zheng, Lei
2014-01-01
Multispectral imaging with 19 wavelengths in the range of 405-970 nm has been evaluated for nondestructive determination of firmness, total soluble solids (TSS) content, and ripeness stage in strawberry fruit. Several analysis approaches, including partial least squares (PLS), support vector machine (SVM), and back propagation neural network (BPNN), were applied to develop models for predicting the firmness and TSS of intact strawberry fruit. Compared with PLS and SVM, BPNN considerably improved the performance of multispectral imaging for predicting firmness and total soluble solids content, with correlation coefficients (r) of 0.94 and 0.83, SEP of 0.375 and 0.573, and bias of 0.035 and 0.056, respectively. Subsequently, the ability of multispectral imaging technology to classify fruit by ripeness stage was tested using SVM and principal component analysis-back propagation neural network (PCA-BPNN) models; a classification accuracy of 100% was achieved using the SVM model. Moreover, the results of all these models demonstrated that the VIS parts of the spectra were the main contributors to the determination of firmness, the estimation of TSS content, and the classification of ripeness stage in strawberry fruit. These results suggest that multispectral imaging, together with a suitable analysis model, is a promising technology for rapid estimation of quality attributes and classification of ripeness stage in strawberry fruit.
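A minimal sketch of the regression step (the PLS branch only, via scikit-learn), with the r, SEP, and bias metrics computed as reported in the entry above; the placeholder data, loader name, and number of latent components are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# X: (n_fruit, 19) mean reflectance per wavelength band; y: (n_fruit,) firmness or TSS.
# X, y = load_strawberry_spectra()                      # hypothetical loader
X = np.random.rand(120, 19); y = np.random.rand(120)    # placeholder data only

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=8).fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()

r = np.corrcoef(y_te, y_hat)[0, 1]            # correlation coefficient
sep = np.std(y_te - y_hat, ddof=1)            # standard error of prediction
bias = np.mean(y_hat - y_te)
print(f"r={r:.3f}  SEP={sep:.3f}  bias={bias:.3f}")
```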
Exploiting physical constraints for multi-spectral exo-planet detection
NASA Astrophysics Data System (ADS)
Thiébaut, Éric; Devaney, Nicholas; Langlois, Maud; Hanley, Kenneth
2016-07-01
We derive a physical model of the on-axis PSF for a high-contrast imaging system such as GPI or SPHERE. This model is based on a multi-spectral Taylor series expansion of the diffraction pattern and predicts that the speckles should be a combination of spatial modes with deterministic chromatic magnification and weighting. We propose to remove most of the residuals by fitting this model to a set of images at multiple wavelengths and times. On simulated data, we demonstrate that our approach achieves very good speckle suppression without additional heuristic parameters. The residual speckles set the most serious limitation on the detection of exo-planets in high-contrast coronagraphic images provided by instruments such as SPHERE at the VLT, GPI at Gemini, or SCExAO at Subaru. A number of post-processing methods have been proposed to remove as much as possible of the residual speckles while preserving the signal from the planets. These methods exploit the fact that the speckles and the planetary signal have different temporal and spectral behaviors. Some methods, like LOCI, are based on angular differential imaging (ADI), spectral differential imaging (SDI), or a combination of ADI and SDI. Instead of working on image differences, we propose to tackle exo-planet detection as an inverse problem in which a model of the residual speckles is fit to the set of multi-spectral images and, possibly, multiple exposures. In order to reduce the number of degrees of freedom, we impose specific constraints on the spatio-spectral distribution of stellar speckles. These constraints are deduced from a multi-spectral Taylor series expansion of the diffraction pattern for an on-axis source, which implies that the speckles are a combination of spatial modes with deterministic chromatic magnification and weighting. Using simulated data, the efficiency of speckle removal by fitting the proposed multi-spectral model is compared to the result of using an approximation based on the singular value decomposition of the rescaled images. We show how the difficult problem of fitting a bilinear model can be solved in practice. The results are promising for further developments, including application to real data and joint planet detection in multi-variate data (multi-spectral and multiple-exposure images).
Spectrum slicer for snapshot spectral imaging
NASA Astrophysics Data System (ADS)
Tamamitsu, Miu; Kitagawa, Yutaro; Nakagawa, Keiichi; Horisaki, Ryoichi; Oishi, Yu; Morita, Shin-ya; Yamagata, Yutaka; Motohara, Kentaro; Goda, Keisuke
2015-12-01
We propose and demonstrate an optical component that overcomes critical limitations in our previously demonstrated high-speed multispectral videography-a method in which an array of periscopes placed in a prism-based spectral shaper is used to achieve snapshot multispectral imaging with the frame rate only limited by that of an image-recording sensor. The demonstrated optical component consists of a slicing mirror incorporated into a 4f-relaying lens system that we refer to as a spectrum slicer (SS). With its simple design, we can easily increase the number of spectral channels without adding fabrication complexity while preserving the capability of high-speed multispectral videography. We present a theoretical framework for the SS and its experimental utility to spectral imaging by showing real-time monitoring of a dynamic colorful event through five different visible windows.
Using spectral imaging for the analysis of abnormalities for colorectal cancer: When is it helpful?
Awan, Ruqayya; Al-Maadeed, Somaya; Al-Saady, Rafif
2018-01-01
The spectral imaging technique has been shown to provide more discriminative information than RGB images and has been proposed for a range of problems. There are many studies demonstrating its potential for the analysis of histopathology images for abnormality detection, but there have been discrepancies among previous studies as well. Many multispectral-based methods have been proposed for histopathology images, but the significance of using the whole multispectral cube versus a subset of bands or a single band is still arguable. We performed a comprehensive analysis using individual bands and different subsets of bands to determine the effectiveness of spectral information for determining the anomaly in colorectal images. Our multispectral colorectal dataset consists of four classes, each represented by infra-red spectrum bands in addition to the visual spectrum bands. We performed our analysis of spectral imaging by stratifying the abnormalities using both spatial and spectral information. For our experiments, we used a combination of texture descriptors with an ensemble classification approach that performed best on our dataset. We applied our method to another dataset and obtained results comparable with those of the state-of-the-art method and a convolutional neural network based method. Moreover, we explored the relationship between the number of bands and the problem complexity and found that a higher number of bands is required for a complex task to achieve improved performance. Our results demonstrate a synergy between the infra-red and visual spectra, improving the classification accuracy (by 6%) when the infra-red representation is incorporated. We also highlight the importance of how the dataset should be divided into training and testing sets for evaluating histopathology image-based approaches, which has not been considered in previous studies on multispectral histopathology images.
Using spectral imaging for the analysis of abnormalities for colorectal cancer: When is it helpful?
Awan, Ruqayya; Al-Maadeed, Somaya; Al-Saady, Rafif
2018-01-01
The spectral imaging technique has been shown to provide more discriminative information than RGB images and has been proposed for a range of problems. There are many studies demonstrating its potential for the analysis of histopathology images for abnormality detection, but there have been discrepancies among previous studies as well. Many multispectral-based methods have been proposed for histopathology images, but the significance of using the whole multispectral cube versus a subset of bands or a single band is still arguable. We performed a comprehensive analysis using individual bands and different subsets of bands to determine the effectiveness of spectral information for determining the anomaly in colorectal images. Our multispectral colorectal dataset consists of four classes, each represented by infra-red spectrum bands in addition to the visual spectrum bands. We performed our analysis of spectral imaging by stratifying the abnormalities using both spatial and spectral information. For our experiments, we used a combination of texture descriptors with an ensemble classification approach that performed best on our dataset. We applied our method to another dataset and obtained results comparable with those of the state-of-the-art method and a convolutional neural network based method. Moreover, we explored the relationship between the number of bands and the problem complexity and found that a higher number of bands is required for a complex task to achieve improved performance. Our results demonstrate a synergy between the infra-red and visual spectra, improving the classification accuracy (by 6%) when the infra-red representation is incorporated. We also highlight the importance of how the dataset should be divided into training and testing sets for evaluating histopathology image-based approaches, which has not been considered in previous studies on multispectral histopathology images. PMID:29874262
Novel instrumentation of multispectral imaging technology for detecting tissue abnormity
NASA Astrophysics Data System (ADS)
Yi, Dingrong; Kong, Linghua
2012-10-01
Multispectral imaging is becoming a powerful tool in a wide range of biological and clinical studies by adding spectral, spatial, and temporal dimensions to visualize tissue abnormality and the underlying biological processes. A conventional spectral imaging system includes two physically separated major components: a band-pass selection device (such as a liquid crystal tunable filter or a diffraction grating) and a scientific-grade monochromatic camera, and is expensive and bulky. Recently, a micro-arrayed narrow-band optical mosaic filter was invented and successfully fabricated to reduce the size and cost of multispectral imaging devices in order to meet the clinical requirements for medical diagnostic imaging applications. However, the challenging issue of how to integrate and place the micro filter mosaic chip onto the target focal plane, i.e., the imaging sensor, of an off-the-shelf CMOS/CCD camera has not been reported anywhere. This paper presents the methods and results of integrating such a miniaturized filter with off-the-shelf CMOS imaging sensors to produce handheld real-time multispectral imaging devices for the application of early-stage pressure ulcer (ESPU) detection. Unlike conventional multispectral imaging devices, which are bulky and expensive, the resulting handheld real-time multispectral ESPU detector can produce multiple images at different center wavelengths with a single shot, thereby eliminating the image registration procedure required by traditional multispectral imaging technologies.
Enhancement of multispectral thermal infrared images - Decorrelation contrast stretching
NASA Technical Reports Server (NTRS)
Gillespie, Alan R.
1992-01-01
Decorrelation contrast stretching is an effective method for displaying information from multispectral thermal infrared (TIR) images. The technique involves transformation of the data to principle components ('decorrelation'), independent contrast 'stretching' of data from the new 'decorrelated' image bands, and retransformation of the stretched data back to the approximate original axes, based on the inverse of the principle component rotation. The enhancement is robust in that colors of the same scene components are similar in enhanced images of similar scenes, or the same scene imaged at different times. Decorrelation contrast stretching is reviewed in the context of other enhancements applied to TIR images.
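The decorrelation stretch can be summarized in a few lines of numpy: rotate the bands to principal components, stretch each component to a common spread, and rotate back. This is a generic sketch rather than the exact processing chain referenced above; the target standard deviation is an assumed parameter.

```python
import numpy as np

def decorrelation_stretch(cube, target_sigma=50.0):
    """Decorrelation contrast stretch of a multiband image with shape (H, W, B)."""
    H, W, B = cube.shape
    flat = cube.reshape(-1, B).astype(float)
    mean = flat.mean(axis=0)
    cov = np.cov(flat - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)              # principal component rotation
    # stretch: whiten each component, then give every component the same spread
    scale = target_sigma / np.sqrt(np.maximum(eigvals, 1e-12))
    transform = eigvecs @ np.diag(scale) @ eigvecs.T    # rotate, stretch, rotate back
    out = (flat - mean) @ transform + mean
    return out.reshape(H, W, B)
```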
Vision communications based on LED array and imaging sensor
NASA Astrophysics Data System (ADS)
Yoo, Jong-Ho; Jung, Sung-Yoon
2012-11-01
In this paper, we propose a brand new communication concept, called "vision communication", based on an LED array and an image sensor. This system consists of an LED array as a transmitter and a digital device that includes an image sensor, such as a CCD or CMOS sensor, as a receiver. In order to transmit data, the proposed communication scheme simultaneously uses digital image processing and an optical wireless communication scheme. Therefore, a cognitive communication scheme is possible with the help of recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each arranged LED can emit a multi-spectral optical signal such as visible, infrared, or ultraviolet light, an increase in data rate is possible similar to the WDM and MIMO techniques used in traditional optical and wireless communications. In addition, this multi-spectral capability also makes it possible to avoid optical noise in the communication environment. In our vision communication scheme, the data packet is composed of sync data and information data. The sync data is used to detect the transmitter area and calibrate the distorted image snapshots obtained by the image sensor. By matching the optical rate of the LED array to the frame rate (frames per second) of the image sensor, we can decode the information data included in each image snapshot based on image processing and optical wireless communication techniques. Through experiments on a practical test bed system, we confirm the feasibility of the proposed vision communication based on an LED array and an image sensor.
2016-10-10
AFRL-RX-WP-JA-2017-0189: Experimental Demonstration of Adaptive Infrared Multispectral Imaging Using Plasmonic Filter Array (interim report, March 2016 – 23 May 2016). The report describes an experimental demonstration of adaptive multispectral imaging using fabricated plasmonic spectral filter arrays and proposed target detection scenarios.
Multispectral imaging with vertical silicon nanowires
Park, Hyunsung; Crozier, Kenneth B.
2013-01-01
Multispectral imaging is a powerful tool that extends the capabilities of the human eye. However, multispectral imaging systems generally are expensive and bulky, and multiple exposures are needed. Here, we report the demonstration of a compact multispectral imaging system that uses vertical silicon nanowires to realize a filter array. Multiple filter functions covering visible to near-infrared (NIR) wavelengths are simultaneously defined in a single lithography step using a single material (silicon). Nanowires are then etched and embedded into polydimethylsiloxane (PDMS), thereby realizing a device with eight filter functions. By attaching it to a monochrome silicon image sensor, we successfully realize an all-silicon multispectral imaging system. We demonstrate visible and NIR imaging. We show that the latter is highly sensitive to vegetation and furthermore enables imaging through objects opaque to the eye. PMID:23955156
Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu
2018-03-02
Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.
Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu
2018-01-01
Multi-spectral photometric stereo can recover pixel-wise surface normal from a single RGB image. The difficulty lies in that the intensity in each channel is the tangle of illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using the deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data is expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normal of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods. PMID:29498703
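The classical core of multi-spectral photometric stereo (per-pixel normal recovery from one RGB frame under three known effective light directions) can be sketched as below; the CNN-based depth initialization and the iterative refinement from the entry above are not reproduced, and the light-direction matrix is an assumed, calibrated input.

```python
import numpy as np

def multispectral_normals(rgb, light_dirs):
    """Recover per-pixel surface normals from a single RGB image.

    rgb: (H, W, 3) channel intensities; light_dirs: (3, 3) matrix whose rows are
    the effective light directions associated with the R, G, B channels (assumed
    known from calibration). Albedo/camera-response entanglement is ignored here.
    """
    H, W, _ = rgb.shape
    intensities = rgb.reshape(-1, 3).T                 # (3, H*W) observations
    g = np.linalg.solve(light_dirs, intensities)       # solve L @ (albedo * n) = I
    norms = np.linalg.norm(g, axis=0, keepdims=True)
    normals = (g / np.maximum(norms, 1e-8)).T.reshape(H, W, 3)
    return normals, norms.reshape(H, W)                # unit normals, pseudo-albedo
```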
Implementation of Multispectral Image Classification on a Remote Adaptive Computer
NASA Technical Reports Server (NTRS)
Figueiredo, Marco A.; Gloster, Clay S.; Stephens, Mark; Graves, Corey A.; Nakkar, Mouna
1999-01-01
As the demand for higher performance computers for the processing of remote sensing science algorithms increases, the need to investigate new computing paradigms is justified. Field Programmable Gate Arrays enable the implementation of algorithms at the hardware gate level, leading to orders-of-magnitude performance increases over microprocessor-based systems. The automatic classification of spaceborne multispectral images is an example of a computation-intensive application that can benefit from implementation on an FPGA-based custom computing machine (adaptive or reconfigurable computer). A probabilistic neural network is used here to classify pixels of a multispectral LANDSAT-2 image. The implementation described utilizes Java client/server application programs to access the adaptive computer from a remote site. Results verify that a remote hardware version of the algorithm (implemented on an adaptive computer) is significantly faster than a local software version of the same algorithm implemented on a typical general-purpose computer.
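A software reference for the probabilistic neural network classifier mentioned above reduces to a Parzen-window density estimate per class; a minimal numpy sketch (the smoothing parameter and array shapes are assumptions) is:

```python
import numpy as np

def pnn_classify(train_X, train_y, test_X, sigma=0.1):
    """Probabilistic neural network: Gaussian kernel density per class, argmax wins.

    train_X: (N, B) band vectors of labeled pixels; train_y: (N,) integer classes;
    test_X: (M, B) pixels to classify. Returns predicted class labels of shape (M,).
    """
    classes = np.unique(train_y)
    scores = np.zeros((test_X.shape[0], classes.size))
    for j, c in enumerate(classes):
        ref = train_X[train_y == c]                              # exemplars of class c
        d2 = ((test_X[:, None, :] - ref[None, :, :]) ** 2).sum(axis=2)
        scores[:, j] = np.exp(-d2 / (2.0 * sigma ** 2)).mean(axis=1)
    return classes[np.argmax(scores, axis=1)]
```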
NASA Astrophysics Data System (ADS)
Yang, Xue; Hu, Yajia; Li, Gang; Lin, Ling
2018-02-01
This paper proposes an optimized lighting method that applies a shaped-function signal to increase the dynamic range of a light emitting diode (LED)-based multispectral imaging system. The optimized lighting method is based on the linear response zone of the analog-to-digital conversion (ADC) and the spectral response of the camera. Auxiliary light in a higher-sensitivity camera band is introduced to increase the number of A/D quantization levels that fall within the linear response zone of the ADC and to improve the signal-to-noise ratio. The active light is modulated by the shaped-function signal to improve the gray-scale resolution of the image, and the auxiliary light is modulated by a constant-intensity signal, which makes it easy to acquire the images under active-light irradiation. The least-squares method is employed to precisely extract the desired images. One wavelength in multispectral imaging based on LED illumination was taken as an example. It has been proven by experiments that the gray-scale resolution and the accuracy of information of the images acquired by the proposed method were both significantly improved. The optimized method opens up avenues for the hyperspectral imaging of biological tissue.
Yang, Xue; Hu, Yajia; Li, Gang; Lin, Ling
2018-02-01
This paper proposes an optimized lighting method that applies a shaped-function signal to increase the dynamic range of a light emitting diode (LED)-based multispectral imaging system. The optimized lighting method is based on the linear response zone of the analog-to-digital conversion (ADC) and the spectral response of the camera. Auxiliary light in a higher-sensitivity camera band is introduced to increase the number of A/D quantization levels that fall within the linear response zone of the ADC and to improve the signal-to-noise ratio. The active light is modulated by the shaped-function signal to improve the gray-scale resolution of the image, and the auxiliary light is modulated by a constant-intensity signal, which makes it easy to acquire the images under active-light irradiation. The least-squares method is employed to precisely extract the desired images. One wavelength in multispectral imaging based on LED illumination was taken as an example. It has been proven by experiments that the gray-scale resolution and the accuracy of information of the images acquired by the proposed method were both significantly improved. The optimized method opens up avenues for the hyperspectral imaging of biological tissue.
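One way to read the least-squares extraction step in the two entries above is as a per-pixel linear fit of the temporal intensity against the known shaped-function modulation (active light) plus a constant term (auxiliary light). The sketch below is a generic demodulation under that assumption, not the authors' exact algorithm.

```python
import numpy as np

def extract_components(frames, shape_signal):
    """Separate active-light and auxiliary-light images by least squares.

    frames: (T, H, W) stack acquired while the active LED follows shape_signal (T,)
    and the auxiliary LED stays constant. Models each pixel as I(t) = a*s(t) + b
    and returns the (a, b) images.
    """
    T, H, W = frames.shape
    design = np.column_stack([shape_signal, np.ones(T)])       # (T, 2) regressors
    obs = frames.reshape(T, -1)                                # (T, H*W) observations
    coeffs, *_ = np.linalg.lstsq(design, obs, rcond=None)      # (2, H*W) solution
    active = coeffs[0].reshape(H, W)      # image formed by the shaped active light
    auxiliary = coeffs[1].reshape(H, W)   # constant-intensity auxiliary component
    return active, auxiliary
```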
NASA Astrophysics Data System (ADS)
Behrooz, Ali; Vasquez, Kristine O.; Waterman, Peter; Meganck, Jeff; Peterson, Jeffrey D.; Miller, Peter; Kempner, Joshua
2017-02-01
Intraoperative resection of tumors currently relies upon the surgeon's ability to visually locate and palpate tumor nodules. Undetected residual malignant tissue often results in the need for additional treatment or surgical intervention. The Solaris platform is a multispectral open-air fluorescence imaging system designed for translational fluorescence-guided surgery. Solaris supports video-rate imaging in four fixed fluorescence channels ranging from visible to near infrared, and a multispectral channel equipped with a liquid crystal tunable filter (LCTF) for multispectral image acquisition (520-620 nm). Identification of tumor margins using reagents emitting in the visible spectrum (400-650 nm), such as fluorescein isothiocyanate (FITC), present challenges considering the presence of auto-fluorescence from tissue and food in the gastrointestinal (GI) tract. To overcome this, Solaris acquires LCTF-based multispectral images, and by applying an automated spectral unmixing algorithm to the data, separates reagent fluorescence from tissue and food auto-fluorescence. The unmixing algorithm uses vertex component analysis to automatically extract the primary pure spectra, and resolves the reagent fluorescent signal using non-negative least squares. For validation, intraoperative in vivo studies were carried out in tumor-bearing rodents injected with FITC-dextran reagent that is primarily residing in malignant tissue 24 hours post injection. In the absence of unmixing, fluorescence from tumors is not distinguishable from that of surrounding tissue. Upon spectral unmixing, the FITC-labeled malignant regions become well defined and detectable. The results of these studies substantiate the multispectral power of Solaris in resolving FITC-based agent signal in deep tumor masses, under ambient and surgical light, and enhancing the ability to surgically resect them.
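The unmixing step described above (pure spectra extracted first, then per-pixel non-negative least squares) can be sketched with scipy; the endmember extraction itself is omitted here and the endmember matrix is assumed to be given.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_cube(cube, endmembers):
    """Per-pixel non-negative least-squares spectral unmixing.

    cube: (H, W, B) multispectral stack (e.g., LCTF channels over 520-620 nm);
    endmembers: (B, K) pure spectra, e.g., reagent plus tissue and food
    autofluorescence (assumed already extracted, e.g., by vertex component analysis).
    Returns (H, W, K) abundance maps.
    """
    H, W, B = cube.shape
    K = endmembers.shape[1]
    abundances = np.zeros((H * W, K))
    for i, spectrum in enumerate(cube.reshape(-1, B)):
        abundances[i], _ = nnls(endmembers, spectrum)   # non-negative coefficients
    return abundances.reshape(H, W, K)
```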
Davis, Philip A.; Davis, Philip A.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the South Bamyan mineral district, which has areas with a spectral reflectance anomaly that require field investigation. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007, 2008),but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for South Bamyan) and the WGS84 datum. The final image mosaics for the South Bamyan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Multispectral multisensor image fusion using wavelet transforms
Lemeshewsky, George P.
1999-01-01
Fusion techniques can be applied to multispectral and higher spatial resolution panchromatic images to create a composite image that is easier to interpret than the individual images. Wavelet transform-based multisensor, multiresolution fusion (a type of band sharpening) was applied to Landsat thematic mapper (TM) multispectral and coregistered higher resolution SPOT panchromatic images. The objective was to obtain increased spatial resolution, false color composite products to support the interpretation of land cover types wherein the spectral characteristics of the imagery are preserved to provide the spectral clues needed for interpretation. Since the fusion process should not introduce artifacts, a shift invariant implementation of the discrete wavelet transform (SIDWT) was used. These results were compared with those using the shift variant, discrete wavelet transform (DWT). Overall, the process includes a hue, saturation, and value color space transform to minimize color changes, and a reported point-wise maximum selection rule to combine transform coefficients. The performance of fusion based on the SIDWT and DWT was evaluated with a simulated TM 30-m spatial resolution test image and a higher resolution reference. Simulated imagery was made by blurring higher resolution color-infrared photography with the TM sensors' point spread function. The SIDWT based technique produced imagery with fewer artifacts and lower error between fused images and the full resolution reference. Image examples with TM and SPOT 10-m panchromatic illustrate the reduction in artifacts due to the SIDWT based fusion.
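A toy version of the DWT fusion described above, combining two coregistered images with a point-wise maximum-magnitude coefficient rule via PyWavelets; the shift-invariant (SIDWT) variant and the HSV color-space round trip are omitted, so this only illustrates the coefficient-selection idea.

```python
import numpy as np
import pywt

def fuse_dwt(value_band, pan_band, wavelet="db2", levels=3):
    """Fuse two coregistered, same-size images by point-wise max of DWT coefficients."""
    c1 = pywt.wavedec2(value_band, wavelet, level=levels)
    c2 = pywt.wavedec2(pan_band, wavelet, level=levels)
    # approximation coefficients: keep the larger-magnitude value at each position
    fused = [np.where(np.abs(c1[0]) >= np.abs(c2[0]), c1[0], c2[0])]
    for d1, d2 in zip(c1[1:], c2[1:]):                 # (cH, cV, cD) detail triples
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(d1, d2)))
    return pywt.waverec2(fused, wavelet)
```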
Development of a multispectral imagery device devoted to weed detection
NASA Astrophysics Data System (ADS)
Vioix, Jean-Baptiste; Douzals, Jean-Paul; Truchetet, Frederic; Navar, Pierre
2003-04-01
Multispectral imagery is a large domain with a number of practical applications: thermography, quality control in industry, food science, agronomy, etc. The main interest is to obtain spectral information on objects whose reflectance signal can be associated with physical, chemical, and/or biological properties. Agronomic applications of multispectral imagery generally involve the acquisition of several images at visible and near-infrared wavelengths. This paper first presents different kinds of multispectral devices used for agronomic issues and secondly introduces an original multispectral design based on a single CCD. Third, early results obtained for weed detection are presented.
Terrain type recognition using ERTS-1 MSS images
NASA Technical Reports Server (NTRS)
Gramenopoulos, N.
1973-01-01
For the automatic recognition of earth resources from ERTS-1 digital tapes, both multispectral and spatial pattern recognition techniques are important. Recognition of terrain types is based on spatial signatures that become evident by processing small portions of an image through selected algorithms. An investigation of spatial signatures that are applicable to ERTS-1 MSS images is described. Artifacts in the spatial signatures seem to be related to the multispectral scanner. A method for suppressing such artifacts is presented. Finally, results of terrain type recognition for one ERTS-1 image are presented.
Djiongo Kenfack, Cedrigue Boris; Monga, Olivier; Mpong, Serge Moto; Ndoundam, René
2018-03-01
Within the last decade, several approaches using quaternion numbers to handle and model multiband images in a holistic manner were introduced. The quaternion Fourier transform can be efficiently used to model texture in multidimensional data such as color images. For practical application, multispectral satellite data appear as a primary source for measuring past trends and monitoring changes in forest carbon stocks. In this work, we propose a texture-color descriptor based on the quaternion Fourier transform to extract relevant information from multiband satellite images. We propose a new multiband image texture model extraction, called FOTO++, in order to address biomass estimation issues. The first stage consists in removing noise from the multispectral data while preserving the edges of canopies. Afterward, color texture descriptors are extracted thanks to a discrete form of the quaternion Fourier transform, and finally the support vector regression method is used to deduce biomass estimation from texture indices. Our texture features are modeled using a vector composed with the radial spectrum coming from the amplitude of the quaternion Fourier transform. We conduct several experiments in order to study the sensitivity of our model to acquisition parameters. We also assess its performance both on synthetic images and on real multispectral images of Cameroonian forest. The results show that our model is more robust to acquisition parameters than the classical Fourier Texture Ordination model (FOTO). Our scheme is also more accurate for aboveground biomass estimation. We stress that a similar methodology could be implemented using quaternion wavelets. These results highlight the potential of the quaternion-based approach to study multispectral satellite images.
Reproducible high-resolution multispectral image acquisition in dermatology
NASA Astrophysics Data System (ADS)
Duliu, Alexandru; Gardiazabal, José; Lasser, Tobias; Navab, Nassir
2015-07-01
Multispectral image acquisitions are increasingly popular in dermatology, due to their improved spectral resolution which enables better tissue discrimination. Most applications however focus on restricted regions of interest, imaging only small lesions. In this work we present and discuss an imaging framework for high-resolution multispectral imaging on large regions of interest.
Classification by Using Multispectral Point Cloud Data
NASA Astrophysics Data System (ADS)
Liao, C. T.; Huang, H. H.
2012-07-01
Remote sensing images are generally recorded in a two-dimensional format containing multispectral information. The semantic information is clearly visualized, so ground features can be easily recognized and classified via supervised or unsupervised classification methods. Nevertheless, the shortcomings of multispectral images are a strong dependence on light conditions, and the classification results lack three-dimensional semantic information. On the other hand, LiDAR has become a main technology for acquiring high-accuracy point cloud data. The advantages of LiDAR are a high data acquisition rate, independence from light conditions, and the ability to directly produce three-dimensional coordinates. However, compared with multispectral images, the disadvantage is a shortage of multispectral information, which remains a challenge in ground feature classification from massive point cloud data. Consequently, by combining the advantages of both LiDAR and multispectral images, point cloud data with three-dimensional coordinates and multispectral information can provide an integrated solution for point cloud classification. Therefore, this research acquires visible-light and near-infrared images via close-range photogrammetry and matches the images automatically through a free online service to generate a multispectral point cloud. A three-dimensional affine coordinate transformation is then used to compare the data increment. Finally, thresholds on height and color information are used for classification.
NASA Astrophysics Data System (ADS)
Corcel, Mathias; Devaux, Marie-Françoise; Guillon, Fabienne; Barron, Cécile
2017-06-01
Powders produced from plant materials are heterogeneous in relation to the native plant heterogeneity, and during grinding, dissociation often occurs at the tissue scale. The tissue composition of powdery samples can be modified through dry fractionation diagrams and impacts their end-use properties. While tissue identification is often performed on native plant structures, this characterization is not straightforward in destructured samples such as powders. Taking advantage of the autofluorescence properties of cell wall components, multispectral image acquisition is envisioned to identify the tissular origin of particles. Images were acquired on maize stem sections and on ground tissues isolated from the same stem by hand dissection. The variability in fluorescence intensity profiles was analysed using principal component analysis. The correspondence between fluorescence profiles and the different tissues observed in maize sections was assessed based on histology or known compositional heterogeneity. Similar variability was encountered in fluorescence profiles extracted from the powders, suggesting the potential to predict tissular origin based on this autofluorescence multispectral signal.
NASA Astrophysics Data System (ADS)
Zhao, Shaoshuai; Ni, Chen; Cao, Jing; Li, Zhengqiang; Chen, Xingfeng; Ma, Yan; Yang, Leiku; Hou, Weizhen; Qie, Lili; Ge, Bangyu; Liu, Li; Xing, Jin
2018-03-01
Remote sensing images are usually degraded by atmospheric components, especially aerosol particles. For quantitative remote sensing applications, radiative-transfer-model-based atmospheric correction is used to obtain the reflectance by decoupling the atmosphere and surface, which consumes a long computational time. Parallel computing is one solution for acceleration. A parallel strategy in which multiple CPUs work simultaneously is designed to perform atmospheric correction on a multispectral remote sensing image. The flow of the parallel framework and the main parallel body of the atmospheric correction are described. Then, a multispectral remote sensing image from the Chinese Gaofen-2 satellite is used to test the acceleration efficiency. As the number of CPUs increases from 1 to 8, the computational speed also increases, with a maximum speedup of 6.5. With 8 CPUs, atmospheric correction of the whole image takes 4 minutes.
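The multi-CPU strategy above can be imitated with Python's multiprocessing by splitting the image into row blocks and correcting the blocks in parallel; the correct_block function standing in for the radiative-transfer-based correction is a placeholder assumption.

```python
import numpy as np
from multiprocessing import Pool

def correct_block(block):
    """Placeholder for the per-pixel radiative-transfer atmospheric correction."""
    return block.astype(np.float32) * 1.05 - 0.01      # stand-in arithmetic only

def parallel_correction(image, n_cpu=8):
    """Split an (H, W, B) image into row blocks and correct them on n_cpu workers."""
    blocks = np.array_split(image, n_cpu, axis=0)
    with Pool(processes=n_cpu) as pool:
        corrected = pool.map(correct_block, blocks)
    return np.concatenate(corrected, axis=0)

if __name__ == "__main__":
    img = np.random.rand(2000, 2000, 4)                # synthetic multispectral scene
    out = parallel_correction(img, n_cpu=8)
```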
A novel method to detect shadows on multispectral images
NASA Astrophysics Data System (ADS)
Dağlayan Sevim, Hazan; Yardımcı Çetin, Yasemin; Özışık Başkurt, Didem
2016-10-01
Shadowing occurs when the direct light coming from a light source is obstructed by tall human-made structures, mountains, or clouds. Since shadow regions are illuminated only by scattered light, the true spectral properties of objects are not observed in such regions. Therefore, many object classification and change detection problems utilize shadow detection as a preprocessing step. Besides, shadows are useful for obtaining 3D information about objects, such as estimating the height of buildings. With the pervasiveness of remote sensing images, shadow detection is ever more important. This study aims to develop a shadow detection method for multispectral images based on a transformation of the C1C2C3 color space and the contribution of NIR bands. The proposed method is tested on WorldView-2 images covering Ankara, Turkey, at different times. The new index is used on these 8-band multispectral images with two NIR bands. The method is compared with methods in the literature.
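For orientation, the C1C2C3 transformation referred to above is commonly defined per pixel as C1 = arctan(R / max(G, B)), C2 = arctan(G / max(R, B)), and C3 = arctan(B / max(R, G)); shadow candidates are then pixels with a high C3 value and a low NIR response. The sketch below implements that generic formulation with assumed thresholds, not the specific index proposed in the paper.

```python
import numpy as np

def shadow_mask(red, green, blue, nir, c3_thresh=0.9, nir_thresh=0.15):
    """Candidate shadow mask from the C3 component and a NIR darkness test.

    red/green/blue/nir: (H, W) reflectance-like bands scaled to [0, 1].
    """
    eps = 1e-6
    c3 = np.arctan(blue / (np.maximum(red, green) + eps))   # C3 chromaticity angle
    return (c3 > c3_thresh) & (nir < nir_thresh)            # bluish and dark in NIR
```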
HERCULES/MSI: a multispectral imager with geolocation for STS-70
NASA Astrophysics Data System (ADS)
Simi, Christopher G.; Kindsfather, Randy; Pickard, Henry; Howard, William, III; Norton, Mark C.; Dixon, Roberta
1995-11-01
A multispectral intensified CCD imager combined with a ring laser gyroscope based inertial measurement unit was flown on the Space Shuttle Discovery from July 13-22, 1995 (Space Transport System Flight No. 70, STS-70). The camera includes a six position filter wheel, a third generation image intensifier, and a CCD camera. The camera is integrated with a laser gyroscope system that determines the ground position of the imagery to an accuracy of better than three nautical miles. The camera has two modes of operation: a panchromatic mode for high-magnification imaging [ground sample distance (GSD) of 4 m], or a multispectral mode consisting of six different user-selectable spectral ranges at reduced magnification (12 m GSD). This paper discusses the system hardware and technical trade-offs involved with camera optimization, and presents imagery observed during the shuttle mission.
Comparing Individual Tree Segmentation Based on High Resolution Multispectral Image and Lidar Data
NASA Astrophysics Data System (ADS)
Xiao, P.; Kelly, M.; Guo, Q.
2014-12-01
This study compares the use of high-resolution multispectral WorldView images and high-density Lidar data for individual tree segmentation. The application focuses on coniferous and deciduous forests in the Sierra Nevada Mountains. The tree objects are obtained in two ways: a hybrid region-merging segmentation method with multispectral images, and top-down and bottom-up region-growing methods with Lidar data. The hybrid region-merging method is used to segment individual trees from multispectral images. It integrates the advantages of global-oriented and local-oriented region-merging strategies into a unified framework. The globally most-similar pair of regions is used to determine the starting point of a growing region. The merging iterations are constrained within the local vicinity, so the segmentation is accelerated and can reflect the local context. The top-down region-growing method is adopted in coniferous forest to delineate individual trees from Lidar data. It exploits the spacing between the tops of trees to identify and group points into a single tree based on simple rules of proximity and likely tree shape. The bottom-up region-growing method, based on the intensity and 3D structure of the Lidar data, is applied in deciduous forest. It segments tree trunks based on the intensity and topological relationships of the points, and then allocates the remaining points to exact tree crowns according to distance. The accuracies of each method are evaluated with field survey data in several test sites, covering dense and sparse canopy. Three types of segmentation results are produced: a true positive represents a correctly segmented individual tree, a false negative represents a tree that is not detected and is assigned to a nearby tree, and a false positive represents a point or pixel cluster that is segmented as a tree that does not in fact exist. They respectively represent correct, under-, and over-segmentation. Three indices are compared for segmenting individual trees from the multispectral image and Lidar data: recall, precision, and F-score. This work explores the tradeoff between expensive Lidar data and inexpensive multispectral imagery. The conclusions will guide optimal data selection in areas with different canopy densities for individual tree segmentation and contribute to the field of forest remote sensing.
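The three indices mentioned at the end of the entry above follow directly from the counts of correctly segmented trees (TP), missed trees (FN), and spurious trees (FP); a small helper with an illustrative example:

```python
def segmentation_scores(tp, fn, fp):
    """Recall, precision, and F-score from individual-tree segmentation counts."""
    recall = tp / (tp + fn) if (tp + fn) else 0.0       # fraction of real trees found
    precision = tp / (tp + fp) if (tp + fp) else 0.0    # fraction of detections that are real
    f_score = (2 * precision * recall / (precision + recall)
               if (precision + recall) else 0.0)
    return recall, precision, f_score

# Example: 87 correct trees, 9 missed, 6 spurious -> roughly (0.91, 0.94, 0.92)
print(segmentation_scores(87, 9, 6))
```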
NASA Astrophysics Data System (ADS)
Deán-Ben, X. L.; Bay, Erwin; Razansky, Daniel
2015-03-01
Three-dimensional hand-held optoacoustic imaging comes with important advantages that prompt the clinical translation of this modality, with applications envisioned in cardiovascular and peripheral vascular disease, disorders of the lymphatic system, breast cancer, arthritis, and inflammation. Of particular importance is the multispectral acquisition of data by exciting the tissue at several wavelengths, which enables functional imaging applications. However, multispectral imaging of entire three-dimensional regions is significantly challenged by motion artifacts in concurrent acquisitions at different wavelengths. A method based on the acquisition of volumetric datasets with a microsecond-level delay between pulses at different wavelengths is described in this work. This method can avoid image artifacts imposed by a scanning velocity greater than 2 m/s; thus, it not only facilitates imaging influenced by respiratory, cardiac, or other intrinsic fast movements in living tissues, but can also achieve artifact-free imaging in the presence of more significant motion, e.g., abrupt displacements during handheld-mode operation in a clinical environment.
Liu, Changhong; Liu, Wei; Lu, Xuzhong; Ma, Fei; Chen, Wei; Yang, Jianbo; Zheng, Lei
2014-01-01
Multispectral imaging with 19 wavelengths in the range of 405–970 nm has been evaluated for nondestructive determination of firmness, total soluble solids (TSS) content, and ripeness stage in strawberry fruit. Several analysis approaches, including partial least squares (PLS), support vector machine (SVM), and back propagation neural network (BPNN), were applied to develop models for predicting the firmness and TSS of intact strawberry fruit. Compared with PLS and SVM, BPNN considerably improved the performance of multispectral imaging for predicting firmness and total soluble solids content, with correlation coefficients (r) of 0.94 and 0.83, SEP of 0.375 and 0.573, and bias of 0.035 and 0.056, respectively. Subsequently, the ability of multispectral imaging technology to classify fruit by ripeness stage was tested using SVM and principal component analysis-back propagation neural network (PCA-BPNN) models; a classification accuracy of 100% was achieved using the SVM model. Moreover, the results of all these models demonstrated that the VIS parts of the spectra were the main contributors to the determination of firmness, the estimation of TSS content, and the classification of ripeness stage in strawberry fruit. These results suggest that multispectral imaging, together with a suitable analysis model, is a promising technology for rapid estimation of quality attributes and classification of ripeness stage in strawberry fruit. PMID:24505317
Davis, Philip A.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This DS consists of the locally enhanced ALOS image mosaics for each of the 24 mineral project areas (referred to herein as areas of interest), whose locality names, locations, and main mineral occurrences are shown on the index map of Afghanistan (fig. 1). ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency, but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
PRISM image orthorectification for one-half of the target areas was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using SPARKLE logic, which is described in Davis (2006). Each of the four-band images within each resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a specified radius that was usually 500 m. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (either 41 or 42) and the WGS84 datum. Most final image mosaics were subdivided into overlapping tiles or quadrants because of the large size of the target areas. The image tiles (or quadrants) for each area of interest are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Approximately one-half of the study areas have at least one subarea designated for detailed field investigations; the subareas were extracted from the area's image mosaic and are provided as separate embedded geotiff images.
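The overlap-based radiometric adjustment described above can be pictured with the following sketch, which is not the USGS implementation: a per-band gain and offset are fitted by linear least squares over the pixels two scenes share and then applied to the whole scene being adjusted. The array shapes and values are illustrative.

    # Illustrative sketch: adjust an overlapping scene to a "standard" scene by
    # fitting standard ~ gain*new + offset over the shared pixels of one band.
    import numpy as np

    def adjust_band_to_standard(standard_overlap, new_overlap, new_band):
        """Fit a gain/offset on the overlap region, then apply it to the whole band."""
        A = np.column_stack([new_overlap.ravel(), np.ones(new_overlap.size)])
        gain, offset = np.linalg.lstsq(A, standard_overlap.ravel(), rcond=None)[0]
        return gain * new_band + offset

    # toy example: a new scene that is 10% darker with a small offset
    rng = np.random.default_rng(1)
    standard = rng.random((50, 50))
    new = 0.9 * standard + 0.02
    adjusted = adjust_band_to_standard(standard, new, new)
    print(np.allclose(adjusted, standard, atol=1e-6))   # True: radiometry matches the standard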
Multi-spectral confocal microendoscope for in-vivo imaging
NASA Astrophysics Data System (ADS)
Rouse, Andrew Robert
The concept of in-vivo multi-spectral confocal microscopy is introduced. A slit-scanning multi-spectral confocal microendoscope (MCME) was built to demonstrate the technique. The MCME employs a flexible fiber-optic catheter coupled to a custom-built slit-scan confocal microscope fitted with a custom-built imaging spectrometer. The catheter consists of a fiber-optic imaging bundle linked to a miniature objective and focus assembly. The design and performance of the miniature objective and focus assembly are discussed. The 3-mm-diameter catheter may be used on its own or routed through the instrument channel of a commercial endoscope. The confocal nature of the system provides optical sectioning with 3-μm lateral resolution and 30-μm axial resolution. The prism-based multi-spectral detection assembly is typically configured to collect 30 spectral samples over the visible chromatic range. The spectral sampling rate varies from 4 nm/pixel at 490 nm to 8 nm/pixel at 660 nm, and the minimum resolvable wavelength difference varies from 7 nm to 18 nm over the same spectral range. Each of these characteristics is primarily dictated by the dispersive power of the prism. The MCME is designed to examine cellular structures during optical biopsy and to exploit the diagnostic information contained within the spectral domain. The primary applications for the system include diagnosis of disease in the gastro-intestinal tract and female reproductive system. Recent data from the grayscale imaging mode are presented. Preliminary multi-spectral results from phantoms, cell cultures, and excised human tissue are presented to demonstrate the potential of in-vivo multi-spectral imaging.
The Multispectral Imaging Science Working Group. Volume 2: Working group reports
NASA Technical Reports Server (NTRS)
Cox, S. C. (Editor)
1982-01-01
Summaries of the various multispectral imaging science working groups are presented. Current knowledge of the spectral and spatial characteristics of the Earth's surface is outlined and the present and future capabilities of multispectral imaging systems are discussed.
Polarimetric Multispectral Imaging Technology
NASA Technical Reports Server (NTRS)
Cheng, L.-J.; Chao, T.-H.; Dowdy, M.; Mahoney, C.; Reyes, G.
1993-01-01
The Jet Propulsion Laboratory is developing a remote sensing technology on which a new generation of compact, lightweight, high-resolution, low-power, reliable, versatile, programmable scientific polarimetric multispectral imaging instruments can be built to meet the challenge of future planetary exploration missions. The instrument is based on the fast programmable acousto-optic tunable filter (AOTF) of tellurium dioxide (TeO2) that operates in the wavelength range of 0.4-5 microns. Basically, the AOTF multispectral imaging instrument measures incoming light intensity as a function of spatial coordinates, wavelength, and polarization. Its operation can be in either sequential, random access, or multiwavelength mode as required. This provides observation flexibility, allowing real-time alternation among desired observations, collecting needed data only, minimizing data transmission, and permitting implementation of new experiments. These will result in optimization of the mission performance with minimal resources. Recently we completed a polarimetric multispectral imaging prototype instrument and performed outdoor field experiments for evaluating application potentials of the technology. We also investigated potential improvements on AOTF performance to strengthen technology readiness for applications. This paper will give a status report on the technology and a prospect toward future planetary exploration.
Adaptive illumination source for multispectral vision system applied to material discrimination
NASA Astrophysics Data System (ADS)
Conde, Olga M.; Cobo, Adolfo; Cantero, Paulino; Conde, David; Mirapeix, Jesús; Cubillas, Ana M.; López-Higuera, José M.
2008-04-01
A multispectral system based on a monochrome camera and an adaptive illumination source is presented in this paper. Its preliminary application is focused on material discrimination for the food and beverage industries, where monochrome, color, and infrared imaging have been successfully applied for this task. This work proposes a different approach, in which the relevant wavelengths for the required discrimination task are selected in advance using a Sequential Forward Floating Selection (SFFS) algorithm. A light source based on light-emitting diodes (LEDs) at these wavelengths is then used to sequentially illuminate the material under analysis, and the resulting images are captured by a CCD camera with spectral response over the entire range of the selected wavelengths. Finally, the resulting multispectral planes are processed using a Spectral Angle Mapping (SAM) algorithm, whose output is the desired material classification. Among other advantages, this approach of controlled and specific illumination produces multispectral imaging with a simple monochrome camera and cold illumination restricted to specific relevant wavelengths, which is desirable for the food and beverage industry. The proposed system has been tested successfully for the automatic detection of foreign objects in the tobacco-processing industry.
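A minimal sketch of the Spectral Angle Mapping step named above (not the authors' code): each pixel spectrum is assigned to the reference material whose spectrum subtends the smallest angle with it. The reference spectra, band count, and cube size are invented for the example.

    # Spectral Angle Mapping (SAM) classification sketch on a synthetic cube.
    import numpy as np

    def spectral_angle(pixels, reference):
        """Angle (radians) between each pixel spectrum and one reference spectrum."""
        dot = pixels @ reference
        norms = np.linalg.norm(pixels, axis=1) * np.linalg.norm(reference)
        return np.arccos(np.clip(dot / norms, -1.0, 1.0))

    def sam_classify(cube, references):
        """cube: (H, W, bands); references: (n_classes, bands). Returns a label map."""
        pixels = cube.reshape(-1, cube.shape[-1])
        angles = np.stack([spectral_angle(pixels, r) for r in references], axis=1)
        return angles.argmin(axis=1).reshape(cube.shape[:2])

    rng = np.random.default_rng(0)
    refs = rng.random((3, 6))                       # 3 hypothetical materials, 6 LED bands
    cube = refs[rng.integers(0, 3, (32, 32))] + rng.normal(0, 0.01, (32, 32, 6))
    print(sam_classify(cube, refs).shape)           # (32, 32) class-label map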
NASA Astrophysics Data System (ADS)
Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz
2017-04-01
This paper proposes three multisharpening approaches to enhance the spatial resolution of urban hyperspectral remote sensing images. These approaches, related to linear-quadratic spectral unmixing techniques, use a linear-quadratic nonnegative matrix factorization (NMF) multiplicative algorithm. These methods begin by unmixing the observable high-spectral/low-spatial resolution hyperspectral and high-spatial/low-spectral resolution multispectral images. The obtained high-spectral/high-spatial resolution features are then recombined, according to the linear-quadratic mixing model, to obtain an unobservable multisharpened high-spectral/high-spatial resolution hyperspectral image. In the first designed approach, the hyperspectral and multispectral variables are independently optimized once they have been coherently initialized. These variables are alternately updated in the second designed approach. In the third approach, the considered hyperspectral and multispectral variables are jointly updated. Experiments using synthetic and real data are conducted to assess the efficiency, in the spatial and spectral domains, of the designed approaches and of linear NMF-based approaches from the literature. Experimental results show that the designed methods globally yield very satisfactory spectral and spatial fidelities for the multisharpened hyperspectral data. They also show that these methods significantly outperform the linear NMF-based approaches from the literature.
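For orientation, the sketch below implements the basic building block named above, multiplicative-update NMF for the purely linear mixing case; the paper's linear-quadratic extension and its initialization and update schemes are not reproduced, and the matrix sizes are illustrative.

    # Linear NMF with multiplicative updates: X ~ A @ S, A (abundances) and
    # S (endmember spectra) kept nonnegative.
    import numpy as np

    def nmf_multiplicative(X, n_endmembers, n_iter=200, eps=1e-9):
        rng = np.random.default_rng(0)
        A = rng.random((X.shape[0], n_endmembers))
        S = rng.random((n_endmembers, X.shape[1]))
        for _ in range(n_iter):
            A *= (X @ S.T) / (A @ S @ S.T + eps)     # update abundances
            S *= (A.T @ X) / (A.T @ A @ S + eps)     # update endmember spectra
        return A, S

    X = np.random.default_rng(1).random((100, 50))   # 100 pixels x 50 bands (toy data)
    A, S = nmf_multiplicative(X, n_endmembers=4)
    print(np.linalg.norm(X - A @ S) / np.linalg.norm(X))   # relative reconstruction error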
Multispectral computational ghost imaging with multiplexed illumination
NASA Astrophysics Data System (ADS)
Huang, Jian; Shi, Dongfeng
2017-07-01
Computational ghost imaging has attracted wide attention from researchers in many fields over the last two decades. Multispectral imaging as one application of computational ghost imaging possesses spatial and spectral resolving abilities, and is very useful for surveying scenes and extracting detailed information. Existing multispectral imagers mostly utilize narrow band filters or dispersive optical devices to separate light of different wavelengths, and then use multiple bucket detectors or an array detector to record them separately. Here, we propose a novel multispectral ghost imaging method that uses one single bucket detector with multiplexed illumination to produce a colored image. The multiplexed illumination patterns are produced by three binary encoded matrices (corresponding to the red, green and blue colored information, respectively) and random patterns. The results of the simulation and experiment have verified that our method can be effective in recovering the colored object. Multispectral images are produced simultaneously by one single-pixel detector, which significantly reduces the amount of data acquisition.
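The single-pixel reconstruction idea underlying the method can be sketched as follows for one spectral channel; the paper's multiplexing of red, green, and blue information into each pattern via binary encoding matrices is omitted, and the object, pattern count, and image size are illustrative.

    # Correlation-based computational ghost imaging for one channel:
    # recon = <pattern * signal> - <pattern><signal>.
    import numpy as np

    rng = np.random.default_rng(0)
    H = W = 16
    obj = np.zeros((H, W)); obj[4:12, 6:10] = 1.0    # toy object (one color channel)

    n_patterns = 4000
    patterns = rng.random((n_patterns, H, W))        # random illumination patterns
    signals = (patterns * obj).sum(axis=(1, 2))      # bucket-detector measurements

    recon = (patterns * signals[:, None, None]).mean(axis=0) \
            - patterns.mean(axis=0) * signals.mean()
    print(np.corrcoef(recon.ravel(), obj.ravel())[0, 1])   # similarity to the object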
Study on multispectral imaging detection and recognition
NASA Astrophysics Data System (ADS)
Jun, Wang; Na, Ding; Gao, Jiaobo; Yu, Hu; Jun, Wu; Li, Junna; Zheng, Yawei; Fei, Gao; Sun, Kefeng
2009-07-01
Multispectral imaging detection exploits the spatial and spectral distribution of target radiation, and the relationship between spectra and images, to detect targets and perform remote sensing measurements. Its strengths are multiple channels, narrow bandwidths, a large amount of information, and high accuracy, which improve the ability to detect targets in cluttered, camouflaged, concealed, and decoyed environments. At present, spectral imaging technology in both the multispectral and hyperspectral ranges is developing rapidly. Multispectral imaging equipment on unmanned aerial vehicles can be used for mine detection and for intelligence, surveillance, and reconnaissance. Imaging spectrometers operating in the MWIR and LWIR bands have already been applied in the remote sensing and military fields in advanced countries. This paper presents a multispectral imaging technique that enhances the reflectance, scattering, and radiation contrast of artificial targets against natural backgrounds, so that targets in complex backgrounds and camouflaged or stealth targets can be effectively identified. Experimental results and spectral imaging data are presented.
Multispectral Imaging for Determination of Astaxanthin Concentration in Salmonids
Dissing, Bjørn S.; Nielsen, Michael E.; Ersbøll, Bjarne K.; Frosch, Stina
2011-01-01
Multispectral imaging has been evaluated for characterizing the concentration of a specific carotenoid pigment, astaxanthin. Fifty-nine rainbow trout (Oncorhynchus mykiss) fillets were imaged using a rapid multispectral imaging device for quantitative analysis. The multispectral imaging device captures reflection properties in 19 distinct wavelength bands, prior to determination of the true concentration of astaxanthin. The samples ranged from 0.20 to 4.34 μg astaxanthin per g fish. A PLSR model was calibrated to predict astaxanthin concentration from novel images and showed good results, with an RMSEP of 0.27. For comparison, a similar model was built for normal color images, which yielded an RMSEP of 0.45. The acquisition speed of the multispectral imaging system and the accuracy of the PLSR model obtained suggest this method as a promising technique for rapid in-line estimation of astaxanthin concentration in rainbow trout fillets. PMID:21573000
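A hedged sketch of the comparison reported above, using invented data: cross-validated RMSEP of a PLSR model on 19-band spectra versus the same model restricted to three bands standing in for a color image. The band indices, sample values, and component count are assumptions, not the authors' settings.

    # Compare RMSEP of PLSR on 19-band features vs. a 3-band (RGB-like) subset.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(2)
    n = 59                                            # one spectrum per fillet, as in the study
    X19 = rng.random((n, 19))                         # mean reflectance in 19 bands
    y = 0.2 + 4.0 * X19[:, 7] + rng.normal(0, 0.1, n) # surrogate pigment concentration
    X3 = X19[:, [2, 8, 15]]                           # crude stand-in for three color bands

    def rmsep(X, y):
        pred = cross_val_predict(PLSRegression(n_components=3), X, y, cv=5).ravel()
        return np.sqrt(np.mean((pred - y) ** 2))

    print(f"19-band RMSEP: {rmsep(X19, y):.3f}   3-band RMSEP: {rmsep(X3, y):.3f}")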
Geometric calibration of lens and filter distortions for multispectral filter-wheel cameras.
Brauers, Johannes; Aach, Til
2011-02-01
High-fidelity color image acquisition with a multispectral camera utilizes optical filters to separate the visible electromagnetic spectrum into several passbands. This is often realized with a computer-controlled filter wheel, where each position is equipped with an optical bandpass filter. For each filter wheel position, a grayscale image is acquired and the passbands are finally combined to a multispectral image. However, the different optical properties and non-coplanar alignment of the filters cause image aberrations since the optical path is slightly different for each filter wheel position. As in a normal camera system, the lens causes additional wavelength-dependent image distortions called chromatic aberrations. When transforming the multispectral image with these aberrations into an RGB image, color fringes appear, and the image exhibits a pincushion or barrel distortion. In this paper, we address both the distortions caused by the lens and by the filters. Based on a physical model of the bandpass filters, we show that the aberrations caused by the filters can be modeled by displaced image planes. The lens distortions are modeled by an extended pinhole camera model, which results in a remaining mean calibration error of only 0.07 pixels. Using an absolute calibration target, we then geometrically calibrate each passband and compensate for both lens and filter distortions simultaneously. We show that both types of aberrations can be compensated and present detailed results on the remaining calibration errors.
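As a simple illustration of the kind of wavelength-dependent geometric correction discussed above (not the authors' calibration pipeline), the sketch below applies a Brown-style radial distortion model with a separate coefficient pair per filter-wheel passband; all coefficients are invented.

    # Per-passband radial (barrel/pincushion) distortion on normalized coordinates.
    import numpy as np

    def radial_distort(xy, k1, k2, center=(0.0, 0.0)):
        """Apply r^2 / r^4 radial distortion terms to (N, 2) normalized coordinates."""
        d = xy - np.asarray(center)
        r2 = (d ** 2).sum(axis=1, keepdims=True)
        return np.asarray(center) + d * (1.0 + k1 * r2 + k2 * r2 ** 2)

    # one (k1, k2) pair per passband: chromatic aberration makes the distortion
    # wavelength dependent, hence the per-band calibration
    band_coeffs = {"450nm": (-0.12, 0.010), "550nm": (-0.10, 0.008), "650nm": (-0.08, 0.006)}

    grid = np.stack(np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5)), -1).reshape(-1, 2)
    for band, (k1, k2) in band_coeffs.items():
        warped = radial_distort(grid, k1, k2)
        print(band, "max shift:", np.abs(warped - grid).max().round(4))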
Airborne multispectral identification of individual cotton plants using consumer-grade cameras
USDA-ARS?s Scientific Manuscript database
Although multispectral remote sensing using consumer-grade cameras has successfully identified fields of small cotton plants, improvements to detection sensitivity are needed to identify individual or small clusters of plants. The imaging sensors of consumer-grade cameras are based on a Bayer patter...
Nondestructive prediction of pork freshness parameters using multispectral scattering images
NASA Astrophysics Data System (ADS)
Tang, Xiuying; Li, Cuiling; Peng, Yankun; Chao, Kuanglin; Wang, Mingwu
2012-05-01
Optical technology is an important and emerging approach for nondestructive and rapid detection of pork freshness. This paper studied the possibility of using multispectral imaging and scattering characteristics to predict the freshness parameters of pork meat. The pork freshness parameters selected for prediction included total volatile basic nitrogen (TVB-N), color parameters (L*, a*, b*), and pH value. Multispectral scattering images were obtained from the pork sample surface with an in-house multispectral imaging system; they were acquired at narrow wavebands centered at 517, 550, 560, 580, 600, 760, 810, and 910 nm. In order to extract scattering characteristics from multispectral images at multiple wavelengths, a Lorentzian distribution (LD) function with four parameters (a: scattering asymptotic value; b: scattering peak; c: scattering width; d: scattering slope) was used to fit the scattering curves at the selected wavelengths. The results show that the multispectral imaging technique combined with scattering characteristics is promising for predicting the freshness parameters of pork meat.
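The Lorentzian-profile fitting step can be sketched as follows, assuming the four-parameter form R(x) = a + b / (1 + (x/c)^d), consistent with the parameter roles listed above; the scattering curve here is synthetic and the fit uses scipy.optimize.curve_fit rather than the authors' software.

    # Fit a four-parameter Lorentzian profile to a synthetic radial scattering curve.
    import numpy as np
    from scipy.optimize import curve_fit

    def lorentzian(x, a, b, c, d):
        return a + b / (1.0 + (x / c) ** d)

    x = np.linspace(0.1, 20.0, 200)                       # distance from the incident point (mm)
    true = (0.05, 1.0, 4.0, 2.5)                          # asymptote, peak, width, slope
    y = lorentzian(x, *true) + np.random.default_rng(0).normal(0, 0.01, x.size)

    params, _ = curve_fit(lorentzian, x, y,
                          p0=(0.1, 0.8, 2.0, 2.0),
                          bounds=(1e-6, [2.0, 10.0, 50.0, 10.0]))
    print("fitted a, b, c, d:", np.round(params, 3))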
Moving beyond color: the case for multispectral imaging in brightfield pathology.
Cukierski, William J; Qi, Xin; Foran, David J
2009-01-01
A multispectral camera is capable of imaging a histologic slide at narrow bandwidths over the range of the visible spectrum. While several uses for multispectral imaging (MSI) have been demonstrated in pathology [1, 2], there is no unified consensus over when and how MSI might benefit automated analysis [3, 4]. In this work, we use a linear-algebra framework to investigate the relationship between the spectral image and its standard-image counterpart. The multispectral "cube" is treated as an extension of a traditional image in a high-dimensional color space. The concept of metamers is introduced and used to derive regions of the visible spectrum where MSI may provide an advantage. Furthermore, histological stains which are amenable to analysis by MSI are reported. We show the Commission internationale de l'éclairage (CIE) 1931 transformation from spectrum to color is non-neighborhood preserving. Empirical results are demonstrated on multispectral images of peripheral blood smears.
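The metamer argument above can be made concrete with a toy example: construct a spectral perturbation that is invisible to a set of three color-matching functions, so two clearly different spectra map to (nearly) the same tristimulus values. The Gaussian curves below are crude stand-ins for the CIE 1931 functions, not the tabulated data.

    # Build a metamer by projecting a perturbation into the null space of the
    # pseudo color-matching functions.
    import numpy as np

    wl = np.linspace(400, 700, 61)
    def gauss(mu, sigma): return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)
    cmf = np.stack([gauss(600, 40), gauss(550, 40), gauss(450, 30)])   # pseudo x, y, z bar

    base = 0.5 + 0.3 * np.sin(wl / 40.0)                  # an arbitrary smooth spectrum
    perturb = np.random.default_rng(0).normal(0, 1, wl.size)
    coef, *_ = np.linalg.lstsq(cmf.T, perturb, rcond=None)
    perturb = perturb - cmf.T @ coef                      # now invisible to the pseudo-CMFs
    metamer = base + 0.1 * perturb / np.abs(perturb).max()

    print("spectra differ:", np.abs(base - metamer).max() > 0.05)
    print("tristimulus difference:", np.abs(cmf @ (base - metamer)).max())  # ~0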
Davis, Philip A.; Cagney, Laura E.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Balkhab mineral district, which has copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). 
Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Balkhab) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the Balkhab area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Balkhab study area, one subarea was designated for detailed field investigations (that is, the Balkhab Prospect subarea); this subarea was extracted from the area's image mosaic and is provided as separate embedded geotiff images.
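A simplified picture of the local-area stretch described above (not the algorithm of Davis, 2007): each pixel is rescaled by the minimum and maximum of all pixels inside a moving window, which stands in for the fixed-radius neighborhood. The window size and data are illustrative.

    # Local-area contrast stretch of a single band with a moving min/max window.
    import numpy as np
    from scipy.ndimage import minimum_filter, maximum_filter

    def local_stretch(band, window=31, eps=1e-6):
        lo = minimum_filter(band, size=window)
        hi = maximum_filter(band, size=window)
        return np.clip((band - lo) / (hi - lo + eps), 0.0, 1.0)

    rng = np.random.default_rng(0)
    band = rng.random((128, 128)) * np.linspace(0.2, 1.0, 128)   # uneven illumination
    stretched = local_stretch(band)
    print(stretched.min(), stretched.max())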
Davis, Philip A.; Cagney, Laura E.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Katawas mineral district, which has gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©AXA, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). 
Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Katawas) and the WGS84 datum. The final image mosaics are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Katawas study area, one subarea was designated for detailed field investigation (that is, the Gold subarea); this subarea was extracted from the area's image mosaic and is provided as a separate embedded geotiff image.
Highly Portable Airborne Multispectral Imaging System
NASA Technical Reports Server (NTRS)
Lehnemann, Robert; Mcnamee, Todd
2001-01-01
A portable instrumentation system is described that includes an airborne subsystem and a ground-based subsystem. It can acquire multispectral image data over swaths of terrain ranging in width from about 1 to 1.5 km. The system was developed especially for use in coastal environments and is well suited for performing remote sensing and general environmental monitoring. It includes a small, unpiloted, remotely controlled airplane that carries a forward-looking camera for navigation, three downward-looking monochrome video cameras for imaging terrain in three spectral bands, a video transmitter, and a Global Positioning System (GPS) receiver.
Davis, Philip A.; Cagney, Laura E.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the North Takhar mineral district, which has placer gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for North Takhar) and the WGS84 datum. The final image mosaics were subdivided into nine overlapping tiles or quadrants because of the large size of the target area. The nine image tiles (or quadrants) for the North Takhar area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.; Cagney, Laura E.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Baghlan mineral district, which has industrial clay and gypsum deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2006, 2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Baghlan) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the Baghlan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Uruzgan mineral district, which has tin and tungsten deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008, 2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Uruzgan) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Uruzgan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.; Cagney, Laura E.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the South Helmand mineral district, which has travertine deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (41 for South Helmand) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the South Helmand area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.; Cagney, Laura E.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Bakhud mineral district, which has industrial fluorite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (41 for Bakhud) and the WGS84 datum. The final image mosaics were subdivided into nine overlapping tiles or quadrants because of the large size of the target area. The nine image tiles (or quadrants) for the Bakhud area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
NASA Astrophysics Data System (ADS)
Klaessens, John H. G. M.; Nelisse, Martin; Verdaasdonk, Rudolf M.; Noordmans, Herke Jan
2013-03-01
During clinical interventions, objective and quantitative information on tissue perfusion, oxygenation, or temperature can be useful for the surgical strategy. Local (point) measurements give limited information and affected areas can easily be missed; imaging of large areas is therefore required. In this study, an LED-based multispectral imaging system (MSI, 17 different wavelengths, 370-880 nm) and a thermal camera were applied during clinical interventions: tissue flap transplantations (ENT), local anesthetic block, and open brain surgery (epileptic seizure). The images covered an area of 20x20 cm. Measurements in an operating room turned out to be more complicated than laboratory experiments due to light fluctuations, movement of the patient, and a limited angle of view. By constantly measuring the background light and using a white reference, light fluctuations and movement were corrected for. Oxygenation concentration images could be calculated and combined with the thermal images. The effectiveness of local anesthesia of a hand could be predicted at an early stage using the thermal camera, and the reperfusion of a transplanted skin flap could be imaged. During brain surgery, a temporary hyper-perfused area was observed that was probably related to an epileptic seizure. An LED-based multispectral imaging system combined with thermal imaging provides complementary information on perfusion and oxygenation changes, and both are promising techniques for real-time diagnostics during clinical interventions.
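A minimal sketch of the kind of processing such an LED-based MSI system implies, assuming flat-field correction against the white reference followed by a per-pixel least-squares fit of a modified Beer-Lambert model; the array layout, function names, and the placeholder extinction coefficients are illustrative assumptions, not the authors' pipeline:

import numpy as np

# Wavelengths (nm) and extinction coefficients for [HbO2, Hb].
# Placeholder values for illustration -- substitute tabulated data in practice.
wavelengths = np.array([660, 735, 810, 880])
eps = np.array([[0.32, 3.23],
                [1.10, 1.40],
                [0.86, 0.76],
                [1.21, 0.80]])

def relative_hb_maps(raw, white, dark):
    # raw, white, dark: arrays of shape (n_wavelengths, H, W).
    # Returns relative HbO2 and Hb concentration maps (arbitrary units).
    refl = (raw - dark) / np.clip(white - dark, 1e-6, None)   # flat-field correction
    atten = -np.log(np.clip(refl, 1e-6, None))                # modified Beer-Lambert attenuation
    A = atten.reshape(len(wavelengths), -1)                   # (n_wavelengths, H*W)
    conc, *_ = np.linalg.lstsq(eps, A, rcond=None)            # least-squares fit per pixel
    return conc.reshape(2, *raw.shape[1:])                    # (2, H, W): [HbO2, Hb]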
Eliminate background interference from latent fingerprints using ultraviolet multispectral imaging
NASA Astrophysics Data System (ADS)
Huang, Wei; Xu, Xiaojing; Wang, Guiqiang
2014-02-01
Fingerprints are among the most important evidence at a crime scene, and the development of latent fingerprints is one of the most active research areas in forensic science. Recently, multispectral imaging, which has shown great capability in fingerprint development, questioned-document examination, and trace evidence examination, has been used for detecting material evidence. This paper studies how to eliminate background interference from latent fingerprints on non-porous and porous surfaces using rotating-filter-wheel ultraviolet multispectral imaging. The results show that background interference can be removed cleanly from latent fingerprints by using multispectral imaging in the ultraviolet band.
Davis, Philip A.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the North Bamyan mineral district, which has copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for North Bamyan) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the North Bamyan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
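The local-area stretch described above (Davis, 2007) could be approximated, as a sketch only, by a min-max stretch within a circular neighborhood; the radius-to-pixel conversion and the min-max (rather than histogram-percentile) formulation are simplifying assumptions:

import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def local_area_stretch(band, radius_pixels=50):
    # Stretch each pixel against the min/max of all pixels within a circular
    # neighborhood (e.g., a 500-m radius at 10-m pixels gives 50 pixels).
    y, x = np.ogrid[-radius_pixels:radius_pixels + 1, -radius_pixels:radius_pixels + 1]
    footprint = x**2 + y**2 <= radius_pixels**2
    lo = minimum_filter(band, footprint=footprint)
    hi = maximum_filter(band, footprint=footprint)
    return (band - lo) / np.clip(hi - lo, 1e-6, None)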
Davis, Philip A.; Cagney, Laura E.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Takhar mineral district, which has industrial evaporite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Takhar) and the WGS84 datum. The final image mosaics for the Takhar area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Parwan mineral district, which has gold and copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2006, 2007), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). 
Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Parwan) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the Parwan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.
2014-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ghazni2 mineral district, which has spectral reflectance anomalies indicative of gold, mercury, and sulfur deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA, 2008, 2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Ghazni2) and the WGS84 datum. The images for the Ghazni2 area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ahankashan mineral district, which has copper and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008, 2009, 2010),but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Ahankashan) and the WGS84 datum. The final image mosaics were subdivided into five overlapping tiles or quadrants because of the large size of the target area. The five image tiles (or quadrants) for the Ahankashan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.
2014-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ghazni1 mineral district, which has spectral reflectance anomalies indicative of clay, aluminum, gold, silver, mercury, and sulfur deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA, 2008, 2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Ghazni1) and the WGS84 datum. The images for the Ghazni1 area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Enhancing hyperspectral spatial resolution using multispectral image fusion: A wavelet approach
NASA Astrophysics Data System (ADS)
Jazaeri, Amin
High spectral and spatial resolution images have a significant impact in remote sensing applications. Because both spatial and spectral resolutions of spaceborne sensors are fixed by design and it is not possible to further increase the spatial or spectral resolution, techniques such as image fusion must be applied to achieve such goals. This dissertation introduces the concept of wavelet fusion between hyperspectral and multispectral sensors in order to enhance the spectral and spatial resolution of a hyperspectral image. To test the robustness of this concept, images from Hyperion (hyperspectral sensor) and Advanced Land Imager (multispectral sensor) were first co-registered and then fused using different wavelet algorithms. A regression-based fusion algorithm was also implemented for comparison purposes. The results show that the fused images using a combined bi-linear wavelet-regression algorithm have less error than other methods when compared to the ground truth. In addition, a combined regression-wavelet algorithm shows more immunity to misalignment of the pixels due to the lack of proper registration. The quantitative measures of average mean square error show that the performance of wavelet-based methods degrades when the spatial resolution of hyperspectral images becomes eight times less than its corresponding multispectral image. Regardless of what method of fusion is utilized, the main challenge in image fusion is image registration, which is also a very time intensive process. Because the combined regression wavelet technique is computationally expensive, a hybrid technique based on regression and wavelet methods was also implemented to decrease computational overhead. However, the gain in faster computation was offset by the introduction of more error in the outcome. The secondary objective of this dissertation is to examine the feasibility and sensor requirements for image fusion for future NASA missions in order to be able to perform onboard image fusion. In this process, the main challenge of image registration was resolved by registering the input images using transformation matrices of previously acquired data. The composite image resulted from the fusion process remarkably matched the ground truth, indicating the possibility of real time onboard fusion processing.
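A bare-bones illustration of wavelet-based detail substitution, assuming the hyperspectral and multispectral bands are already co-registered and resampled to the same grid; this is not the dissertation's combined regression-wavelet algorithm, and the PyWavelets wavelet choice is arbitrary:

import pywt

def wavelet_fuse(hs_band, ms_band, wavelet="bior1.3", level=2):
    # Keep the low-pass (approximation) coefficients of the hyperspectral band
    # and substitute the detail coefficients of the sharper multispectral band.
    hs_coeffs = pywt.wavedec2(hs_band, wavelet, level=level)
    ms_coeffs = pywt.wavedec2(ms_band, wavelet, level=level)
    fused = [hs_coeffs[0]] + list(ms_coeffs[1:])
    return pywt.waverec2(fused, wavelet)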
Lossless compression algorithm for multispectral imagers
NASA Astrophysics Data System (ADS)
Gladkova, Irina; Grossberg, Michael; Gottipati, Srikanth
2008-08-01
Multispectral imaging is becoming an increasingly important tool for monitoring the earth and its environment from spaceborne and airborne platforms. Multispectral imaging data consist of visible and IR measurements of a scene across space and spectrum. Growing data rates, resulting from faster scanning and finer spatial and spectral resolution, make compression an increasingly critical tool for reducing data volume for transmission and archiving. Research for NOAA NESDIS has been directed at determining, for the characteristics of satellite atmospheric Earth-science imager data, what lossless compression ratio can be obtained, as well as the types of mathematics and approaches that come closest to the entropy level of these data. Conventional lossless methods do not achieve the theoretical limits for lossless compression of imager data as estimated from the Shannon entropy. In a previous paper, the authors introduced a lossless compression algorithm developed for MODIS as a proxy for future NOAA-NESDIS satellite-based Earth-science multispectral imagers such as GOES-R. The algorithm captures spectral correlations using spectral prediction and spatial correlations with a linear transform encoder. In decompression, the algorithm uses a statistically computed look-up table to iteratively predict each channel from a channel decompressed in the previous iteration. In this paper we present a new approach that fundamentally differs from our prior work: instead of a single predictor for each pair of bands, we introduce a piecewise, spatially varying predictor, which significantly improves the compression results. Our new algorithm also optimizes the sequence of channels used for prediction. Our results are evaluated by comparison with a state-of-the-art wavelet-based image compression scheme, JPEG2000. We present results on the 14-channel subset of the MODIS imager, which serves as a proxy for the GOES-R imager, and also show results on NOAA AVHRR and SEVIRI data. The algorithm is designed to be adaptable to a wide range of multispectral imagers and should facilitate global distribution of data. This compression research is managed by Roger Heymann, PE, of OSD NOAA NESDIS Engineering, in collaboration with the NOAA NESDIS STAR Research Office through Mitch Goldberg, Tim Schmit, and Walter Wolf.
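To make the spectral-prediction idea concrete, here is a much-simplified sketch with a single global linear predictor per band pair (the paper's piecewise, spatially varying predictor and optimized channel ordering are not reproduced); the rounding convention keeps the scheme exactly invertible, so only the residual and the two coefficients need to be entropy-coded:

import numpy as np

def _prediction(reference, gain, offset):
    # Deterministic rounding so encoder and decoder agree exactly.
    return np.rint(gain * reference.astype(np.float64) + offset).astype(np.int32)

def predict_band(reference, target):
    # Fit target ~ gain*reference + offset over the whole image and
    # return the lossless prediction residual plus the coefficients.
    gain, offset = np.polyfit(reference.ravel(), target.ravel(), 1)
    residual = target.astype(np.int32) - _prediction(reference, gain, offset)
    return residual, (gain, offset)

def reconstruct_band(reference, residual, coeffs):
    # Exact inverse of predict_band.
    return _prediction(reference, *coeffs) + residual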
Fusion of MultiSpectral and Panchromatic Images Based on Morphological Operators.
Restaino, Rocco; Vivone, Gemine; Dalla Mura, Mauro; Chanussot, Jocelyn
2016-04-20
Nonlinear decomposition schemes constitute an alternative to classical approaches for addressing the problem of data fusion. In this paper we discuss the application of this methodology to a popular remote sensing application called pansharpening, which consists of fusing a low-resolution multispectral image with a high-resolution panchromatic image. We design a complete pansharpening scheme based on morphological half-gradient operators and demonstrate the suitability of this algorithm through comparison with state-of-the-art approaches. Four datasets acquired by the Pleiades, WorldView-2, Ikonos, and GeoEye-1 satellites are employed for the performance assessment, testifying to the effectiveness of the proposed approach in producing top-class images with a setting independent of the specific sensor.
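A simplified sketch of how morphological half-gradients of the panchromatic image might drive detail injection into an upsampled MS band; the structuring-element size, the injection gain, and the averaging of the two half-gradients are assumptions here, not the scheme proposed in the paper:

import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion, zoom

def half_gradients(pan, size=3):
    # Morphological half-gradients: gradient by dilation and gradient by erosion.
    pan = pan.astype(np.float64)
    g_plus = grey_dilation(pan, size=(size, size)) - pan
    g_minus = pan - grey_erosion(pan, size=(size, size))
    return g_plus, g_minus

def pansharpen_band(ms_band, pan, ratio=4, gain=1.0):
    # Upsample one MS band to the Pan grid (shapes must then match pan)
    # and add signed edge detail estimated from the two half-gradients.
    ms_up = zoom(ms_band.astype(np.float64), ratio, order=1)
    g_plus, g_minus = half_gradients(pan)
    detail = 0.5 * (g_minus - g_plus)
    return ms_up + gain * detail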
NASA Astrophysics Data System (ADS)
Bernat, Amir S.; Bar-Am, Kfir; Cataldo, Leigh; Bolton, Frank J.; Kahn, Bruce S.; Levitz, David
2018-02-01
Cervical cancer is a leading cause of death for women in low-resource settings. In order to better detect cervical dysplasia, a low-cost multi-spectral colposcope was developed utilizing low-cost LEDs and an area-scan camera. The device is capable of both traditional colposcopic imaging and multi-spectral image capture. Following initial bench testing, the device was deployed to a gynecology clinic where it was used to image patients in a colposcopy setting. Both traditional colposcopic images and spectral data from patients were uploaded to a cloud server for remote analysis. Multi-spectral imaging (approximately 30-second capture) took place before any clinical procedure; the standard of care was followed thereafter. If acetic acid was used in the standard of care, a post-acetowhitening colposcopic image was also captured. In analyzing the data, normal and abnormal regions were identified in the colposcopic images by an expert clinician. Spectral data were fit to a theoretical model based on diffusion theory, yielding information on scattering and absorption parameters. Data were grouped according to clinician labeling of the tissue, as well as any additional clinical test results available (Pap, HPV, biopsy). Altogether, N=20 patients were imaged in this study, 9 of them with abnormal findings. In comparing normal and abnormal regions of interest from patients, substantial differences were measured in blood content, while differences in oxygen saturation parameters were more subtle. These results suggest that optical measurements made using low-cost spectral imaging systems can distinguish between normal and pathological tissues.
Multispectral image restoration of historical documents based on LAAMs and mathematical morphology
NASA Astrophysics Data System (ADS)
Lechuga-S., Edwin; Valdiviezo-N., Juan C.; Urcid, Gonzalo
2014-09-01
This research introduces an automatic technique designed for the digital restoration of damaged parts of historical documents. For this purpose an imaging spectrometer is used to acquire a set of images in the wavelength interval from 400 to 1000 nm. Assuming the presence of linearly mixed spectral pixels in the multispectral image, our technique uses two lattice autoassociative memories to extract the set of pure pigments composing a given document. Through a spectral unmixing analysis, our method produces fractional abundance maps indicating the distribution of each pigment in the scene. These maps are then used to locate cracks and holes in the document under study. The restoration process is performed by the application of a region-filling algorithm based on morphological dilation, followed by a color interpolation to restore the original appearance of the filled areas. This procedure has been successfully applied to the analysis and restoration of three multispectral data sets: two corresponding to artificially superimposed scripts and one acquired from a Mexican pre-Hispanic codex, whose restoration results are presented.
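The abundance-map step (not the lattice-memory endmember extraction) could be sketched with per-pixel non-negative least squares; the array shapes and function names are assumptions:

import numpy as np
from scipy.optimize import nnls

def abundance_maps(cube, endmembers):
    # cube: (bands, H, W) multispectral image; endmembers: (bands, n_pigments).
    # Returns (n_pigments, H, W) non-negative fractional abundance maps.
    bands, h, w = cube.shape
    pixels = cube.reshape(bands, -1)
    maps = np.empty((endmembers.shape[1], h * w))
    for i in range(pixels.shape[1]):
        maps[:, i], _ = nnls(endmembers, pixels[:, i])
    return maps.reshape(-1, h, w)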
Edge-based correlation image registration for multispectral imaging
Nandy, Prabal [Albuquerque, NM
2009-11-17
Registration information for images of a common target obtained from a plurality of different spectral bands can be obtained by combining edge detection and phase correlation. The images are edge-filtered, and pairs of the edge-filtered images are then phase correlated to produce phase correlation images. The registration information can be determined based on these phase correlation images.
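A minimal sketch of the two ingredients named in the abstract, an edge filter followed by FFT phase correlation to recover an integer translation between bands; the Sobel gradient magnitude and the peak-unwrapping convention are assumptions rather than the patented method:

import numpy as np
from scipy.ndimage import sobel

def edge_image(band):
    # Gradient-magnitude edge image of one spectral band.
    band = band.astype(np.float64)
    return np.hypot(sobel(band, axis=0), sobel(band, axis=1))

def phase_correlation_shift(ref_band, moving_band):
    # Estimate the integer (dy, dx) shift between two edge-filtered bands.
    f1 = np.fft.fft2(edge_image(ref_band))
    f2 = np.fft.fft2(edge_image(moving_band))
    cross = f1 * np.conj(f2)
    corr = np.fft.ifft2(cross / np.maximum(np.abs(cross), 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak coordinates to signed shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))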
NASA Astrophysics Data System (ADS)
Gao, M.; Li, J.
2018-04-01
Geometric correction is an important preprocessing step in the application of GF4 PMS imagery. Geometric correction based on the manual selection of control points is time-consuming and laborious; the more common method, based on a reference image, is automatic image registration. This method involves several steps and parameters, so for the multi-spectral GF4 PMS sensor it is necessary to identify the best combination of them. This study mainly focuses on the following issues: the necessity of Rational Polynomial Coefficients (RPC) correction before automatic registration, the choice of base band for the automatic registration, and the configuration of the GF4 PMS spatial resolution.
Feasibility study and quality assessment of unmanned aircraft system-derived multispectral images
NASA Astrophysics Data System (ADS)
Chang, Kuo-Jen
2017-04-01
The purpose of this study is to explore the precision and applicability of UAS-derived multispectral images. In this study, a Micro-MCA6 multispectral camera was mounted on a quadcopter. The Micro-MCA6 captures synchronized images for each single band. By means of geotagged images and control points, an orthomosaic image of each single band was first generated at 14-cm resolution, and the single-band orthomosaics were merged into a complete six-band multispectral image. In order to improve the spatial resolution, the six-band image was fused with a 9-cm resolution image taken by an RGB camera. The quality of each single band was verified using control points and check points; the standard deviations of the errors are within 1 to 2 pixels for each band. The quality of the multispectral image was also compared with a 3-cm resolution orthomosaic RGB image gathered by the UAV in the same mission; the standard deviations of the errors are within 2 to 3 pixels. The results show that the errors arise from blurring and from band dislocation in the identification of object edges. Finally, the normalized difference vegetation index (NDVI) was extracted from the image to explore the condition of the vegetation and the nature of the environment. This study demonstrates the feasibility and capability of high-resolution multispectral imaging from a UAS.
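The NDVI extraction mentioned above reduces to a simple band ratio; which Micro-MCA6 bands serve as red and NIR is left to the caller in this sketch:

import numpy as np

def ndvi(red_band, nir_band):
    # Normalized difference vegetation index from red and NIR reflectance.
    red = red_band.astype(np.float64)
    nir = nir_band.astype(np.float64)
    return (nir - red) / np.clip(nir + red, 1e-6, None)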
NASA Technical Reports Server (NTRS)
Matic, Roy M.; Mosley, Judith I.
1994-01-01
Future space-based remote sensing systems will have data transmission requirements that exceed available downlinks, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques, including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.
USDA-ARS?s Scientific Manuscript database
The Lower Rio Grande Valley in the south of Texas is experiencing rapid increase of population to bring up urban growth that continues influencing on the irrigation districts in the region. This study evaluated the Landsat satellite multi-spectral imagery to provide information for GIS-based urbaniz...
van den Berg, Nynke S; Buckle, Tessa; KleinJan, Gijs H; van der Poel, Henk G; van Leeuwen, Fijs W B
2017-07-01
During (robot-assisted) sentinel node (SN) biopsy procedures, intraoperative fluorescence imaging can be used to enhance radioguided SN excision. For this, combined pre- and intraoperative SN identification was realized using the hybrid SN tracer indocyanine green-99mTc-nanocolloid. Combining this dedicated SN tracer with a lymphangiographic tracer such as fluorescein may further enhance the accuracy of SN biopsy. Clinical evaluation of a multispectral fluorescence-guided surgery approach using the dedicated SN tracer ICG-99mTc-nanocolloid, the lymphangiographic tracer fluorescein, and a commercially available fluorescence laparoscope. Pilot study in ten patients with prostate cancer. Following ICG-99mTc-nanocolloid administration and preoperative lymphoscintigraphy and single-photon emission computed tomography imaging, the number and location of SNs were determined. Fluorescein was injected intraprostatically immediately after the patient was anesthetized. A multispectral fluorescence laparoscope was used intraoperatively to identify both fluorescent signatures. Multispectral fluorescence imaging during robot-assisted radical prostatectomy with extended pelvic lymph node dissection and SN biopsy. (1) Number and location of preoperatively identified SNs. (2) Number and location of SNs intraoperatively identified via ICG-99mTc-nanocolloid imaging. (3) Rate of intraoperative lymphatic duct identification via fluorescein imaging. (4) Tumor status of excised (sentinel) lymph node(s). (5) Postoperative complications and follow-up. Near-infrared fluorescence imaging of ICG-99mTc-nanocolloid visualized 85.3% of the SNs. In 8/10 patients, fluorescein imaging allowed bright and accurate identification of lymphatic ducts, although higher background staining and tracer washout were observed. The main limitation is the small patient population. Our findings indicate that a lymphangiographic tracer can provide additional information during SN biopsy based on ICG-99mTc-nanocolloid. The study suggests that multispectral fluorescence image-guided surgery is clinically feasible. We evaluated the concept of surgical fluorescence guidance using differently colored dyes that visualize complementary features. In the future this concept may provide better guidance towards diseased tissue while sparing healthy tissue, and could thus improve functional and oncologic outcomes. Copyright © 2016 European Association of Urology. Published by Elsevier B.V. All rights reserved.
The development of a specialized processor for a space-based multispectral earth imager
NASA Astrophysics Data System (ADS)
Khedr, Mostafa E.
2008-10-01
This work was done in the Department of Computer Engineering, Lvov Polytechnic National University, Lvov, Ukraine, as a thesis entitled "Space Imager Computer System for Raw Video Data Processing" [1]. This work describes the synthesis and practical implementation of a specialized computer system for raw data control and processing onboard a satellite multispectral earth imager. The computer system is intended for satellites with resolution in the range of one meter and 12-bit precision. The design is based mostly on general off-the-shelf components such as FPGAs, plus custom-designed software for interfacing with a PC and test equipment. The designed system was successfully manufactured and is now fully functioning in orbit.
Initial clinical testing of a multi-spectral imaging system built on a smartphone platform
NASA Astrophysics Data System (ADS)
Mink, Jonah W.; Wexler, Shraga; Bolton, Frank J.; Hummel, Charles; Kahn, Bruce S.; Levitz, David
2016-03-01
Multi-spectral imaging systems are often expensive and bulky. An innovative multi-spectral imaging system was fitted onto a mobile colposcope, an imaging system built around a smartphone, in order to image the uterine cervix from outside the body. The multi-spectral mobile colposcope (MSMC) acquires images at different wavelengths. This paper presents the clinical testing of MSMC imaging (technical validation of the MSMC system is described elsewhere [1]). Patients who were referred to colposcopy following an abnormal screening test (Pap or HPV DNA test) according to the standard of care were enrolled. Multi-spectral image sets of the cervix were acquired, consisting of images from the various wavelengths. Image acquisition took 1-2 sec. Areas suspected for dysplasia under white-light imaging were biopsied, according to the standard of care. Biopsied sites were recorded on a clock-face map of the cervix. Following the procedure, MSMC data from the biopsied sites were processed. To date, the initial histopathological results are still outstanding. Qualitatively, structures in the cervical images were sharper at lower wavelengths than at higher wavelengths. Patients tolerated imaging well. The results suggest MSMC holds promise for cervical imaging.
The trophic classification of lakes using ERTS multispectral scanner data
NASA Technical Reports Server (NTRS)
Blackwell, R. J.; Boland, D. H.
1975-01-01
Lake classification methods based on the use of ERTS data are described. Preliminary classification results obtained by multispectral and digital image processing techniques indicate satisfactory correlation between ERTS data and EPA-supplied water analysis. Techniques for determining lake trophic levels using ERTS data are examined, and data obtained for 20 lakes are discussed.
USDA-ARS?s Scientific Manuscript database
Citrus greening or Huanglongbing (HLB) is a devastating disease spread in many citrus groves since first found in 2005 in Florida. Multispectral (MS) and hyperspectral (HS) airborne images of citrus groves in Florida were taken to detect citrus greening infected trees in 2007 and 2010. Ground truthi...
NASA Astrophysics Data System (ADS)
Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz
2015-10-01
In this paper, a new Spectral-Unmixing-based approach, using Nonnegative Matrix Factorization (NMF), is proposed to locally multi-sharpen hyperspectral data by integrating a Digital Surface Model (DSM) obtained from LIDAR data. In this new approach, the nature of the local mixing model is detected by using the local variance of the object elevations. The hyper/multispectral images are explored using small zones. In each zone, the variance of the object elevations is calculated from the DSM data in this zone. This variance is compared to a threshold value and the adequate linear/linear-quadratic spectral unmixing technique is used in the considered zone to independently unmix hyperspectral and multispectral data, using an adequate linear/linear-quadratic NMF-based approach. The obtained spectral and spatial information thus respectively extracted from the hyper/multispectral images are then recombined in the considered zone, according to the selected mixing model. Experiments based on synthetic hyper/multispectral data are carried out to evaluate the performance of the proposed multi-sharpening approach and of literature linear/linear-quadratic approaches used on the whole hyper/multispectral data. In these experiments, real DSM data are used to generate synthetic data containing linear and linear-quadratic mixed pixel zones. The DSM data are also used for locally detecting the nature of the mixing model in the proposed approach. Globally, the proposed approach yields good spatial and spectral fidelities for the multi-sharpened data and significantly outperforms the literature methods used.
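For the linear-mixing case, the NMF factorization underlying such unmixing could be sketched with scikit-learn on the pixels of one local zone; the initialization and iteration settings are assumptions, and the linear-quadratic variant used by the authors is not shown:

import numpy as np
from sklearn.decomposition import NMF

def linear_nmf_unmix(zone_pixels, n_endmembers, max_iter=500):
    # zone_pixels: (n_pixels, n_bands) nonnegative spectra from one local zone.
    # Factorizes X ~ A @ S, with A the per-pixel abundances and S the endmember spectra.
    model = NMF(n_components=n_endmembers, init="nndsvda", max_iter=max_iter)
    abundances = model.fit_transform(zone_pixels)   # (n_pixels, n_endmembers)
    endmembers = model.components_                   # (n_endmembers, n_bands)
    return abundances, endmembers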
MOVING BEYOND COLOR: THE CASE FOR MULTISPECTRAL IMAGING IN BRIGHTFIELD PATHOLOGY
Cukierski, William J.; Qi, Xin; Foran, David J.
2009-01-01
A multispectral camera is capable of imaging a histologic slide at narrow bandwidths over the range of the visible spectrum. While several uses for multispectral imaging (MSI) have been demonstrated in pathology [1, 2], there is no unified consensus over when and how MSI might benefit automated analysis [3, 4]. In this work, we use a linear-algebra framework to investigate the relationship between the spectral image and its standard-image counterpart. The multispectral “cube” is treated as an extension of a traditional image in a high-dimensional color space. The concept of metamers is introduced and used to derive regions of the visible spectrum where MSI may provide an advantage. Furthermore, histological stains which are amenable to analysis by MSI are reported. We show the Commission internationale de l’éclairage (CIE) 1931 transformation from spectrum to color is non-neighborhood preserving. Empirical results are demonstrated on multispectral images of peripheral blood smears. PMID:19997528
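The projection from a measured spectrum to CIE 1931 tristimulus values that this argument rests on can be written (in standard notation, not the paper's) as

X = \int S(\lambda)\,\bar{x}(\lambda)\,d\lambda, \qquad
Y = \int S(\lambda)\,\bar{y}(\lambda)\,d\lambda, \qquad
Z = \int S(\lambda)\,\bar{z}(\lambda)\,d\lambda ,

so two stain spectra S_1(\lambda) \neq S_2(\lambda) are metamers whenever they integrate to the same (X, Y, Z); a multispectral camera samples S(\lambda) directly at narrow bands and can therefore separate stains that a standard color image cannot.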
Hybrid Image Fusion for Sharpness Enhancement of Multi-Spectral Lunar Images
NASA Astrophysics Data System (ADS)
Awumah, Anna; Mahanti, Prasun; Robinson, Mark
2016-10-01
Image fusion enhances the sharpness of a multi-spectral (MS) image by incorporating spatial details from a higher-resolution panchromatic (Pan) image [1,2]. Known applications of image fusion to planetary images are rare, although image fusion is well known for its applications in Earth-based remote sensing. In a recent work [3], six different image fusion algorithms were implemented and their performances were verified with images from the Lunar Reconnaissance Orbiter (LRO) Camera. The image fusion procedure obtained a high-resolution multi-spectral (HRMS) product from LRO Narrow Angle Camera (used as Pan) and LRO Wide Angle Camera (used as MS) images. The results showed that the Intensity-Hue-Saturation (IHS) algorithm yields a product of high spatial quality, while the wavelet-based image fusion algorithm best preserves spectral quality among all the algorithms. In this work we show the results of a hybrid IHS-wavelet image fusion algorithm applied to LROC MS images. The hybrid method provides the best HRMS product, both in terms of spatial resolution and preservation of spectral details. Results from hybrid image fusion can enable new science and increase the science return from existing LROC images. [1] Pohl, C., and John L. Van Genderen. "Review article: multisensor image fusion in remote sensing: concepts, methods and applications." International Journal of Remote Sensing 19.5 (1998): 823-854. [2] Zhang, Yun. "Understanding image fusion." Photogrammetric Engineering & Remote Sensing 70.6 (2004): 657-661. [3] Mahanti, Prasun, et al. "Enhancement of spatial resolution of the LROC Wide Angle Camera images." XXIII ISPRS Congress Archives (2016).
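For readers unfamiliar with IHS fusion, the following is a minimal, generic pan-sharpening sketch, not the LROC pipeline of [3]: the MS image is upsampled to the Pan grid, its intensity channel is replaced by a histogram-matched Pan image, and the result is transformed back. The HSV value channel is used here as a stand-in for the intensity component.

```python
# A minimal, generic IHS-style pan-sharpening sketch (not the LROC pipeline).
# `ms` is a low-resolution 3-band image and `pan` a high-resolution
# panchromatic image, both floats in [0, 1].
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb
from skimage.exposure import match_histograms
from skimage.transform import resize

def ihs_fuse(ms, pan):
    ms_up = resize(ms, pan.shape + (3,))              # upsample MS to the Pan grid
    hsv = rgb2hsv(ms_up)
    hsv[..., 2] = match_histograms(pan, hsv[..., 2])  # substitute matched Pan for intensity
    return hsv2rgb(hsv)
```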
NASA Astrophysics Data System (ADS)
Matikainen, L.; Karila, K.; Hyyppä, J.; Puttonen, E.; Litkey, P.; Ahokas, E.
2017-10-01
This article summarises our first results and experiences on the use of multispectral airborne laser scanner (ALS) data. Optech Titan multispectral ALS data over a large suburban area in Finland were acquired on three different dates in 2015-2016. We investigated the feasibility of the data from the first date for land cover classification and road mapping. Object-based analyses with segmentation and random forests classification were used. The potential of the data for change detection of buildings and roads was also demonstrated. The overall accuracy of land cover classification results with six classes was 96 % compared with validation points. The data also showed high potential for road detection, road surface classification and change detection. The multispectral intensity information appeared to be very important for automated classifications. Compared to passive aerial images, the intensity images have interesting advantages, such as the lack of shadows. Currently, we focus on analyses and applications with the multitemporal multispectral data. Important questions include, for example, the potential and challenges of the multitemporal data for change detection.
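A minimal sketch of the random forests classification stage is shown below, assuming per-segment features have already been extracted; synthetic stand-in data are used, and the feature set, class count, and training split are illustrative assumptions rather than those of the study.

```python
# A minimal sketch with synthetic stand-in features; a real run would use
# per-segment attributes (mean band intensities, heights, etc.) and labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=12, n_informative=8,
                           n_classes=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```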
Multispectral Image Processing for Plants
NASA Technical Reports Server (NTRS)
Miles, Gaines E.
1991-01-01
The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant-based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.
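One concrete example of a spectral plant-health feature that such a system could compute is a normalized difference vegetation index; the abstract does not name a specific index, so the sketch below is purely illustrative.

```python
# Illustrative only: the normalized difference vegetation index (NDVI) from
# co-registered red and near-infrared band images; the abstract does not
# specify which features the system computes.
import numpy as np

def ndvi(red, nir, eps=1e-6):
    red, nir = red.astype(float), nir.astype(float)
    return (nir - red) / (nir + red + eps)
```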
A Multispectral Micro-Imager for Lunar Field Geology
NASA Technical Reports Server (NTRS)
Nunez, Jorge; Farmer, Jack; Sellar, Glenn; Allen, Carlton
2009-01-01
Field geologists routinely assign rocks to one of three basic petrogenetic categories (igneous, sedimentary or metamorphic) based on microtextural and mineralogical information acquired with a simple magnifying lens. Indeed, such observations often comprise the core of interpretations of geological processes and history. The Multispectral Microscopic Imager (MMI) uses multi-wavelength, light-emitting diodes (LEDs) and a substrate-removed InGaAs focal-plane array to create multispectral, microscale reflectance images of geological samples (FOV 32 x 40 mm). Each pixel (62.5 microns) of an image comprises 21 spectral bands that extend from 470 to 1750 nm, enabling the discrimination of a wide variety of rock-forming minerals, especially Fe-bearing phases. MMI images provide crucial context information for in situ robotic analyses using other onboard analytical instruments (e.g. XRD), or for the selection of return samples for analysis in terrestrial labs. To further assess the value of the MMI as a tool for lunar exploration, we used a field-portable, tripod-mounted version of the MMI to image a variety of Apollo samples housed at the Lunar Experiment Laboratory at NASA's Johnson Space Center. MMI images faithfully resolved the microtextural features of the samples, while the application of ENVI-based spectral end-member mapping methods revealed the distribution of Fe-bearing mineral phases (olivine, pyroxene and magnetite), along with plagioclase feldspars, within the samples. Samples included a broad range of lithologies and grain sizes. Our MMI-based petrogenetic interpretations compared favorably with thin section-based descriptions published in the Lunar Sample Compendium, revealing the value of MMI images for astronaut- and rover-mediated lunar exploration.
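As a simplified stand-in for the end-member mapping workflow mentioned above, abundance maps can be estimated per pixel with nonnegative least squares against a library of end-member spectra; the inputs (image cube and end-member spectra) are hypothetical and this is not the ENVI implementation.

```python
# A simplified stand-in for end-member abundance mapping: per-pixel
# nonnegative least squares against a library of end-member spectra.
# `cube` (rows x cols x bands) and `endmembers` (k x bands) are hypothetical.
import numpy as np
from scipy.optimize import nnls

def abundance_maps(cube, endmembers):
    rows, cols, bands = cube.shape
    k = endmembers.shape[0]
    maps = np.zeros((rows, cols, k))
    for r in range(rows):
        for c in range(cols):
            maps[r, c], _ = nnls(endmembers.T, cube[r, c])  # bands x k system
    return maps
```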
DOE Office of Scientific and Technical Information (OSTI.GOV)
Virnstein, R.; Tepera, M.; Beazley, L.
1997-06-01
A pilot study is briefly summarized in the article. The study tested the potential of multi-spectral digital imagery for discrimination of seagrass densities and species, algae, and bottom types. Imagery was obtained with the Compact Airborne Spectral Imager (casi), and two flight lines were flown in hyper-spectral mode. The photogrammetric method used allowed interpretation of the highest-quality product, eliminating limitations caused by outdated or poor-quality base maps and the errors associated with transfer of polygons. Initial image analysis indicates that the multi-spectral imagery has several advantages, including sophisticated spectral signature recognition and classification, ease of geo-referencing, and rapid mosaicking.
Visible and Extended Near-Infrared Multispectral Imaging for Skin Cancer Diagnosis
Rey-Barroso, Laura; Burgos-Fernández, Francisco J.; Delpueyo, Xana; Ares, Miguel; Malvehy, Josep; Puig, Susana
2018-01-01
With the goal of diagnosing skin cancer in an early and noninvasive way, an extended near infrared multispectral imaging system based on an InGaAs sensor with sensitivity from 995 nm to 1613 nm was built to evaluate deeper skin layers thanks to the higher penetration of photons at these wavelengths. The outcomes of this device were combined with those of a previously developed multispectral system that works in the visible and near infrared range (414 nm–995 nm). Both provide spectral and spatial information from skin lesions. A classification method to discriminate between melanomas and nevi was developed based on the analysis of first-order statistics descriptors, principal component analysis, and support vector machine tools. The system provided a sensitivity of 78.6% and a specificity of 84.6%, the latter one being improved with respect to that offered by silicon sensors. PMID:29734747
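A minimal sketch of the descriptor, PCA, and SVM pipeline outlined above is shown below using scikit-learn with synthetic stand-in features; the study's actual first-order descriptors, component count, and kernel settings are not reproduced here.

```python
# A minimal sketch with synthetic stand-in features; the study's actual
# descriptors, component count, and SVM settings are assumptions here.
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=40, random_state=0)  # lesion features
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf", C=1.0))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```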
Morphological Feature Extraction for Automatic Registration of Multispectral Images
NASA Technical Reports Server (NTRS)
Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.
2007-01-01
The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.
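As a simplified illustration of the landmark-extraction idea (not the authors' morphological algorithm), the sketch below scores pixels by their grayscale morphological gradient and keeps the strongest local maxima as candidate control points.

```python
# A simplified landmark-candidate extractor (not the authors' algorithm):
# score pixels by their grayscale morphological gradient and keep the
# strongest local maxima; `band` is one multispectral band as a 2-D array.
import numpy as np
from scipy import ndimage as ndi

def candidate_landmarks(band, size=7, n_points=50):
    grad = ndi.grey_dilation(band, size=size) - ndi.grey_erosion(band, size=size)
    is_max = grad == ndi.maximum_filter(grad, size=size)
    scores = np.where(is_max, grad, -np.inf)
    idx = np.argsort(scores, axis=None)[-n_points:]            # strongest candidates
    return np.column_stack(np.unravel_index(idx, band.shape))  # (row, col) pairs
```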
A multispectral imaging approach for diagnostics of skin pathologies
NASA Astrophysics Data System (ADS)
Lihacova, Ilze; Derjabo, Aleksandrs; Spigulis, Janis
2013-06-01
A noninvasive multispectral imaging method was applied to the diagnostics of different skin pathologies such as nevi, basal cell carcinoma, and melanoma. A melanoma diagnostic parameter, developed using three spectral bands (540 nm, 650 nm, and 950 nm), was calculated for nevi, melanomas, and basal cell carcinomas. A simple multispectral diagnostic device was built and applied to skin assessment. The development and application of the multispectral diagnostic method are described further in this article.
Deep multi-spectral ensemble learning for electronic cleansing in dual-energy CT colonography
NASA Astrophysics Data System (ADS)
Tachibana, Rie; Näppi, Janne J.; Hironaka, Toru; Kim, Se Hyung; Yoshida, Hiroyuki
2017-03-01
We developed a novel electronic cleansing (EC) method for dual-energy CT colonography (DE-CTC) based on an ensemble deep convolutional neural network (DCNN) and multi-spectral multi-slice image patches. In the method, an ensemble DCNN is used to classify each voxel of a DE-CTC image volume into five classes: luminal air, soft tissue, tagged fecal materials, partial-volume boundaries between air and tagging, and partial-volume boundaries between soft tissue and tagging. Each DCNN acts as a voxel classifier, where an input image patch centered at the voxel is generated as input to the DCNN. An image patch has three channels that are mapped from a region of interest containing the image plane of the voxel and the two adjacent image planes. Six different types of spectral input image datasets were derived using two dual-energy CT images, two virtual monochromatic images, and two material images. An ensemble DCNN was constructed by use of a meta-classifier that combines the outputs of multiple DCNNs, each of which was trained with a different type of multi-spectral image patches. The electronically cleansed CTC images were calculated by removal of regions classified as other than soft tissue, followed by a colon surface reconstruction. For pilot evaluation, 359 volumes of interest (VOIs) representing sources of subtraction artifacts observed in current EC schemes were sampled from 30 clinical CTC cases. Preliminary results showed that the ensemble DCNN can yield high accuracy in labeling of the VOIs, indicating that deep learning of multi-spectral EC with multi-slice imaging could accurately remove residual fecal materials from CTC images without generating major EC artifacts.
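A minimal PyTorch sketch of a single patch-based voxel classifier in the spirit of the ensemble described above is given below; the patch size, channel count, and layer widths are illustrative assumptions rather than the authors' architecture. The ensemble step would then combine the outputs of several such networks, each trained on a different spectral patch type, through a meta-classifier.

```python
# A minimal PyTorch sketch of one patch-based voxel classifier with five
# output classes; patch size, channel count, and layer widths are
# illustrative assumptions, not the authors' architecture.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    def __init__(self, in_channels=3, n_classes=5, patch=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * (patch // 4) ** 2, n_classes)

    def forward(self, x):                       # x: (batch, channels, patch, patch)
        return self.head(self.features(x).flatten(1))

logits = PatchClassifier()(torch.randn(8, 3, 32, 32))   # 8 patches -> (8, 5) class logits
```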
NASA Astrophysics Data System (ADS)
Broderson, D.; Dierking, C.; Stevens, E.; Heinrichs, T. A.; Cherry, J. E.
2016-12-01
The Geographic Information Network of Alaska (GINA) at the University of Alaska Fairbanks (UAF) uses two direct broadcast antennas to receive data from a number of polar-orbiting weather satellites, including the Suomi National Polar-orbiting Partnership (S-NPP) satellite. GINA uses data from S-NPP's Visible Infrared Imaging Radiometer Suite (VIIRS) to generate a variety of multispectral imagery products developed with the needs of the National Weather Service operational meteorologist in mind. Multispectral products have two primary advantages over single-channel products. First, they can more clearly highlight some terrain and meteorological features that are less evident in the component single channels. Second, multispectral products present the information from several bands in just one image, thereby sparing the meteorologist unnecessary time interrogating the component single bands individually. With 22 channels available from the VIIRS instrument, the number of possible multispectral products is theoretically huge. A small number of products will be emphasized in this presentation, with the products chosen based on their proven utility in the forecasting environment. Multispectral products can be generated upstream of the end user or by the end user at their own workstation. The advantages and disadvantages of both approaches will be outlined. Lastly, the technique of improving the appearance of multispectral imagery by correcting for atmospheric reflectance at the shorter wavelengths will be described.
Davis, Philip A.; Cagney, Laura E.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Nalbandon mineral district, which has lead and zinc deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2007, 2008, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (41 for Nalbandon) and the WGS84 datum. The final image mosaics were subdivided into ten overlapping tiles or quadrants because of the large size of the target area. The ten image tiles (or quadrants) for the Nalbandon area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Nalbandon study area, two subareas were designated for detailed field investigations (that is, the Nalbandon District and Gharghananaw-Gawmazar subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
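The overlap-based radiometric adjustment described above amounts to fitting a linear gain and offset between each new image and the standard image over their overlap; a minimal sketch with hypothetical input arrays (not the USGS code) is:

```python
# A minimal sketch (hypothetical arrays, not the USGS implementation): fit a
# linear gain/offset between a new image and the standard image over their
# co-registered overlap samples, then apply it to the whole new band.
import numpy as np

def adjust_to_standard(new_band, overlap_new, overlap_std):
    gain, offset = np.polyfit(overlap_new.ravel(), overlap_std.ravel(), deg=1)
    return gain * new_band + offset
```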
Davis, Philip A.; Cagney, Laura E.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Zarkashan mineral district, which has copper and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Zarkashan) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the Zarkashan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Zarkashan study area, three subareas were designated for detailed field investigations (that is, the Mine Area, Bolo Gold Prospect, and Luman-Tamaki Gold Prospect subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
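The local-area histogram stretch is, in essence, a moving-window contrast stretch; the sketch below uses a square window as a stand-in for the radius-based neighborhood of the algorithm described in Davis (2007).

```python
# A minimal moving-window contrast stretch; a square window stands in for
# the radius-based neighborhood of the published algorithm (Davis, 2007).
import numpy as np
from scipy import ndimage as ndi

def local_stretch(band, window=63, eps=1e-6):
    lo = ndi.minimum_filter(band, size=window)
    hi = ndi.maximum_filter(band, size=window)
    return np.clip((band - lo) / (hi - lo + eps), 0.0, 1.0)
```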
Davis, Philip A.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kandahar mineral district, which has bauxite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2006, 2007, 2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element.
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Kandahar) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Kandahar area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Kandahar study area, two subareas were designated for detailed field investigations (that is, the Obatu-Shela and Sekhab-Zamto Kalay subareas); these subareas were extracted from the area's image mosaic and are provided as separate embedded geotiff images.
Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Khanneshin mineral district, which has uranium, thorium, rare-earth-element, and apatite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Khanneshin) and the WGS84 datum. The final image mosaics were subdivided into nine overlapping tiles or quadrants because of the large size of the target area. The nine image tiles (or quadrants) for the Khanneshin area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Khanneshin study area, one subarea was designated for detailed field investigations (that is, the Khanneshin volcano subarea); this subarea was extracted from the area's image mosaic and is provided as separate embedded geotiff images.
Davis, Philip A.; Cagney, Laura E.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Panjsher Valley mineral district, which has emerald and silver-iron deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2009, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Panjsher Valley) and the WGS84 datum. The final image mosaics were subdivided into two overlapping tiles or quadrants because of the large size of the target area. The two image tiles (or quadrants) for the Panjsher Valley area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Panjsher Valley study area, two subareas were designated for detailed field investigations (that is, the Emerald and Silver-Iron subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
Davis, Philip A.
2014-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Farah mineral district, which has spectral reflectance anomalies indicative of copper, zinc, lead, silver, and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA, 2007, 2008, 2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. Therefore, it was necessary to (1) register the 10-m AVNIR multispectral imagery to a well-controlled Landsat image base, (2) mosaic the individual multispectral images into a single image of the entire area of interest, (3) register each panchromatic image to the registered multispectral image base, and (4) mosaic the individual panchromatic images into a single image of the entire area of interest. The two image-registration steps were facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element.
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Farah) and the WGS84 datum. The final image mosaics were subdivided into four overlapping tiles or quadrants because of the large size of the target area. The four image tiles (or quadrants) for the Farah area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Farah study area, five subareas were designated for detailed field investigations (that is, the FarahA through FarahE subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
NASA Astrophysics Data System (ADS)
Li, Jiao; Zhang, Songhe; Chekkoury, Andrei; Glasl, Sarah; Vetschera, Paul; Koberstein-Schwarz, Benno; Omar, Murad; Ntziachristos, Vasilis
2017-03-01
Multispectral optoacoustic mesoscopy (MSOM) has recently been introduced for cancer imaging; it has the potential for high-resolution imaging of cancer development in vivo at depths beyond the diffusion limit. Based on spectral features, optoacoustic imaging is capable of visualizing angiogenesis and imaging the cancer heterogeneity of malignant tumors through endogenous hemoglobin. However, high-resolution structural and functional imaging of whole tumor masses is limited by modest penetration and image quality, due to the insufficient capability of ultrasound detectors and the two-dimensional scan geometry. In this study, we introduce a novel multi-spectral optoacoustic mesoscopy (MSOM) system for imaging subcutaneous or orthotopic tumors implanted in lab mice, with a high-frequency ultrasound linear array and a conical scanning geometry. Detailed volumetric images of vasculature and tissue oxygen saturation in entire tumors are obtained in vivo, at depths of up to 10 mm with spatial resolutions approaching 70 μm. This performance enables the visualization of vasculature morphology and hypoxia conditions, which has been verified with ex vivo studies. These findings demonstrate the potential of MSOM for preclinical oncological studies in deep solid tumors, facilitating the characterization of tumor angiogenesis and the evaluation of treatment strategies.
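Functional images such as oxygen saturation are typically obtained by unmixing the per-voxel multiwavelength signal against hemoglobin absorption spectra; the sketch below shows a linear least-squares version with placeholder extinction values, not a reference table and not necessarily the authors' reconstruction pipeline.

```python
# A minimal sketch of per-voxel linear unmixing for oxygen saturation; the
# extinction matrix `E` holds placeholder magnitudes for [HbO2, Hb] at two
# wavelengths (illustrative values only, not a reference table).
import numpy as np

E = np.array([[500.0, 1400.0],     # ~750 nm: [HbO2, Hb] (placeholders)
              [1050.0, 700.0]])    # ~850 nm: [HbO2, Hb] (placeholders)

def so2(p):
    c, *_ = np.linalg.lstsq(E, p, rcond=None)   # c = [C_HbO2, C_Hb]
    return c[0] / (c[0] + c[1])

print(so2(np.array([800.0, 900.0])))            # fraction of oxygenated hemoglobin
```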
NASA Technical Reports Server (NTRS)
Blackwell, R. J.
1982-01-01
Remote sensing data analysis for water quality monitoring is evaluated. Data analysis and image processing techniques are applied to LANDSAT remote sensing data to produce an effective operational tool for lake water quality surveying and monitoring. Digital image processing and analysis techniques were designed, developed, tested, and applied to LANDSAT multispectral scanner (MSS) data and conventional surface-acquired data. Utilization of these techniques facilitates the surveying and monitoring of large numbers of lakes in an operational manner. Supervised multispectral classification, when used in conjunction with surface-acquired water quality indicators, is used to characterize water body trophic status. Unsupervised multispectral classification, when interpreted by lake scientists familiar with a specific water body, yields classifications of validity equal to supervised methods and in a more cost-effective manner. Image database technology is used to great advantage in characterizing other contributing effects on water quality. These effects include drainage basin configuration, terrain slope, soil, precipitation, and land cover characteristics.
Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kunduz mineral district, which has celestite deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Kunduz) and the WGS84 datum. The final image mosaics were subdivided into five overlapping tiles or quadrants because of the large size of the target area. The five image tiles (or quadrants) for the Kunduz area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Tourmaline mineral district, which has tin deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. 
Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Tourmaline) and the WGS84 datum. The final image mosaics were subdivided into four overlapping tiles or quadrants because of the large size of the target area. The four image tiles (or quadrants) for the Tourmaline area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
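To illustrate the local-area enhancement step described above, the following is a minimal Python sketch of a moving-window contrast stretch; it is not the Davis (2007) algorithm, and the window size, min/max stretch rule, and placeholder data are assumptions made only for illustration.

```python
# Minimal sketch of a local-area stretch (assumed form, not Davis, 2007):
# each pixel is linearly rescaled using the min/max of all pixels within a
# surrounding square window approximating the 500-m radius.
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def local_area_stretch(band, window=401, out_min=0.0, out_max=255.0):
    """Stretch one band using local minimum/maximum statistics."""
    band = band.astype(np.float64)
    lo = minimum_filter(band, size=window)
    hi = maximum_filter(band, size=window)
    scale = (out_max - out_min) / np.maximum(hi - lo, 1e-12)
    return np.clip((band - lo) * scale + out_min, out_min, out_max)

if __name__ == "__main__":
    mosaic_band = np.random.rand(512, 512)   # placeholder for a reflectance band
    enhanced = local_area_stretch(mosaic_band, window=101)
```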
Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Dudkash mineral district, which has industrial mineral deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2006,2007,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Dudkash) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Dudkash area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
NASA Astrophysics Data System (ADS)
Wicaksono, Pramaditya; Salivian Wisnu Kumara, Ignatius; Kamal, Muhammad; Afif Fauzan, Muhammad; Zhafarina, Zhafirah; Agus Nurswantoro, Dwi; Noviaris Yogyantoro, Rifka
2017-12-01
Although spectrally different, seagrass species may not be mappable from multispectral remote sensing images because of the limited spectral resolution of those images. It is therefore important to quantitatively assess the feasibility of mapping seagrass species with multispectral images by resampling seagrass species spectra to multispectral bands. Seagrass species spectra were measured on harvested seagrass leaves. The spectral resolutions of the multispectral images used in this research were adopted from WorldView-2, QuickBird, Sentinel-2A, ASTER VNIR, and Landsat 8 OLI. These images are widely available and provide a good representative baseline for previous and future remote sensing images. The seagrass species considered in this research are Enhalus acoroides (Ea), Thalassodendron ciliatum (Tc), Thalassia hemprichii (Th), Cymodocea rotundata (Cr), Cymodocea serrulata (Cs), Halodule uninervis (Hu), Halodule pinifolia (Hp), Syringodium isoetifolium (Si), Halophila ovalis (Ho), and Halophila minor (Hm). The multispectral resampling analysis indicates that the resampled spectra exhibit a shape and pattern similar to the original spectra but are less precise and lose the unique absorption features of the seagrass species. Relying on spectral bands alone, multispectral images are not effective in mapping these seagrass species individually, as shown by the poor and inconsistent results of the Spectral Angle Mapper (SAM) classification technique when classifying seagrass species using the species spectra as pure endmembers. Only Sentinel-2A produced an acceptable classification result using SAM.
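As a rough illustration of the resampling-and-SAM workflow described in the abstract above, the sketch below averages a field spectrum over broad band intervals and classifies a pixel by its spectral angle to reference endmembers; the band edges, spectra, and species labels are purely illustrative assumptions, not the sensor response functions or measurements used in the study.

```python
import numpy as np

def resample_to_bands(wavelengths, reflectance, band_edges):
    """Average a fine-resolution spectrum over broad band intervals
    (a crude stand-in for convolving with real sensor response functions)."""
    return np.array([
        reflectance[(wavelengths >= lo) & (wavelengths < hi)].mean()
        for lo, hi in band_edges
    ])

def spectral_angle(x, r):
    """Spectral Angle Mapper: angle (radians) between pixel x and reference r."""
    cos = np.dot(x, r) / (np.linalg.norm(x) * np.linalg.norm(r))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(pixel, endmembers):
    """Assign the pixel to the endmember with the smallest spectral angle."""
    angles = {name: spectral_angle(pixel, spec) for name, spec in endmembers.items()}
    return min(angles, key=angles.get)

# Illustrative use with made-up numbers
wl = np.arange(400, 901)                                   # nm
spectrum = 0.2 + 0.1 * np.sin(wl / 80.0)                   # placeholder leaf spectrum
bands = [(450, 520), (520, 600), (630, 690), (760, 900)]   # assumed band edges
pixel = resample_to_bands(wl, spectrum, bands)
endmembers = {"Ea": pixel * 1.02, "Th": pixel * 0.85}      # hypothetical references
print(sam_classify(pixel, endmembers))
```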
Clancy, Neil T.; Stoyanov, Danail; James, David R. C.; Di Marco, Aimee; Sauvage, Vincent; Clark, James; Yang, Guang-Zhong; Elson, Daniel S.
2012-01-01
Sequential multispectral imaging is an acquisition technique that involves collecting images of a target at different wavelengths, to compile a spectrum for each pixel. In surgical applications it suffers from low illumination levels and motion artefacts. A three-channel rigid endoscope system has been developed that allows simultaneous recording of stereoscopic and multispectral images. Salient features on the tissue surface may be tracked during the acquisition in the stereo cameras and, using multiple camera triangulation techniques, this information is used to align the multispectral images automatically even though the tissue or camera is moving. This paper describes a detailed validation of the set-up in a controlled experiment before presenting the first in vivo use of the device in a porcine minimally invasive surgical procedure. Multispectral images of the large bowel were acquired and used to extract the relative concentration of haemoglobin in the tissue despite motion due to breathing during the acquisition. Using the stereoscopic information it was also possible to overlay the multispectral information on the reconstructed 3D surface. This experiment demonstrates the ability of this system to measure blood perfusion changes in the tissue during surgery and its potential use as a platform for other sequential imaging modalities. PMID:23082296
Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.; Davis, Philip A.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Nuristan mineral district, which has gem, lithium, and cesium deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. All available panchromatic images for this area had significant cloud and snow cover that precluded their use for resolution enhancement of the multispectral image data. Each of the four-band images within the 10-m image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Nuristan) and the WGS84 datum. The final image mosaics for the Nuristan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image.
NASA Astrophysics Data System (ADS)
Matsuoka, M.
2012-07-01
A considerable number of methods for pansharpening remote-sensing images have been developed to generate higher spatial resolution multispectral images by the fusion of lower resolution multispectral images and higher resolution panchromatic images. Because pansharpening alters the spectral properties of multispectral images, method selection is one of the key factors influencing the accuracy of subsequent analyses such as land-cover classification or change detection. In this study, seven pixel-based pansharpening methods (additive wavelet intensity, additive wavelet principal component, generalized Laplacian pyramid with spectral distortion minimization, generalized intensity-hue-saturation (GIHS) transform, GIHS adaptive, Gram-Schmidt spectral sharpening, and block-based synthetic variable ratio) were compared using AVNIR-2 and PRISM onboard ALOS from the viewpoint of the preservation of spectral properties of AVNIR-2. A visual comparison was made between pansharpened images generated from spatially degraded AVNIR-2 and original images over urban, agricultural, and forest areas. The similarity of the images was evaluated in terms of the image contrast, the color distinction, and the brightness of the ground objects. In the quantitative assessment, three kinds of statistical indices, correlation coefficient, ERGAS, and Q index, were calculated by band and land-cover type. These scores were relatively superior in bands 2 and 3 compared with the other two bands, especially over urban and agricultural areas. Band 4 showed a strong dependency on the land-cover type. This was attributable to the differences in the observing spectral wavelengths of the sensors and local scene variances.
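For reference, the ERGAS index mentioned above can be computed as in the following sketch (a standard formulation; the synthetic data and the PRISM/AVNIR-2 resolution ratio are shown only for illustration).

```python
import numpy as np

def ergas(reference, fused, ratio):
    """ERGAS: relative dimensionless global error in synthesis.
    reference, fused: arrays of shape (bands, rows, cols); ratio = pan_res / ms_res."""
    ref = reference.astype(np.float64)
    fus = fused.astype(np.float64)
    band_terms = []
    for k in range(ref.shape[0]):
        rmse = np.sqrt(np.mean((ref[k] - fus[k]) ** 2))
        band_terms.append((rmse / ref[k].mean()) ** 2)
    return 100.0 * ratio * np.sqrt(np.mean(band_terms))

# Example with synthetic 4-band images (2.5-m PRISM vs 10-m AVNIR-2)
ref = np.random.rand(4, 64, 64)
fus = ref + 0.01 * np.random.randn(4, 64, 64)
print(ergas(ref, fus, ratio=2.5 / 10.0))
```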
Liu, Cheng; Li, Shiying; Gu, Yanjuan; Xiong, Huahua; Wong, Wing-Tak; Sun, Lei
2018-05-07
Tumor proteases have been recognized as significant regulators in the tumor microenvironment, but current strategies for in vivo protease imaging have tended to focus on probe design rather than on investigating novel imaging strategies that leverage both the imaging technique and the probe. Herein, we report the first investigation of the ability of multispectral photoacoustic imaging (PAI) to estimate the distribution of protease cleavage sites inside living tumor tissue by using an activatable photoacoustic (PA) probe. The protease MMP-2 is selected as the target. In this probe, gold nanocages (GNCs) with an absorption peak at ~800 nm and fluorescent dye molecules with an absorption peak at ~680 nm are conjugated via a specific enzymatic peptide substrate. Upon enzymatic activation by MMP-2, the peptide substrate is cleaved and the chromophores are released. Because of the different retention speeds of the large GNCs and the small dye molecules, the probe alters its intrinsic absorption profile and produces a distinct change in the PA signal. A multispectral PAI technique that can distinguish different chromophores based on their intrinsic PA spectral signatures is applied to estimate the changes in signal composition and indicate the cleavage interaction sites. Finally, the multispectral PAI technique with the activatable probe is tested in solution, in cultured cells, and in a subcutaneous tumor model in vivo. Experiments in solution (enzyme ± inhibitor), in cell culture (± inhibitor), and in the in vivo tumor model (probe administered ± inhibitor) demonstrated that the probe was cleaved by the targeted enzyme. In particular, the in vivo estimate of the cleavage site distribution was validated against ex vivo immunohistochemistry analysis. This novel synergy of the multispectral PAI technique and the activatable probe is a potential strategy for estimating the distribution of tumor protease activity in vivo.
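The spectral distinction between chromophores described above is commonly handled with linear least-squares unmixing; the sketch below shows the idea, with hypothetical absorption profiles for the GNCs and the dye rather than measured spectra.

```python
import numpy as np

# Hypothetical absorption profiles sampled at the PA excitation wavelengths (nm);
# real spectra would come from calibration measurements of the probe components.
wavelengths = np.array([680, 710, 740, 770, 800])
spectrum_gnc = np.array([0.4, 0.6, 0.8, 0.9, 1.0])   # peaks near 800 nm
spectrum_dye = np.array([1.0, 0.7, 0.4, 0.2, 0.1])   # peaks near 680 nm

def unmix(pa_signal):
    """Least-squares estimate of chromophore contributions from a
    multispectral PA signal (one value per wavelength)."""
    A = np.column_stack([spectrum_gnc, spectrum_dye])
    coeffs, *_ = np.linalg.lstsq(A, pa_signal, rcond=None)
    return np.clip(coeffs, 0.0, None)   # crude non-negativity constraint

measured = 0.7 * spectrum_gnc + 0.3 * spectrum_dye + 0.02 * np.random.randn(5)
print(unmix(measured))                  # approximately [0.7, 0.3]
```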
Farmer, Jack D.; Sellar, R. Glenn; Swayze, Gregg A.; Blaney, Diana L.
2014-01-01
Abstract Future astrobiological missions to Mars are likely to emphasize the use of rovers with in situ petrologic capabilities for selecting the best samples at a site for in situ analysis with onboard lab instruments or for caching for potential return to Earth. Such observations are central to an understanding of the potential for past habitable conditions at a site and for identifying samples most likely to harbor fossil biosignatures. The Multispectral Microscopic Imager (MMI) provides multispectral reflectance images of geological samples at the microscale, where each image pixel is composed of a visible/shortwave infrared spectrum ranging from 0.46 to 1.73 μm. This spectral range enables the discrimination of a wide variety of rock-forming minerals, especially Fe-bearing phases, and the detection of hydrated minerals. The MMI advances beyond the capabilities of current microimagers on Mars by extending the spectral range into the infrared and increasing the number of spectral bands. The design employs multispectral light-emitting diodes and an uncooled indium gallium arsenide focal plane array to achieve a very low mass and high reliability. To better understand and demonstrate the capabilities of the MMI for future surface missions to Mars, we analyzed samples from Mars-relevant analog environments with the MMI. Results indicate that the MMI images faithfully resolve the fine-scale microtextural features of samples and provide important information to help constrain mineral composition. The use of spectral endmember mapping reveals the distribution of Fe-bearing minerals (including silicates and oxides) with high fidelity, along with the presence of hydrated minerals. MMI-based petrogenetic interpretations compare favorably with laboratory-based analyses, revealing the value of the MMI for future in situ rover-mediated astrobiological exploration of Mars. Key Words: Mars—Microscopic imager—Multispectral imaging—Spectroscopy—Habitability—Arm instrument. Astrobiology 14, 132–169. PMID:24552233
Semi-supervised classification tool for DubaiSat-2 multispectral imagery
NASA Astrophysics Data System (ADS)
Al-Mansoori, Saeed
2015-10-01
This paper addresses a semi-supervised classification tool based on a pixel-based approach to multispectral satellite imagery. There are not many studies demonstrating such an algorithm for multispectral images, especially when the image consists of 4 bands (Red, Green, Blue and Near Infrared), as in DubaiSat-2 satellite images. The proposed approach utilizes both unsupervised and supervised classification schemes sequentially to identify four classes in the image, namely, water bodies, vegetation, land (developed and undeveloped areas) and paved areas (i.e. roads). The unsupervised classification is applied to identify two classes, water bodies and vegetation, based on a well-known index, the Normalized Difference Vegetation Index (NDVI), which exploits the distinct absorption and reflection of visible and near-infrared sunlight by plants. Afterward, the supervised classification is performed by selecting homogeneous training samples for roads and land areas. Here, a precise selection of training samples plays a vital role in the classification accuracy. Post-classification is finally performed to enhance the classification accuracy, where the classified image is sieved, clumped and filtered before producing the final output. Overall, the supervised classification approach produced higher accuracy than the unsupervised method. This paper presents current preliminary research results that point out the effectiveness of the proposed technique.
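A minimal sketch of the NDVI-based unsupervised stage is shown below; the thresholds for water and vegetation are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def ndvi(red, nir, eps=1e-6):
    """Normalized Difference Vegetation Index from red and NIR bands."""
    return (nir - red) / (nir + red + eps)

def coarse_classes(red, nir, water_thresh=0.0, veg_thresh=0.3):
    """Label water and vegetation from NDVI; thresholds are illustrative only.
    Remaining pixels would pass to the supervised land/road stage."""
    index = ndvi(red.astype(float), nir.astype(float))
    labels = np.full(index.shape, "other", dtype=object)
    labels[index < water_thresh] = "water"
    labels[index > veg_thresh] = "vegetation"
    return labels

red = np.random.rand(4, 4)    # placeholder red band
nir = np.random.rand(4, 4)    # placeholder near-infrared band
print(coarse_classes(red, nir))
```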
Detecting early stage pressure ulcer on dark skin using multispectral imager
NASA Astrophysics Data System (ADS)
Yi, Dingrong; Kong, Linghua; Sprigle, Stephen; Wang, Fengtao; Wang, Chao; Liu, Fuhan; Adibi, Ali; Tummala, Rao
2010-02-01
We are developing a handheld multispectral imaging device to non-invasively inspect stage I pressure ulcers in darkly pigmented skin without touching the patient's skin. This paper reports preliminary test results obtained using a proof-of-concept prototype. It also discusses the innovation's impact on traditional multispectral imaging technologies and the fields that could potentially benefit from it.
An automated procedure for detection of IDP's dwellings using VHR satellite imagery
NASA Astrophysics Data System (ADS)
Jenerowicz, Malgorzata; Kemper, Thomas; Soille, Pierre
2011-11-01
This paper presents results for the estimation of dwelling structures in the Al Salam IDP Camp, Southern Darfur, based on Very High Resolution multispectral satellite images and obtained through Mathematical Morphology analysis. A series of image processing procedures, feature extraction methods and textural analyses have been applied in order to provide reliable information about dwelling structures. One issue in this context is the similarity of the spectral response of thatched dwelling roofs and their surroundings in the IDP camps, which makes the exploitation of multispectral information crucial. This study shows the advantage of the automatic extraction approach and highlights the importance of detailed spatial and spectral information analysis based on a multi-temporal dataset. The additional fusion of the high-resolution panchromatic band with the lower-resolution multispectral bands of the WorldView-2 satellite has a positive influence on the results and can thereby be useful for humanitarian aid agencies, supporting decisions and population estimates, especially in situations where frequent revisits by spaceborne imaging systems are the only possibility for continued monitoring.
Fuzzy Markov random fields versus chains for multispectral image segmentation.
Salzenstein, Fabien; Collet, Christophe
2006-11-01
This paper deals with a comparison of recent statistical models based on fuzzy Markov random fields and chains for multispectral image segmentation. The fuzzy scheme takes into account discrete and continuous classes which model the imprecision of the hidden data. In this framework, we assume the dependence between bands and we express the general model for the covariance matrix. A fuzzy Markov chain model is developed in an unsupervised way. This method is compared with the fuzzy Markovian field model previously proposed by one of the authors. The segmentation task is processed with Bayesian tools, such as the well-known MPM (Mode of Posterior Marginals) criterion. Our goal is to compare the robustness and rapidity for both methods (fuzzy Markov fields versus fuzzy Markov chains). Indeed, such fuzzy-based procedures seem to be a good answer, e.g., for astronomical observations when the patterns present diffuse structures. Moreover, these approaches allow us to process missing data in one or several spectral bands which correspond to specific situations in astronomy. To validate both models, we perform and compare the segmentation on synthetic images and raw multispectral astronomical data.
NASA Astrophysics Data System (ADS)
Liu, Xin; Samil Yetik, Imam
2012-04-01
Use of multispectral magnetic resonance imaging has received great interest for prostate cancer localization in research and clinical studies. Manual extraction of prostate tumors from multispectral magnetic resonance imaging is inefficient and subjective, while automated segmentation is objective and reproducible. For supervised, automated segmentation approaches, learning is essential to obtain information from the training dataset. However, in this procedure, all patients are assumed to have similar properties for the tumor and normal tissues, and the segmentation performance suffers because the variations across patients are ignored. To overcome this difficulty, we propose a new iterative normalization method based on relative intensity values of tumor and normal tissues to normalize multispectral magnetic resonance images and improve segmentation performance. The idea of relative intensity mimics the manual segmentation performed by human readers, who compare the contrast between regions without knowing the actual intensity values. We compare the segmentation performance of the proposed method with that of z-score normalization followed by support vector machine, local active contours, and fuzzy Markov random field. Our experimental results demonstrate that our method outperforms the three other state-of-the-art algorithms, achieving a specificity of 0.73, sensitivity of 0.69, and accuracy of 0.79, significantly better than the alternative methods.
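The following sketch contrasts baseline z-score normalization with a simple relative-intensity rescaling against a presumed-normal reference region; it is only a rough stand-in for the iterative method of the paper, whose details are not reproduced here, and the reference region and data are assumed.

```python
import numpy as np

def zscore_normalize(image):
    """Baseline z-score normalization of one MR channel."""
    return (image - image.mean()) / (image.std() + 1e-9)

def relative_intensity(image, reference_mask):
    """Hedged sketch of a relative-intensity rescaling: express each voxel
    relative to the median intensity of a presumed-normal reference region."""
    ref = np.median(image[reference_mask])
    return image / (ref + 1e-9)

channel = np.random.rand(64, 64) * 100          # placeholder MR channel
normal_mask = np.zeros_like(channel, dtype=bool)
normal_mask[:16, :16] = True                    # assumed normal-tissue region
print(zscore_normalize(channel).std(), relative_intensity(channel, normal_mask).mean())
```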
Solid state high resolution multi-spectral imager CCD test phase
NASA Technical Reports Server (NTRS)
1973-01-01
The program consisted of measuring the performance characteristics of charge-coupled linear imaging devices and of a study defining a multispectral imaging system employing advanced solid-state photodetection techniques.
NASA Astrophysics Data System (ADS)
Salsone, Silvia; Taylor, Andrew; Gomez, Juliana; Pretty, Iain; Ellwood, Roger; Dickinson, Mark; Lombardo, Giuseppe; Zakian, Christian
2012-07-01
Near infrared (NIR) multispectral imaging is a novel noninvasive technique that maps and quantifies dental caries. The technique has the ability to reduce the confounding effect of stain present on teeth. The aim of this study was to develop and validate a quantitative NIR multispectral imaging system for caries detection and assessment against a histological reference standard. The proposed technique is based on spectral imaging at specific wavelengths in the range from 1000 to 1700 nm. A total of 112 extracted teeth (molars and premolars) were used and images of occlusal surfaces at different wavelengths were acquired. Three spectral reflectance images were combined to generate a quantitative lesion map of the tooth. The maximum value of the map at the corresponding histological section was used as the NIR caries score. The NIR caries score significantly correlated with the histological reference standard (Spearman's Coefficient=0.774, p<0.01). Caries detection sensitivities and specificities of 72% and 91% for sound areas, 36% and 79% for lesions on the enamel, and 82% and 69% for lesions in dentin were found. These results suggest that NIR spectral imaging is a novel and promising method for the detection, quantification, and mapping of dental caries.
Salsone, Silvia; Taylor, Andrew; Gomez, Juliana; Pretty, Iain; Ellwood, Roger; Dickinson, Mark; Lombardo, Giuseppe; Zakian, Christian
2012-07-01
Near infrared (NIR) multispectral imaging is a novel noninvasive technique that maps and quantifies dental caries. The technique has the ability to reduce the confounding effect of stain present on teeth. The aim of this study was to develop and validate a quantitative NIR multispectral imaging system for caries detection and assessment against a histological reference standard. The proposed technique is based on spectral imaging at specific wavelengths in the range from 1000 to 1700 nm. A total of 112 extracted teeth (molars and premolars) were used and images of occlusal surfaces at different wavelengths were acquired. Three spectral reflectance images were combined to generate a quantitative lesion map of the tooth. The maximum value of the map at the corresponding histological section was used as the NIR caries score. The NIR caries score significantly correlated with the histological reference standard (Spearman's Coefficient=0.774, p<0.01). Caries detection sensitivities and specificities of 72% and 91% for sound areas, 36% and 79% for lesions on the enamel, and 82% and 69% for lesions in dentin were found. These results suggest that NIR spectral imaging is a novel and promising method for the detection, quantification, and mapping of dental caries.
Development and testing of a homogenous multi-wavelength LED light source
NASA Astrophysics Data System (ADS)
Bolton, Frank J.; Bernat, Amir; Jacques, Steven L.; Levitz, David
2017-03-01
Multispectral imaging of human tissue is a powerful method that allows scattering and absorption parameters of the tissue to be quantified and tissue types or pathology to be identified. This method requires imaging at multiple wavelengths and then fitting the measured data to a model based on light transport theory. Earlier, a mobile phone based multi-spectral imaging system was developed to image the uterine cervix from the colposcopy geometry, outside the patient's body at a distance of 200-300 mm. Such imaging of a distant object has inherent challenges, as bright and homogenous illumination is required. Several solutions addressing this problem were developed, with varying degrees of success. In this paper, several multi-spectral illumination setups were developed and tested for brightness and uniformity. All setups were specifically designed with low cost in mind, utilizing a printed circuit board with surface-mounted LEDs. The three setups include: LEDs illuminating the target directly, LEDs focused by a 3D printed miniature lens array, and LEDs coupled to a mixing lens and focusing optical system. Two experiments were performed to compare the illumination uniformity and intensity of the setups. Test results are presented, and various tradeoffs between the three system configurations are discussed.
Design and fabrication of multispectral optics using expanded glass map
NASA Astrophysics Data System (ADS)
Bayya, Shyam; Gibson, Daniel; Nguyen, Vinh; Sanghera, Jasbinder; Kotov, Mikhail; Drake, Gryphon; Deegan, John; Lindberg, George
2015-06-01
As the desire to have compact multispectral imagers in various DoD platforms is growing, the dearth of multispectral optics is widely felt. With the limited number of material choices for optics, these multispectral imagers are often very bulky and impractical on several weight sensitive platforms. To address this issue, NRL has developed a large set of unique infrared glasses that transmit from 0.9 to > 14 μm in wavelength and expand the glass map for multispectral optics with refractive indices from 2.38 to 3.17. They show a large spread in dispersion (Abbe number) and offer some unique solutions for multispectral optics designs. The new NRL glasses can be easily molded and also fused together to make bonded doublets. A Zemax compatible glass file has been created and is available upon request. In this paper we present some designs, optics fabrication and imaging, all using NRL materials.
Dual-emissive quantum dots for multispectral intraoperative fluorescence imaging.
Chin, Patrick T K; Buckle, Tessa; Aguirre de Miguel, Arantxa; Meskers, Stefan C J; Janssen, René A J; van Leeuwen, Fijs W B
2010-09-01
Fluorescence molecular imaging is rapidly increasing its popularity in image guided surgery applications. To help develop its full surgical potential it remains a challenge to generate dual-emissive imaging agents that allow for combined visible assessment and sensitive camera based imaging. To this end, we now describe multispectral InP/ZnS quantum dots (QDs) that exhibit a bright visible green/yellow exciton emission combined with a long-lived far red defect emission. The intensity of the latter emission was enhanced by X-ray irradiation and allows for: 1) inverted QD density dependent defect emission intensity, showing improved efficacies at lower QD densities, and 2) detection without direct illumination and interference from autofluorescence. Copyright 2010 Elsevier Ltd. All rights reserved.
High-quality infrared imaging with graphene photodetectors at room temperature.
Guo, Nan; Hu, Weida; Jiang, Tao; Gong, Fan; Luo, Wenjin; Qiu, Weicheng; Wang, Peng; Liu, Lu; Wu, Shiwei; Liao, Lei; Chen, Xiaoshuang; Lu, Wei
2016-09-21
Graphene, a two-dimensional material, is expected to enable broad-spectrum and high-speed photodetection because of its gapless band structure, ultrafast carrier dynamics and high mobility. We demonstrate multispectral active infrared imaging using a graphene photodetector based on hybrid response mechanisms at room temperature. High-quality images with optical resolutions of 418 nm, 657 nm and 877 nm and close-to-theoretical-limit Michelson contrasts of 0.997, 0.994, and 0.996, respectively, were acquired in imaging measurements at 565 nm, 1550 nm, and 1815 nm using an unbiased graphene photodetector. Importantly, by carefully analyzing the results of Raman mapping and numerical simulations of the response process, the formation of hybrid photocurrents in graphene detectors is attributed to the synergistic action of photovoltaic and photo-thermoelectric effects. This initial application to infrared imaging will help promote the development of high-performance graphene-based infrared multispectral detectors.
Semiconductor Laser Multi-Spectral Sensing and Imaging
Le, Han Q.; Wang, Yang
2010-01-01
Multi-spectral laser imaging is a technique that can offer a combination of the laser capability of accurate spectral sensing with the desirable features of passive multispectral imaging. The technique can be used for detection, discrimination, and identification of objects by their spectral signature. This article describes and reviews the development and evaluation of semiconductor multi-spectral laser imaging systems. Although the method is certainly not specific to any laser technology, the use of semiconductor lasers is significant with respect to practicality and affordability. More relevantly, semiconductor lasers have their own characteristics; they offer excellent wavelength diversity but usually with modest power. Thus, system design and engineering issues are analyzed for approaches and trade-offs that can make the best use of semiconductor laser capabilities in multispectral imaging. A few systems were developed and the technique was tested and evaluated on a variety of natural and man-made objects. It was shown capable of high spectral resolution imaging which, unlike non-imaging point sensing, allows detecting and discriminating objects of interest even without a priori spectroscopic knowledge of the targets. Examples include material and chemical discrimination. It was also shown capable of dealing with the complexity of interpreting diffuse scattered spectral images and produced results that could otherwise be ambiguous with conventional imaging. Examples with glucose and spectral imaging of drug pills were discussed. Lastly, the technique was shown with conventional laser spectroscopy such as wavelength modulation spectroscopy to image a gas (CO). These results suggest the versatility and power of multi-spectral laser imaging, which can be practical with the use of semiconductor lasers. PMID:22315555
Semiconductor laser multi-spectral sensing and imaging.
Le, Han Q; Wang, Yang
2010-01-01
Multi-spectral laser imaging is a technique that can offer a combination of the laser capability of accurate spectral sensing with the desirable features of passive multispectral imaging. The technique can be used for detection, discrimination, and identification of objects by their spectral signature. This article describes and reviews the development and evaluation of semiconductor multi-spectral laser imaging systems. Although the method is certainly not specific to any laser technology, the use of semiconductor lasers is significant with respect to practicality and affordability. More relevantly, semiconductor lasers have their own characteristics; they offer excellent wavelength diversity but usually with modest power. Thus, system design and engineering issues are analyzed for approaches and trade-offs that can make the best use of semiconductor laser capabilities in multispectral imaging. A few systems were developed and the technique was tested and evaluated on a variety of natural and man-made objects. It was shown capable of high spectral resolution imaging which, unlike non-imaging point sensing, allows detecting and discriminating objects of interest even without a priori spectroscopic knowledge of the targets. Examples include material and chemical discrimination. It was also shown capable of dealing with the complexity of interpreting diffuse scattered spectral images and produced results that could otherwise be ambiguous with conventional imaging. Examples with glucose and spectral imaging of drug pills were discussed. Lastly, the technique was shown with conventional laser spectroscopy such as wavelength modulation spectroscopy to image a gas (CO). These results suggest the versatility and power of multi-spectral laser imaging, which can be practical with the use of semiconductor lasers.
Band co-registration modeling of LAPAN-A3/IPB multispectral imager based on satellite attitude
NASA Astrophysics Data System (ADS)
Hakim, P. R.; Syafrudin, A. H.; Utama, S.; Jayani, A. P. S.
2018-05-01
One significant geometric distortion in images from the LAPAN-A3/IPB multispectral imager is the co-registration error between the color channel detectors. Band co-registration distortion can usually be corrected by one of several approaches: manual correction, image matching algorithms, or sensor modeling and calibration. This paper develops another approach to minimize band co-registration distortion in LAPAN-A3/IPB multispectral images, using supervised modeling of image-matching results with respect to satellite attitude. Modeling results show that the band co-registration error in the across-track axis is strongly influenced by the yaw angle, while the error in the along-track axis is moderately influenced by both the pitch and roll angles. The accuracy of the resulting models is good, with errors of 1-3 pixels for each axis of each band pair. This means that the models can be used to correct distorted images without the slower image matching algorithm, or the laborious effort needed for the manual approach and sensor calibration. Since the calculation executes in a matter of seconds, this approach can be used for real-time quick-look image processing at the ground station or even for on-board image processing on the satellite.
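A simple least-squares model of this kind, mapping attitude angles to a band co-registration offset, might look like the sketch below; the coefficients and synthetic data are illustrative only, not the model fitted in the paper.

```python
import numpy as np

def fit_coregistration_model(attitude, offsets):
    """Least-squares linear model mapping (roll, pitch, yaw) to a band
    co-registration offset in pixels, plus a bias term."""
    A = np.column_stack([attitude, np.ones(len(attitude))])
    coeffs, *_ = np.linalg.lstsq(A, offsets, rcond=None)
    return coeffs

def predict_offset(coeffs, roll, pitch, yaw):
    return coeffs[:3] @ np.array([roll, pitch, yaw]) + coeffs[3]

# Synthetic training data: across-track offsets dominated by yaw
rng = np.random.default_rng(0)
att = rng.normal(0, 1, size=(100, 3))                 # roll, pitch, yaw (deg)
obs = 2.5 * att[:, 2] + 0.1 * rng.normal(size=100)    # offset in pixels
model = fit_coregistration_model(att, obs)
print(predict_offset(model, 0.0, 0.0, 1.0))           # about 2.5 pixels
```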
Multi Texture Analysis of Colorectal Cancer Continuum Using Multispectral Imagery
Chaddad, Ahmad; Desrosiers, Christian; Bouridane, Ahmed; Toews, Matthew; Hassan, Lama; Tanougast, Camel
2016-01-01
Purpose: This paper proposes to characterize the continuum of colorectal cancer (CRC) using multiple texture features extracted from multispectral optical microscopy images. Three types of pathological tissues (PT) are considered: benign hyperplasia, intraepithelial neoplasia and carcinoma. Materials and Methods: In the proposed approach, the region of interest containing PT is first extracted from multispectral images using active contour segmentation. This region is then encoded using texture features based on the Laplacian-of-Gaussian (LoG) filter, discrete wavelets (DW) and gray level co-occurrence matrices (GLCM). To assess the significance of textural differences between PT types, a statistical analysis based on the Kruskal-Wallis test is performed. The usefulness of texture features is then evaluated quantitatively in terms of their ability to predict PT types using various classifier models. Results: Preliminary results show significant texture differences between PT types, for all texture features (p-value < 0.01). Individually, GLCM texture features outperform LoG and DW features in terms of PT type prediction. However, a higher performance can be achieved by combining all texture features, resulting in a mean classification accuracy of 98.92%, sensitivity of 98.12%, and specificity of 99.67%. Conclusions: These results demonstrate the efficiency and effectiveness of combining multiple texture features for characterizing the continuum of CRC and discriminating between pathological tissues in multispectral images. PMID:26901134
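As an illustration of the GLCM component of the texture analysis above, the sketch below extracts a few co-occurrence statistics from a gray-level patch using scikit-image; the chosen properties and parameters are assumptions, and the paper additionally combines LoG and wavelet features.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older scikit-image

def glcm_features(patch, distances=(1,), angles=(0, np.pi / 2)):
    """Extract a few GLCM statistics from an 8-bit gray-level patch,
    averaged over the chosen distances and angles."""
    glcm = graycomatrix(patch, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

patch = (np.random.rand(64, 64) * 255).astype(np.uint8)  # placeholder tissue ROI
print(glcm_features(patch))
```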
NASA Technical Reports Server (NTRS)
Realmuto, V. J.; Sutton, A. J.; Elias, T.
1996-01-01
The synoptic perspective and rapid mode of data acquisition provided by remote sensing are well-suited for the study of volcanic SO2 plumes. In this paper we describe a plume-mapping procedure that is based on image data acquired with NASA's airborne Thermal Infrared Multispectral Scanner (TIMS).
Multispectral laser imaging for advanced food analysis
NASA Astrophysics Data System (ADS)
Senni, L.; Burrascano, P.; Ricci, M.
2016-07-01
A hardware-software apparatus for food inspection capable of realizing multispectral NIR laser imaging at four different wavelengths is herein discussed. The system was designed to operate in a through-transmission configuration to detect the presence of unwanted foreign bodies inside samples, whether packed or unpacked. A modified Lock-In technique was employed to counterbalance the significant signal intensity attenuation due to transmission across the sample and to extract the multispectral information more efficiently. The NIR laser wavelengths used to acquire the multispectral images can be varied to deal with different materials and to focus on specific aspects. In the present work the wavelengths were selected after a preliminary analysis to enhance the image contrast between foreign bodies and food in the sample, thus identifying the location and nature of the defects. Experimental results obtained from several specimens, with and without packaging, are presented and the multispectral image processing as well as the achievable spatial resolution of the system are discussed.
A method for operative quantitative interpretation of multispectral images of biological tissues
NASA Astrophysics Data System (ADS)
Lisenko, S. A.; Kugeiko, M. M.
2013-10-01
A method for operative retrieval of spatial distributions of biophysical parameters of a biological tissue by using a multispectral image of it has been developed. The method is based on multiple regressions between linearly independent components of the diffuse reflection spectrum of the tissue and unknown parameters. Possibilities of the method are illustrated by an example of determining biophysical parameters of the skin (concentrations of melanin, hemoglobin and bilirubin, blood oxygenation, and scattering coefficient of the tissue). Examples of quantitative interpretation of the experimental data are presented.
NASA Astrophysics Data System (ADS)
Poobalasubramanian, Mangalraj; Agrawal, Anupam
2016-10-01
The presented work proposes the fusion of panchromatic and multispectral images in a shearlet domain. The proposed fusion rules rely on regional considerations, which makes the system effective in terms of spatial enhancement. A luminance-hue-saturation-based color conversion system is utilized to avoid spectral distortions. The proposed fusion method is tested on WorldView-2 and Ikonos datasets and compared against other methodologies, and it performs well against the compared methods in terms of both subjective and objective evaluations.
Landsat 8 Multispectral and Pansharpened Imagery Processing on the Study of Civil Engineering Issues
NASA Astrophysics Data System (ADS)
Lazaridou, M. A.; Karagianni, A. Ch.
2016-06-01
Scientific and professional interests of civil engineering mainly include structures, hydraulics, geotechnical engineering, environment, and transportation issues. Topics in this context may concern urban environment issues, urban planning, hydrological modelling, study of hazards and road construction. Land cover information contributes significantly to the study of the above subjects. Land cover information can be acquired effectively by visual image interpretation of satellite imagery, after applying enhancement routines, and by imagery classification. The Landsat Data Continuity Mission (LDCM - Landsat 8) is the latest satellite in the Landsat series, launched in February 2013. Landsat 8 medium-spatial-resolution multispectral imagery is of particular interest for extracting land cover because of its fine spectral resolution, its radiometric quantization of 12 bits, the capability of merging the 15-meter panchromatic band with the 30-meter multispectral imagery, and the policy of free data. In this paper, Landsat 8 multispectral and panchromatic imagery covering the surroundings of a lake in north-western Greece is used. Land cover information is extracted using suitable digital image processing software. The rich spectral content of the multispectral image is combined with the high spatial resolution of the panchromatic image through image fusion (pansharpening), thereby facilitating visual image interpretation for delineating land cover. Further processing concerns supervised image classification; the classification of the pansharpened image preceded the multispectral image classification, and corresponding comparative considerations are presented.
Clinical evaluation of melanomas and common nevi by spectral imaging
Diebele, Ilze; Kuzmina, Ilona; Lihachev, Alexey; Kapostinsh, Janis; Derjabo, Alexander; Valeine, Lauma; Spigulis, Janis
2012-01-01
A clinical trial on multi-spectral imaging of malignant and non-malignant skin pathologies comprising 17 melanomas and 65 pigmented common nevi was performed. Optical density data of skin pathologies were obtained in the spectral range 450–950 nm using the multispectral camera Nuance EX. An image parameter and maps capable of distinguishing melanoma from pigmented nevi were proposed. The diagnostic criterion is based on skin optical density differences at three fixed wavelengths: 540nm, 650nm and 950nm. The sensitivity and specificity of this method were estimated to be 94% and 89%, respectively. The proposed methodology and potential clinical applications are discussed. PMID:22435095
Multispectral imaging approach for simplified non-invasive in-vivo evaluation of gingival erythema
NASA Astrophysics Data System (ADS)
Eckhard, Timo; Valero, Eva M.; Nieves, Juan L.; Gallegos-Rueda, José M.; Mesa, Francisco
2012-03-01
Erythema is a common visual sign of gingivitis. In this work, a new and simple low-cost image capture and analysis method for erythema assessment is proposed. The method is based on digital still images of gingivae and is applied on a pixel-by-pixel basis. Multispectral images are acquired with a conventional digital camera and multiplexed LED illumination panels at 460 nm and 630 nm peak wavelengths. An automatic workflow segments teeth from gingiva regions in the images and creates a map of local blood oxygenation levels, which relates to the presence of erythema. The map is computed from the ratio of the two spectral images. An advantage of the proposed approach is that the whole process is easy for dental health care professionals to manage in a clinical environment.
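The two-wavelength ratio map described above can be sketched in a few lines; which band forms the numerator and any calibration are assumptions here, and the images are placeholders.

```python
import numpy as np

def ratio_map(img_460, img_630, eps=1e-6):
    """Pixel-wise ratio of two spectral images as a crude erythema/oxygenation
    proxy, following the two-wavelength idea in the abstract."""
    return img_630.astype(float) / (img_460.astype(float) + eps)

blue_img = np.random.rand(128, 128)    # 460 nm capture (placeholder)
red_img = np.random.rand(128, 128)     # 630 nm capture (placeholder)
erythema_map = ratio_map(blue_img, red_img)
```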
Theory on data processing and instrumentation. [remote sensing
NASA Technical Reports Server (NTRS)
Billingsley, F. C.
1978-01-01
A selection of NASA Earth observations programs are reviewed, emphasizing hardware capabilities. Sampling theory, noise and detection considerations, and image evaluation are discussed for remote sensor imagery. Vision and perception are considered, leading to numerical image processing. The use of multispectral scanners and of multispectral data processing systems, including digital image processing, is depicted. Multispectral sensing and analysis in application with land use and geographical data systems are also covered.
Skin condition measurement by using multispectral imaging system (Conference Presentation)
NASA Astrophysics Data System (ADS)
Jung, Geunho; Kim, Sungchul; Kim, Jae Gwan
2017-02-01
There are a number of commercially available low-level light therapy (LLLT) devices on the market, and facial whitening or wrinkle reduction is one of the targets of LLLT. Facial improvement could be assessed simply by visual observation, but this provides neither quantitative data nor the ability to recognize subtle changes. Clinical diagnostic instruments such as a mexameter can provide quantitative data, but they cost too much for home users. Therefore, we designed a low-cost multi-spectral imaging device by adding additional LEDs (470 nm, 640 nm, white LED, 905 nm) to a commercial USB microscope that has two LEDs (395 nm, 940 nm) as light sources. Among the various LLLT skin treatments, we focused on obtaining melanin and wrinkle information. For melanin index measurements, multi-spectral images of nevi were acquired and melanin index values from a color image (the conventional method) and from the multi-spectral images were compared. The results showed that multi-spectral analysis of the melanin index can visualize nevi of different depths and concentrations. A cross section of a wrinkle on the skin resembles a wedge, which is a source of high-frequency components when the skin image is Fourier transformed into a spatial frequency map. In that case, the entropy of the spatial frequency map can represent the frequency distribution, which is related to the amount and thickness of wrinkles. Entropy values from multi-spectral images can potentially separate the contribution of thin, shallow wrinkles from that of thick, deep wrinkles. From the results, we found that this low-cost multi-spectral imaging system could be beneficial for home users of LLLT by quantifying treatment efficacy.
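A minimal sketch of the entropy-of-spatial-frequency idea mentioned above is given below; the normalization and the absence of any band weighting or thresholding are assumptions made for illustration.

```python
import numpy as np

def spectral_entropy(image):
    """Shannon entropy of the normalized 2-D Fourier power spectrum,
    used here as a rough wrinkle-texture score."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    p = spectrum / spectrum.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

skin_patch = np.random.rand(128, 128)   # placeholder skin image
print(spectral_entropy(skin_patch))
```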
Multispectral Wavefronts Retrieval in Digital Holographic Three-Dimensional Imaging Spectrometry
NASA Astrophysics Data System (ADS)
Yoshimori, Kyu
2010-04-01
This paper deals with a recently developed passive interferometric technique for retrieving a set of spectral components of wavefronts that are propagating from a spatially incoherent, polychromatic object. The technique is based on measurement of the 5-D spatial coherence function using a suitably designed interferometer. By applying signal processing, including aperture synthesis and spectral decomposition, one may obtain a set of wavefronts for different spectral bands. Since each wavefront is equivalent to the complex Fresnel hologram of the polychromatic object at a particular spectral component, application of the conventional Fresnel transform yields a 3-D image for each spectral component. Thus, this technique of multispectral wavefront retrieval provides a new type of 3-D imaging spectrometry based on fully passive interferometry. Experimental results are also shown to demonstrate the validity of the method.
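The Fresnel-transform reconstruction step can be sketched with a standard single-FFT formulation, as below; this is a textbook form and not the authors' specific processing chain, and the wavelength, pixel pitch, and distance are placeholder values.

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, pixel_pitch, distance):
    """Single-FFT Fresnel reconstruction of a complex hologram.
    Units: metres. Returns the reconstructed intensity."""
    ny, nx = hologram.shape
    k = 2 * np.pi / wavelength
    x = (np.arange(nx) - nx / 2) * pixel_pitch
    y = (np.arange(ny) - ny / 2) * pixel_pitch
    X, Y = np.meshgrid(x, y)
    chirp = np.exp(1j * k / (2 * distance) * (X ** 2 + Y ** 2))
    field = np.fft.fftshift(np.fft.fft2(hologram * chirp))
    return np.abs(field)

holo = np.exp(1j * np.random.rand(256, 256))   # placeholder complex hologram
image = fresnel_reconstruct(holo, 633e-9, 5e-6, 0.05)
```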
Nunez, Jorge; Farmer, Jack; Sellar, R. Glenn; Swayze, Gregg A.; Blaney, Diana L.
2014-01-01
Future astrobiological missions to Mars are likely to emphasize the use of rovers with in situ petrologic capabilities for selecting the best samples at a site for in situ analysis with onboard lab instruments or for caching for potential return to Earth. Such observations are central to an understanding of the potential for past habitable conditions at a site and for identifying samples most likely to harbor fossil biosignatures. The Multispectral Microscopic Imager (MMI) provides multispectral reflectance images of geological samples at the microscale, where each image pixel is composed of a visible/shortwave infrared spectrum ranging from 0.46 to 1.73 μm. This spectral range enables the discrimination of a wide variety of rock-forming minerals, especially Fe-bearing phases, and the detection of hydrated minerals. The MMI advances beyond the capabilities of current microimagers on Mars by extending the spectral range into the infrared and increasing the number of spectral bands. The design employs multispectral light-emitting diodes and an uncooled indium gallium arsenide focal plane array to achieve a very low mass and high reliability. To better understand and demonstrate the capabilities of the MMI for future surface missions to Mars, we analyzed samples from Mars-relevant analog environments with the MMI. Results indicate that the MMI images faithfully resolve the fine-scale microtextural features of samples and provide important information to help constrain mineral composition. The use of spectral endmember mapping reveals the distribution of Fe-bearing minerals (including silicates and oxides) with high fidelity, along with the presence of hydrated minerals. MMI-based petrogenetic interpretations compare favorably with laboratory-based analyses, revealing the value of the MMI for future in situ rover-mediated astrobiological exploration of Mars.
Zhang, Dongyan; Zhou, Xingen; Zhang, Jian; Lan, Yubin; Xu, Chao; Liang, Dong
2018-01-01
Detection and monitoring are the first essential steps for effective management of sheath blight (ShB), a major rice disease worldwide. Unmanned aerial systems have high potential to improve this detection process because they can reduce the time needed for scouting for the disease at a field scale and are affordable and user-friendly in operation. In this study, a commercial quadrotor unmanned aerial vehicle (UAV), equipped with digital and multispectral cameras, was used to capture imagery of research plots with 67 rice cultivars and elite lines. The collected imagery was then processed and analyzed to characterize the development of ShB and quantify different levels of the disease in the field. Through color feature extraction and color space transformation of the images, it was found that the color transformation could qualitatively detect the areas infected with ShB in the field plots; however, it was less effective at distinguishing different levels of the disease. Five vegetation indices were then calculated from the multispectral images, and ground truth of disease severity and GreenSeeker-measured NDVI (Normalized Difference Vegetation Index) were collected. The relationship analyses indicate a strong correlation between ground-measured and image-extracted NDVIs, with an R2 of 0.907 and a root mean square error (RMSE) of 0.0854, and a good correlation between image-extracted NDVIs and disease severity, with an R2 of 0.627 and an RMSE of 0.0852. Image-based NDVIs extracted from the multispectral images could quantify different levels of ShB in the field plots with an accuracy of 63%. These results demonstrate that a consumer-grade UAV integrated with digital and multispectral cameras can be an effective tool for detecting ShB at a field scale.
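As a hedged sketch of the index extraction and correlation analysis described above (band array names and the linear-fit form are assumptions, not the study's code), the fragment below computes NDVI from co-registered red and near-infrared bands and the R2/RMSE of a linear fit against ground measurements.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI from co-registered NIR and red reflectance arrays."""
    return (nir - red) / (nir + red + 1e-9)

def fit_stats(x, y):
    """R^2 and RMSE of a linear fit y ~ a*x + b,
    e.g. image-extracted NDVI (x) against ground-measured NDVI (y)."""
    a, b = np.polyfit(x, y, 1)
    pred = a * x + b
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot, np.sqrt(np.mean((y - pred) ** 2))
```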
Color standardization and optimization in whole slide imaging.
Yagi, Yukako
2011-03-30
Standardization and validation of the color displayed by digital slides is an important aspect of digital pathology implementation. While the most common cause of color variation is variance in histology lab protocols and practices, the displayed color can also be affected by variation in capture parameters (for example, illumination and filters), image processing, and display factors in the digital systems themselves. We have been developing techniques for color validation and optimization along two paths. The first is based on two standard slides that are scanned and displayed by the imaging system in question. In this approach, one slide is embedded with nine filters whose colors were selected especially for H&E-stained slides (resembling a tiny Macbeth color chart); the specific colors of the nine filters were determined in our previous study and modified for whole slide imaging (WSI). The other slide is an H&E-stained mouse embryo. Both slides were scanned, and the displayed images were compared to a standard. The second approach is based on our previous multispectral imaging research. As a first step, the two-slide method (above) was used to identify inaccurate color display and its causes, and to understand the importance of accurate color in digital pathology. We have also improved the multispectral-based algorithm for more consistent results in stain standardization. In the near future, the results of the two-slide and multispectral techniques can be combined and will be widely available. We have been conducting a series of research and development projects to improve image quality and establish image quality standardization. This paper discusses one of the most important aspects of image quality: color.
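One common way to quantify the comparison of a scanned color target against a standard, in the spirit of the two-slide approach above, is a per-patch CIE color difference. The sketch below is a generic illustration using the CIE76 ΔE metric (the patch extraction and reference values are assumptions, and this is not necessarily the metric the authors used).

```python
import numpy as np
from skimage.color import rgb2lab

def patch_delta_e(scanned_rgb, reference_rgb):
    """Mean CIE76 colour difference between a scanned colour-chart patch and
    its reference rendering; both inputs are (h, w, 3) float RGB in [0, 1]."""
    lab_scanned = rgb2lab(scanned_rgb)
    lab_reference = rgb2lab(reference_rgb)
    return np.mean(np.linalg.norm(lab_scanned - lab_reference, axis=-1))
```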
Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Dusar-Shaida mineral district, which has copper and tin deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (41 for Dusar-Shaida) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Dusar-Shaida area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Dusar-Shaida study area, three subareas were designated for detailed field investigations (that is, the Dahana-Misgaran, Kaftar VMS, and Shaida subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
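The sequential band-reflectance adjustment described above amounts to fitting a linear gain and offset between overlapping pixels of the standard and target images and applying that fit to the whole target band. The sketch below is a minimal illustration of that least-squares step (array names are assumptions, and it omits the SPARKLE resolution enhancement and the local-area histogram stretch).

```python
import numpy as np

def match_to_standard(standard_overlap, target_overlap, target_band):
    """Fit a gain/offset so a target image's band matches the standard image
    in their overlap region, then apply the fit to the full target band."""
    x = target_overlap.ravel()
    y = standard_overlap.ravel()
    gain, offset = np.polyfit(x, y, 1)      # linear least-squares fit y ~ gain*x + offset
    return gain * target_band + offset
```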
Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kundalyan mineral district, which has porphyry copper and gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Kundalyan) and the WGS84 datum. The final image mosaics were subdivided into five overlapping tiles or quadrants because of the large size of the target area. The five image tiles (or quadrants) for the Kundalyan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Kundalyan study area, three subareas were designated for detailed field investigations (that is, the Baghawan-Garangh, Charsu-Ghumbad, and Kunag Skarn subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Herat mineral district, which has barium and limestone deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 1,000-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Herat) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Herat area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Herat study area, one subarea was designated for detailed field investigations (that is, the Barium-Limestone subarea); this subarea was extracted from the area's image mosaic and is provided as separate embedded geotiff images.
Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Badakhshan mineral district, which has gold deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA,2007,2008), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Badakhshan) and the WGS84 datum. The final image mosaics were subdivided into six overlapping tiles or quadrants because of the large size of the target area. The six image tiles (or quadrants) for the Badakhshan area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Badakhshan study area, three subareas were designated for detailed field investigations (that is, the Bharak, Fayz-Abad, and Ragh subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Kharnak-Kanjar mineral district, which has mercury deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2007,2008,2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 1,000-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (41 for Kharnak-Kanjar) and the WGS84 datum. The final image mosaics were subdivided into eight overlapping tiles or quadrants because of the large size of the target area. The eight image tiles (or quadrants) for the Kharnak-Kanjar area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Kharnak-Kanjar study area, three subareas were designated for detailed field investigations (that is, the Koh-e-Katif Passaband, Panjshah-Mullayan, and Sahebdad-Khanjar subareas); these subareas were extracted from the area's image mosaic and are provided as separate embedded geotiff images.
Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Haji-Gak mineral district, which has iron ore deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420-500 nanometer, nm), green (520-600 nm), red (610-690 nm), and near-infrared (760-890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520-770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA,2006,2007), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then co-registered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image-coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band's picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area's local zone (42 for Haji-Gak) and the WGS84 datum. The final image mosaics were subdivided into three overlapping tiles or quadrants because of the large size of the target area. The three image tiles (or quadrants) for the Haji-Gak area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Haji-Gak study area, three subareas were designated for detailed field investigations (that is, the Haji-Gak Prospect, Farenjal, and NE Haji-Gak subareas); these subareas were extracted from the area's image mosaic and are provided as separate embedded geotiff images.
Davis, Philip A.; Cagney, Laura E.; Arko, Scott A.; Harbin, Michelle L.
2012-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Aynak mineral district, which has copper deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency ((c)JAXA,2008,2010), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 315-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Aynak) and the WGS84 datum. The final image mosaics were subdivided into four overlapping tiles or quadrants because of the large size of the target area. The four image tiles (or quadrants) for the Aynak area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Aynak study area, five subareas were designated for detailed field investigations (that is, the Bakhel-Charwaz, Kelaghey-Kakhay, Kharuti-Dawrankhel, Logar Valley, and Yagh-Darra/Gul-Darra subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
Davis, Philip A.; Arko, Scott A.; Harbin, Michelle L.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Defense Task Force for Business and Stability Operations, prepared databases for mineral-resource target areas in Afghanistan. The purpose of the databases is to (1) provide useful data to ground-survey crews for use in performing detailed assessments of the areas and (2) provide useful information to private investors who are considering investment in a particular area for development of its natural resources. The set of satellite-image mosaics provided in this Data Series (DS) is one such database. Although airborne digital color-infrared imagery was acquired for parts of Afghanistan in 2006, the image data have radiometric variations that preclude their use in creating a consistent image mosaic for geologic analysis. Consequently, image mosaics were created using ALOS (Advanced Land Observation Satellite; renamed Daichi) satellite images, whose radiometry has been well determined (Saunier, 2007a,b). This part of the DS consists of the locally enhanced ALOS image mosaics for the Ghunday-Achin mineral district, which has magnesite and talc deposits. ALOS was launched on January 24, 2006, and provides multispectral images from the AVNIR (Advanced Visible and Near-Infrared Radiometer) sensor in blue (420–500 nanometer, nm), green (520–600 nm), red (610–690 nm), and near-infrared (760–890 nm) wavelength bands with an 8-bit dynamic range and a 10-meter (m) ground resolution. The satellite also provides a panchromatic band image from the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor (520–770 nm) with the same dynamic range but a 2.5-m ground resolution. The image products in this DS incorporate copyrighted data provided by the Japan Aerospace Exploration Agency (©JAXA,2008,2009), but the image processing has altered the original pixel structure and all image values of the JAXA ALOS data, such that original image values cannot be recreated from this DS. As such, the DS products match JAXA criteria for value added products, which are not copyrighted, according to the ALOS end-user license agreement. The selection criteria for the satellite imagery used in our mosaics were images having (1) the highest solar-elevation angles (near summer solstice) and (2) the least cloud, cloud-shadow, and snow cover. The multispectral and panchromatic data were orthorectified with ALOS satellite ephemeris data, a process which is not as accurate as orthorectification using digital elevation models (DEMs); however, the ALOS processing center did not have a precise DEM. As a result, the multispectral and panchromatic image pairs were generally not well registered to the surface and not coregistered well enough to perform resolution enhancement on the multispectral data. For this particular area, PRISM image orthorectification was performed by the Alaska Satellite Facility, applying its photogrammetric software to PRISM stereo images with vertical control points obtained from the digital elevation database produced by the Shuttle Radar Topography Mission (Farr and others, 2007) and horizontal adjustments based on a controlled Landsat image base (Davis, 2006). The 10-m AVNIR multispectral imagery was then coregistered to the orthorectified PRISM images and individual multispectral and panchromatic images were mosaicked into single images of the entire area of interest. 
The image coregistration was facilitated using an automated control-point algorithm developed by the USGS that allows image coregistration to within one picture element. Before rectification, the multispectral and panchromatic images were converted to radiance values and then to relative-reflectance values using the methods described in Davis (2006). Mosaicking the multispectral or panchromatic images started with the image with the highest sun-elevation angle and the least atmospheric scattering, which was treated as the standard image. The band-reflectance values of all other multispectral or panchromatic images within the area were sequentially adjusted to that of the standard image by determining band-reflectance correspondence between overlapping images using linear least-squares analysis. The resolution of the multispectral image mosaic was then increased to that of the panchromatic image mosaic using the SPARKLE logic, which is described in Davis (2006). Each of the four-band images within the resolution-enhanced image mosaic was individually subjected to a local-area histogram stretch algorithm (described in Davis, 2007), which stretches each band’s picture element based on the digital values of all picture elements within a 500-m radius. The final databases, which are provided in this DS, are three-band, color-composite images of the local-area-enhanced, natural-color data (the blue, green, and red wavelength bands) and color-infrared data (the green, red, and near-infrared wavelength bands). All image data were initially projected and maintained in Universal Transverse Mercator (UTM) map projection using the target area’s local zone (42 for Ghunday-Achin) and the WGS84 datum. The final image mosaics were subdivided into six overlapping tiles or quadrants because of the large size of the target area. The six image tiles (or quadrants) for the Ghunday-Achin area are provided as embedded geotiff images, which can be read and used by most geographic information system (GIS) and image-processing software. The tiff world files (tfw) are provided, even though they are generally not needed for most software to read an embedded geotiff image. Within the Ghunday-Achin study area, two subareas were designated for detailed field investigations (that is, the Achin-Magnesite and Ghunday-Mamahel subareas); these subareas were extracted from the area’s image mosaic and are provided as separate embedded geotiff images.
NASA Astrophysics Data System (ADS)
Bell, J. F.; Fraeman, A. A.; Grossman, L.; Herkenhoff, K. E.; Sullivan, R. J.; MER/Athena Science Team
2010-12-01
The Mars Exploration Rovers Spirit and Opportunity have enabled more than six and a half years of detailed, in situ field study of two specific landing sites and traverse paths within Gusev crater and Meridiani Planum, respectively. Much of the study has relied on high-resolution, multispectral imaging of fine-grained regolith components--the dust, sand, cobbles, clasts, and other components collectively referred to as "soil"--at both sites using the rovers' Panoramic Camera (Pancam) and Microscopic Imager (MI) imaging systems. As of early September 2010, the Pancam systems have acquired more than 1300 and 1000 "13 filter" multispectral imaging sequences of surfaces in Gusev and Meridiani, respectively, with each sequence consisting of co-located images at 11 unique narrowband wavelengths between 430 nm and 1009 nm and having a maximum spatial resolution of about 500 microns per pixel. The MI systems have acquired more than 5900 and 6500 monochromatic images, respectively, at about 31 microns per pixel scale. Pancam multispectral image cubes are calibrated to radiance factor (I/F, where I is the measured radiance and π*F is the incident solar irradiance) using observations of the onboard calibration targets, and then corrected to relative reflectance (assuming Lambertian photometric behavior) for comparison with laboratory rock and mineral measurements. Specifically, Pancam spectra can be used to detect the possible presence of some iron-bearing minerals (e.g., some ferric oxides/oxyhydroxides and pyroxenes) as well as structural water or OH in some hydrated alteration products, providing important inputs on the choice of targets for more quantitative compositional and mineralogic follow-up using the rover's other in situ and remote sensing analysis tools. Pancam 11-band spectra are being analyzed using a variety of standard as well as specifically-tailored analysis methods, including color ratio and band depth parameterizations, spectral similarity and principal components clustering, and simple visual inspection based on correlations with false color unit boundaries and textural variations seen in both Pancam and MI imaging. Approximately 20 distinct spectral classes of fine-grained surface components were identified at each site based on these methods. In this presentation we describe these spectral classes, their geologic and textural context and distribution based on supporting high-res MI and other Pancam imaging, and their potential compositional/mineralogic interpretations based on a variety of rover data sets.
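As an illustration of one of the standard band-depth parameterizations mentioned above, the sketch below computes a continuum-removed band depth from relative reflectance at a band center and two shoulder wavelengths. The variable names and choice of wavelengths are assumptions for illustration, not the Pancam pipeline itself.

```python
import numpy as np

def band_depth(r_left, r_center, r_right, w_left, w_center, w_right):
    """Continuum-removed band depth at w_center, given reflectances at the band
    center and at two continuum shoulder wavelengths (standard parameterization)."""
    b = (w_center - w_left) / (w_right - w_left)
    a = 1.0 - b
    continuum = a * r_left + b * r_right      # linear continuum across the band
    return 1.0 - r_center / continuum
```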
NASA Astrophysics Data System (ADS)
Taruttis, Adrian; Razansky, Daniel; Ntziachristos, Vasilis
2012-02-01
Optoacoustic imaging has enabled the visualization of optical contrast at high resolution in deep tissue. Our multispectral optoacoustic tomography (MSOT) imaging results reveal internal tissue heterogeneity, where the underlying distribution of specific endogenous and exogenous sources of absorption can be resolved in detail. Technical advances in cardiac imaging allow motion-resolved multispectral measurements of the heart, opening the way for studies of cardiovascular disease. We further demonstrate the fast characterization of the pharmacokinetic profiles of light-absorbing agents. Overall, our MSOT findings indicate new possibilities in high-resolution imaging of functional and molecular parameters.
Change detection of bitemporal multispectral images based on FCM and D-S theory
NASA Astrophysics Data System (ADS)
Shi, Aiye; Gao, Guirong; Shen, Shaohong
2016-12-01
In this paper, we propose a change detection method for bitemporal multispectral images based on Dempster-Shafer (D-S) theory and the fuzzy c-means (FCM) algorithm. First, the uncertainty and certainty regions are determined by a thresholding method applied to the magnitude of the difference image (MDI) and the spectral angle information (SAI) of the bitemporal images. Second, the FCM algorithm is applied separately to the MDI and SAI in the uncertainty region. The basic probability assignment (BPA) functions of the changed and unchanged classes are then obtained from the fuzzy membership values produced by FCM. In addition, the optimal value of the FCM fuzzy exponent is determined adaptively from the degree of conflict between the MDI and SAI in the uncertainty region. Finally, D-S theory is applied to obtain a new fuzzy partition matrix for the uncertainty region, from which the change map is derived. Experiments on bitemporal Landsat TM images and bitemporal SPOT images validate that the proposed method is effective.
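The fusion step can be illustrated with Dempster's rule of combination over the frame {changed, unchanged}. The sketch below is a minimal per-pixel illustration in which the two BPAs are assumed to come from the FCM memberships of the MDI and SAI; the dictionary keys and the ignorance mass 'theta' are assumptions for illustration.

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two BPAs over the frame {changed, unchanged}.
    Each BPA is a dict with masses 'c' (changed), 'u' (unchanged), and
    'theta' (ignorance, i.e. the whole frame) summing to 1."""
    k = m1['c'] * m2['u'] + m1['u'] * m2['c']      # conflict mass
    norm = 1.0 - k
    return {
        'c': (m1['c'] * m2['c'] + m1['c'] * m2['theta'] + m1['theta'] * m2['c']) / norm,
        'u': (m1['u'] * m2['u'] + m1['u'] * m2['theta'] + m1['theta'] * m2['u']) / norm,
        'theta': (m1['theta'] * m2['theta']) / norm,
    }

# A pixel would be labeled changed if the combined mass for 'c' exceeds that for 'u'.
```

The conflict mass k computed here is also the kind of quantity the abstract uses to adapt the FCM fuzzy exponent.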
Microbolometer spectrometer opens host of new applications
NASA Astrophysics Data System (ADS)
Leijtens, J.; Smorenburg, C.; Escudero, I.; Boslooper, E.; Visser, H.; Helden, W. v.; Breussin, F.
2017-11-01
Current thermal infrared (7–14 μm) multispectral imager instruments use cryogenically cooled Mercury Cadmium Telluride (MCT or HgCdTe) detectors, which makes the instruments bulky, power-hungry, and expensive. For systems with moderate NETD (noise equivalent temperature difference) requirements that can operate with fast optics (f-number < 1.5), room-temperature microbolometer performance has improved enough to allow multispectral instruments to be designed around this new detector technology. Because microbolometer technology has been driven by the military need for inexpensive, reliable, and small thermal imagers, microbolometer-based detectors are almost exclusively available in 2D format, and their performance is still increasing. Building a spectrometer for the 7 to 12 μm wavelength region using microbolometers had until now been dismissed on the basis of the expected NETD performance. By optimising the throughput of the optical system and using the latest improvements in detector performance, TNO TPD has been able to design a spectrometer that provides co-registered measurements in the 7 to 12 μm wavelength region with acceptable NETD performance. Apart from the usual multispectral imaging, the concept can be used for several other applications, among them simultaneous imaging in both the 3 to 5 and 7 to 12 μm atmospheric windows (forest-fire detection and military reconnaissance) and wideband flame analysis (NOx detection in industrial ovens).
NASA Astrophysics Data System (ADS)
Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke
2008-08-01
A novel compression algorithm for interferential multispectral images based on adaptive classification and curve fitting is proposed. The image is first partitioned adaptively into a major-interference region and a minor-interference region. Different approximating functions are then constructed for the two kinds of regions. For the major-interference region, some typical interferential curves are selected to predict the other curves, and these typical curves are then processed by a curve-fitting method. For the minor-interference region, the data of each interferential curve are approximated independently. Finally, the approximation errors of the two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and greatly reduces the spectral distortion for lossy compression, especially at high bit rates.
NASA Astrophysics Data System (ADS)
Chen, Maomao; Zhou, Yuan; Su, Han; Zhang, Dong; Luo, Jianwen
2017-04-01
Imaging of the pharmacokinetic parameters in dynamic fluorescence molecular tomography (DFMT) can provide three-dimensional metabolic information for biological studies and drug development. However, owing to the ill-posed nature of the FMT inverse problem, the relatively low quality of the parametric images makes it difficult to investigate the different metabolic processes of the fluorescent targets with small distances. An excitation-resolved multispectral DFMT method is proposed; it is based on the fact that the fluorescent targets with different concentrations show different variations in the excitation spectral domain and can be considered independent signal sources. With an independent component analysis method, the spatial locations of different fluorescent targets can be decomposed, and the fluorescent yields of the targets at different time points can be recovered. Therefore, the metabolic process of each component can be independently investigated. Simulations and phantom experiments are carried out to evaluate the performance of the proposed method. The results demonstrated that the proposed excitation-resolved multispectral method can effectively improve the reconstruction accuracy of the parametric images in DFMT.
NASA Astrophysics Data System (ADS)
Chen, Buxin; Zhang, Zheng; Sidky, Emil Y.; Xia, Dan; Pan, Xiaochuan
2017-11-01
Optimization-based algorithms for image reconstruction in multispectral (or photon-counting) computed tomography (MCT) remain a topic of active research. The challenge of optimization-based image reconstruction in MCT stems from the inherently non-linear data model that can lead to a non-convex optimization program for which no mathematically exact solver seems to exist for achieving globally optimal solutions. In this work, based upon a non-linear data model, we design a non-convex optimization program, derive its first-order-optimality conditions, and propose an algorithm to solve the program for image reconstruction in MCT. In addition to consideration of image reconstruction for the standard scan configuration, the emphasis is on investigating the algorithm's potential for enabling non-standard scan configurations with no or minimal hardware modification to existing CT systems, which has potential practical implications for lowered hardware cost, enhanced scanning flexibility, and reduced imaging dose/time in MCT. Numerical studies are carried out for verification of the algorithm and its implementation, and for a preliminary demonstration and characterization of the algorithm in reconstructing images and in enabling non-standard configurations with varying scanning angular range and/or x-ray illumination coverage in MCT.
USDA-ARS?s Scientific Manuscript database
Line-scan-based hyperspectral imaging techniques have often served as a research tool to develop rapid multispectral methods based on only a few spectral bands for rapid online applications. With continuing technological advances and greater accessibility to and availability of optoelectronic imagin...
The Multispectral Imaging Science Working Group. Volume 3: Appendices
NASA Technical Reports Server (NTRS)
Cox, S. C. (Editor)
1982-01-01
The status and technology requirements for using multispectral sensor imagery in geographic, hydrologic, and geologic applications are examined. Critical issues in image and information science are identified.
Fast and accurate image recognition algorithms for fresh produce food safety sensing
NASA Astrophysics Data System (ADS)
Yang, Chun-Chieh; Kim, Moon S.; Chao, Kuanglin; Kang, Sukwon; Lefcourt, Alan M.
2011-06-01
This research developed and evaluated the multispectral algorithms derived from hyperspectral line-scan fluorescence imaging under violet LED excitation for detection of fecal contamination on Golden Delicious apples. The algorithms utilized the fluorescence intensities at four wavebands, 680 nm, 684 nm, 720 nm, and 780 nm, for computation of simple functions for effective detection of contamination spots created on the apple surfaces using four concentrations of aqueous fecal dilutions. The algorithms detected more than 99% of the fecal spots. The effective detection of feces showed that a simple multispectral fluorescence imaging algorithm based on violet LED excitation may be appropriate to detect fecal contamination on fast-speed apple processing lines.
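As a rough illustration of the kind of simple band function described above, the sketch below flags contamination pixels from the four fluorescence band images using a two-over-two band ratio and a threshold; the specific ratio and threshold value are assumptions for illustration, not the functions reported by the authors.

```python
import numpy as np

def detect_fecal_spots(b680, b684, b720, b780, ratio_threshold=1.2):
    """Flag pixels whose 680/684 nm fluorescence dominates the longer wavebands."""
    eps = 1e-6                                          # guard against division by zero
    ratio = (b680 + b684) / (b720 + b780 + eps)         # simple two-over-two band ratio
    return ratio > ratio_threshold                      # boolean contamination mask

# mask = detect_fecal_spots(cube[..., 0], cube[..., 1], cube[..., 2], cube[..., 3])
```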
NASA Technical Reports Server (NTRS)
1982-01-01
Evaluation of the combined utility of narrowband and multispectral imaging in both the infrared and visible for the lithologic identification of geologic materials, and of the combined utility of multispectral imaging in the visible and infrared for lithologic mapping on a global basis, are near-term recommendations for future imaging capabilities. Long-term recommendations include laboratory research into methods of field sampling and theoretical models of microscale mixing. The utility of improved spatial and spectral resolutions and radiometric sensitivity is also suggested for the long term. Geobotanical remote sensing research should be conducted to (1) separate geological and botanical spectral signatures in individual picture elements; (2) study geobotanical correlations that more fully simulate natural conditions; and (3) use test sites designed to test specific geobotanical hypotheses.
Sandison, David R.; Platzbecker, Mark R.; Descour, Michael R.; Armour, David L.; Craig, Marcus J.; Richards-Kortum, Rebecca
1999-01-01
A multispectral imaging probe delivers a range of wavelengths of excitation light to a target and collects a range of expressed light wavelengths. The multispectral imaging probe is adapted for mobile use and use in confined spaces, and is sealed against the effects of hostile environments. The multispectral imaging probe comprises a housing that defines a sealed volume that is substantially sealed from the surrounding environment. A beam splitting device mounts within the sealed volume. Excitation light is directed to the beam splitting device, which directs the excitation light to a target. Expressed light from the target reaches the beam splitting device along a path coaxial with the path traveled by the excitation light from the beam splitting device to the target. The beam splitting device directs expressed light to a collection subsystem for delivery to a detector.
Sandison, D.R.; Platzbecker, M.R.; Descour, M.R.; Armour, D.L.; Craig, M.J.; Richards-Kortum, R.
1999-07-27
A multispectral imaging probe delivers a range of wavelengths of excitation light to a target and collects a range of expressed light wavelengths. The multispectral imaging probe is adapted for mobile use and use in confined spaces, and is sealed against the effects of hostile environments. The multispectral imaging probe comprises a housing that defines a sealed volume that is substantially sealed from the surrounding environment. A beam splitting device mounts within the sealed volume. Excitation light is directed to the beam splitting device, which directs the excitation light to a target. Expressed light from the target reaches the beam splitting device along a path coaxial with the path traveled by the excitation light from the beam splitting device to the target. The beam splitting device directs expressed light to a collection subsystem for delivery to a detector. 8 figs.
2013-01-01
In vivo quantitative assessment of skin lesions is an important step in the evaluation of skin condition. An objective measurement device can serve as a valuable tool for skin analysis. We propose an explorative new multispectral camera specifically developed for dermatology/cosmetology applications. The multispectral imaging system provides images of skin reflectance at different wavebands covering the visible and near-infrared domain. It is coupled with a neural-network-based algorithm for the reconstruction of a reflectance cube of cutaneous data. This cube contains only the skin optical reflectance spectrum in each pixel of the two-dimensional spatial information. The reflectance cube is analyzed by an algorithm based on a Kubelka-Munk model combined with an evolutionary algorithm. The technique allows quantitative measurement of cutaneous tissue and retrieves five skin parameter maps: melanin concentration, epidermis and dermis thickness, haemoglobin concentration, and oxygenated haemoglobin. The results retrieved on healthy participants by the algorithm are in good accordance with data from the literature. The usefulness of the developed technique was proved during two experiments: a clinical study based on vitiligo and melasma skin lesions, and a skin oxygenation experiment (induced ischemia) with healthy participants, in which tissue was recorded both at the normal state and while temporary ischemia was induced. PMID:24159326
Adaptive Image Processing Methods for Improving Contaminant Detection Accuracy on Poultry Carcasses
USDA-ARS?s Scientific Manuscript database
Technical Abstract: A real-time multispectral imaging system has been demonstrated as a science-based tool for fecal and ingesta contaminant detection during poultry processing. In order to implement this imaging system in the commercial poultry processing industry, the false positives must be removed. For doi...
Parallel evolution of image processing tools for multispectral imagery
NASA Astrophysics Data System (ADS)
Harvey, Neal R.; Brumby, Steven P.; Perkins, Simon J.; Porter, Reid B.; Theiler, James P.; Young, Aaron C.; Szymanski, John J.; Bloch, Jeffrey J.
2000-11-01
We describe the implementation and performance of a parallel, hybrid evolutionary-algorithm-based system, which optimizes image processing tools for feature-finding tasks in multi-spectral imagery (MSI) data sets. Our system uses an integrated spatio-spectral approach and is capable of combining suitably-registered data from different sensors. We investigate the speed-up obtained by parallelization of the evolutionary process via multiple processors (a workstation cluster) and develop a model for prediction of run-times for different numbers of processors. We demonstrate our system on Landsat Thematic Mapper MSI, covering the recent Cerro Grande fire at Los Alamos, NM, USA.
NASA Astrophysics Data System (ADS)
Cheng, Jun-Hu; Jin, Huali; Liu, Zhiwei
2018-01-01
The feasibility of developing a multispectral imaging method using important wavelengths selected from hyperspectral images by genetic algorithm (GA), successive projection algorithm (SPA), and regression coefficient (RC) methods for modeling and predicting protein content in peanut kernels was investigated for the first time. A partial least squares regression (PLSR) calibration model was established between the spectral data from the selected optimal wavelengths and the reference measured protein content, which ranged from 23.46% to 28.43%. The RC-PLSR model established using eight key wavelengths (1153, 1567, 1972, 2143, 2288, 2339, 2389, and 2446 nm) showed the best predictive results, with a coefficient of determination of prediction (R2P) of 0.901, a root mean square error of prediction (RMSEP) of 0.108, and a residual predictive deviation (RPD) of 2.32. Based on the best model and image processing algorithms, distribution maps of protein content were generated. The overall results of this study indicate that developing a rapid and online multispectral imaging system using the feature wavelengths and PLSR analysis is feasible for determination of the protein content in peanut kernels.
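A minimal sketch of the PLSR calibration step described above is shown below, assuming the reflectance values at the eight reported wavebands are stacked into X and the measured protein contents into y; the placeholder data, the number of latent components, and the absence of preprocessing are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error, r2_score

bands_nm = [1153, 1567, 1972, 2143, 2288, 2339, 2389, 2446]   # RC-selected wavelengths

# Placeholder calibration data: rows are kernels, columns are the eight bands
X_train = np.random.rand(80, len(bands_nm))
y_train = 23.46 + (28.43 - 23.46) * np.random.rand(80)        # protein content (%)

pls = PLSRegression(n_components=5)
pls.fit(X_train, y_train)

y_fit = pls.predict(X_train).ravel()
print("R2:", r2_score(y_train, y_fit),
      "RMSE:", mean_squared_error(y_train, y_fit) ** 0.5)

# Applying pls.predict to every pixel spectrum yields the protein distribution map.
```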
NASA Astrophysics Data System (ADS)
Teffahi, Hanane; Yao, Hongxun; Belabid, Nasreddine; Chaib, Souleyman
2018-02-01
Satellite images with very high spatial resolution have recently been widely used in image classification, which has become a challenging task in remote sensing. Due to a number of limitations, such as the redundancy of features and the high dimensionality of the data, different classification methods have been proposed for remote sensing image classification, particularly methods using feature extraction techniques. This paper proposes a simple, efficient method exploiting the capability of extended multi-attribute profiles (EMAP) with a sparse autoencoder (SAE) for remote sensing image classification. The proposed method is used to classify various remote sensing datasets, including hyperspectral and multispectral images, by extracting spatial and spectral features based on the combination of EMAP and SAE and linking them to a kernel support vector machine (SVM) for classification. Experiments on the new hyperspectral image "Huston data" and the multispectral image "Washington DC data" show that this new scheme can achieve better feature-learning performance than the primitive features, traditional classifiers, and an ordinary autoencoder, and has strong potential to achieve higher classification accuracy in a short running time.
Multispectral imaging method and apparatus
Sandison, D.R.; Platzbecker, M.R.; Vargo, T.D.; Lockhart, R.R.; Descour, M.R.; Richards-Kortum, R.
1999-07-06
A multispectral imaging method and apparatus are described which are adapted for use in determining material properties, especially properties characteristic of abnormal non-dermal cells. A target is illuminated with a narrow band light beam. The target expresses light in response to the excitation. The expressed light is collected and the target's response at specific response wavelengths to specific excitation wavelengths is measured. From the measured multispectral response the target's properties can be determined. A sealed, remote probe and robust components can be used for cervical imaging. 5 figs.
Multispectral imaging method and apparatus
Sandison, David R.; Platzbecker, Mark R.; Vargo, Timothy D.; Lockhart, Randal R.; Descour, Michael R.; Richards-Kortum, Rebecca
1999-01-01
A multispectral imaging method and apparatus adapted for use in determining material properties, especially properties characteristic of abnormal non-dermal cells. A target is illuminated with a narrow band light beam. The target expresses light in response to the excitation. The expressed light is collected and the target's response at specific response wavelengths to specific excitation wavelengths is measured. From the measured multispectral response the target's properties can be determined. A sealed, remote probe and robust components can be used for cervical imaging.
NASA Astrophysics Data System (ADS)
McMackin, Lenore; Herman, Matthew A.; Weston, Tyler
2016-02-01
We present the design of a multi-spectral imager built using the architecture of the single-pixel camera. The architecture is enabled by the novel sampling theory of compressive sensing implemented optically using the Texas Instruments DLP™ micro-mirror array. The array not only implements spatial modulation necessary for compressive imaging but also provides unique diffractive spectral features that result in a multi-spectral, high-spatial resolution imager design. The new camera design provides multi-spectral imagery in a wavelength range that extends from the visible to the shortwave infrared without reduction in spatial resolution. In addition to the compressive imaging spectrometer design, we present a diffractive model of the architecture that allows us to predict a variety of detailed functional spatial and spectral design features. We present modeling results, architectural design and experimental results that prove the concept.
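The compressive acquisition underlying the single-pixel architecture can be summarized as recovering x from y = Φx with far fewer measurements than pixels. The sketch below, which assumes a scene that is sparse in the pixel basis and uses random binary DMD-style patterns with an L1 (Lasso) solver, illustrates the sampling theory only; it does not model the diffractive spectral demultiplexing of the actual camera.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_pixels, n_measurements = 32 * 32, 300               # compressive: measurements << pixels

x_true = np.zeros(n_pixels)                            # sparse test scene
x_true[rng.choice(n_pixels, 20, replace=False)] = 1.0

phi = rng.integers(0, 2, (n_measurements, n_pixels)).astype(float)  # DMD on/off patterns
y = phi @ x_true                                       # single-pixel detector readings

solver = Lasso(alpha=0.01, max_iter=10000)             # L1-regularized recovery
solver.fit(phi, y)
x_recovered = solver.coef_.reshape(32, 32)
```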
Airado-Rodríguez, Diego; Høy, Martin; Skaret, Josefine; Wold, Jens Petter
2014-05-01
The potential of multispectral imaging of autofluorescence to map sensory flavour properties and fluorophore concentrations in cod caviar paste has been investigated. Cod caviar paste was used as a case product and was stored over time under different headspace gas compositions and light exposure conditions to obtain a relevant span in lipid oxidation and sensory properties. Samples were divided into two sets, a calibration set and a test set, with 16 and 7 samples, respectively. A third set of samples was prepared with induced gradients in lipid oxidation and sensory properties by light exposure of certain parts of the sample surface. Front-face fluorescence emission images were obtained for an excitation wavelength of 382 nm at 11 different channels ranging from 400 to 700 nm. The analysis of the obtained sets of images was divided into two parts. First, in an effort to compress and extract relevant information, multivariate curve resolution was applied to the calibration set, and three spectral components and their relative concentrations in each sample were obtained. The obtained profiles were employed to estimate the concentrations of each component in the images of the heterogeneous samples, giving chemical images of the distribution of fluorescent oxidation products, protoporphyrin IX, and photoprotoporphyrin. Second, regression models for sensory attributes related to lipid oxidation were constructed based on the spectra of homogeneous samples from the calibration set. These models were successfully validated with the test set. The models were then applied for pixel-wise estimation of sensory flavours in the heterogeneous images, giving rise to sensory images. As far as we know, this is the first time that sensory images of odour and flavour have been obtained based on multispectral imaging. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Matsui, Daichi; Ishii, Katsunori; Awazu, Kunio
2015-07-01
Atherosclerosis is a primary cause of critical ischemic diseases like heart infarction or stroke. A method that can provide detailed information about the stability of atherosclerotic plaques is required. We focused on spectroscopic techniques that could evaluate the chemical composition of lipid in plaques. A novel angioscope using multispectral imaging at wavelengths around 1200 nm for quantitative evaluation of atherosclerotic plaques was developed. The angioscope consists of a halogen lamp, an indium gallium arsenide (InGaAs) camera, 3 optical band pass filters transmitting wavelengths of 1150, 1200, and 1300 nm, an image fiber having 0.7 mm outer diameter, and an irradiation fiber which consists of 7 multimode fibers. Atherosclerotic plaque phantoms with 100, 60, 20 vol.% of lipid were prepared and measured by the multispectral angioscope. The acquired datasets were processed by spectral angle mapper (SAM) method. As a result, simulated plaque areas in atherosclerotic plaque phantoms that could not be detected by an angioscopic visible image could be clearly enhanced. In addition, quantitative evaluation of atherosclerotic plaque phantoms based on the lipid volume fractions was performed up to 20 vol.%. These results show the potential of a multispectral angioscope at wavelengths around 1200 nm for quantitative evaluation of the stability of atherosclerotic plaques.
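A minimal sketch of the spectral angle mapper (SAM) step referred to above is given below, assuming a co-registered (rows, cols, bands) cube built from the 1150, 1200, and 1300 nm filter images and a reference lipid spectrum; the reference spectrum and the angle threshold are illustrative assumptions.

```python
import numpy as np

def spectral_angle_map(cube, reference):
    """Angle (radians) between each pixel spectrum and a reference spectrum."""
    pixels = cube.reshape(-1, cube.shape[-1]).astype(float)
    ref = np.asarray(reference, dtype=float)
    cos = pixels @ ref / (np.linalg.norm(pixels, axis=1) * np.linalg.norm(ref) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0)).reshape(cube.shape[:2])

# Pixels whose spectra lie close to the lipid reference are labelled as plaque:
# angles = spectral_angle_map(cube, lipid_reference)
# plaque_mask = angles < 0.1          # threshold in radians (illustrative)
```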
USDA-ARS?s Scientific Manuscript database
In this study, we developed a nondestructive method for discriminating viable cucumber (Cucumis sativus) seeds based on hyperspectral fluorescence imaging. The fluorescence spectra of cucumber seeds in the 420–700 nm range were extracted from hyperspectral fluorescence images obtained using 365 nm u...
Generalization of the Lyot filter and its application to snapshot spectral imaging.
Gorman, Alistair; Fletcher-Holmes, David William; Harvey, Andrew Robert
2010-03-15
A snapshot multi-spectral imaging technique is described which employs multiple cascaded birefringent interferometers to simultaneously spectrally filter and demultiplex multiple spectral images onto a single detector array. Spectral images are recorded directly without the need for inversion and without rejection of light and so the technique offers the potential for high signal-to-noise ratio. An example of an eight-band multi-spectral movie sequence is presented; we believe this is the first such demonstration of a technique able to record multi-spectral movie sequences without the need for computer reconstruction.
Development of a Portable 3CCD Camera System for Multispectral Imaging of Biological Samples
Lee, Hoyoung; Park, Soo Hyun; Noh, Sang Ha; Lim, Jongguk; Kim, Moon S.
2014-01-01
Recent studies have suggested the need for imaging devices capable of multispectral imaging beyond the visible region, to allow for quality and safety evaluations of agricultural commodities. Conventional multispectral imaging devices lack flexibility in spectral waveband selectivity for such applications. In this paper, a recently developed portable 3CCD camera with significant improvements over existing imaging devices is presented. A beam-splitter prism assembly for 3CCD was designed to accommodate three interference filters that can be easily changed for application-specific multispectral waveband selection in the 400 to 1000 nm region. We also designed and integrated electronic components on printed circuit boards with firmware programming, enabling parallel processing, synchronization, and independent control of the three CCD sensors, to ensure the transfer of data without significant delay or data loss due to buffering. The system can stream 30 frames (3-waveband images in each frame) per second. The potential utility of the 3CCD camera system was demonstrated in the laboratory for detecting defect spots on apples. PMID:25350510
Application of LC and LCoS in Multispectral Polarized Scene Projector (MPSP)
NASA Astrophysics Data System (ADS)
Yu, Haiping; Guo, Lei; Wang, Shenggang; Lippert, Jack; Li, Le
2017-02-01
A Multispectral Polarized Scene Projector (MPSP) has been developed in the short-wave infrared (SWIR) regime for the test & evaluation (T&E) of spectro-polarimetric imaging sensors. This MPSP generates multispectral and hyperspectral video images (up to 200 Hz) with 512×512 spatial resolution with active spatial, spectral, and polarization modulation with controlled bandwidth. It projects input SWIR radiant intensity scenes from stored memory with user-selectable wavelength and bandwidth, as well as polarization states (six different states) controllable on a pixel level. The spectral contents are implemented by a tunable filter with variable bandpass built from liquid crystal (LC) material, together with one passive visible and one passive SWIR cholesteric liquid crystal (CLC) notch filter, and one switchable CLC notch filter. The core of the MPSP hardware is the liquid-crystal-on-silicon (LCoS) spatial light modulators (SLMs) for intensity control and polarization modulation.
Single sensor that outputs narrowband multispectral images
Kong, Linghua; Yi, Dingrong; Sprigle, Stephen; Wang, Fengtao; Wang, Chao; Liu, Fuhan; Adibi, Ali; Tummala, Rao
2010-01-01
We report the work of developing a hand-held (or miniaturized), low-cost, stand-alone, real-time-operation, narrow bandwidth multispectral imaging device for the detection of early stage pressure ulcers. PMID:20210418
Cloud-based processing of multi-spectral imaging data
NASA Astrophysics Data System (ADS)
Bernat, Amir S.; Bolton, Frank J.; Weiser, Reuven; Levitz, David
2017-03-01
Multispectral imaging holds great promise as a non-contact tool for the assessment of tissue composition. Performing multispectral imaging on a handheld mobile device would bring this technology, and with it knowledge, to low-resource settings, providing state-of-the-art classification of tissue health. This modality, however, produces considerably larger data sets than white-light imaging and requires preliminary image analysis before it can be used. The data then need to be analyzed and logged without demanding too much of the system resources, computation time, or battery of the end-point device. Cloud environments were designed to address these problems by allowing end-point devices (smartphones) to offload computationally hard tasks. To this end, we present a method in which a handheld device built around a smartphone captures a multispectral dataset in a movie file format (mp4), and we compare it to other image formats in terms of size, noise, and correctness. We also present the cloud configuration used for segmenting the capture into frames so that they can later be used for further analysis.
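A minimal sketch of the device-side step described above, decoding the mp4 capture into a stack of band frames before upload, is given below; the assumption that consecutive video frames correspond to consecutive spectral bands, and the use of OpenCV, are illustrative choices rather than the authors' implementation.

```python
import cv2
import numpy as np

def frames_from_mp4(path):
    """Decode an mp4 multispectral capture into a (n_frames, rows, cols) array."""
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))   # one band per frame
    cap.release()
    return np.stack(frames) if frames else np.empty((0, 0, 0))

# cube = frames_from_mp4("capture.mp4")   # uploaded or processed in the cloud
```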
Low SWaP multispectral sensors using dichroic filter arrays
NASA Astrophysics Data System (ADS)
Dougherty, John; Varghese, Ron
2015-06-01
The benefits of multispectral imaging are well established in a variety of applications including remote sensing, authentication, satellite and aerial surveillance, machine vision, biomedical, and other scientific and industrial uses. However, many of the potential solutions require more compact, robust, and cost-effective cameras to realize these benefits. The next generation of multispectral sensors and cameras needs to deliver improvements in size, weight, power, portability, and spectral band customization to support widespread deployment for a variety of purpose-built aerial, unmanned, and scientific applications. A novel implementation uses micro-patterning of dichroic filters into Bayer and custom mosaics, enabling true real-time multispectral imaging with simultaneous multi-band image acquisition. Consistent with color image processing, individual spectral channels are de-mosaiced with each channel providing an image of the field of view. This approach can be implemented across a variety of wavelength ranges and on a variety of detector types including linear, area, silicon, and InGaAs. This dichroic filter array approach can also reduce payloads and increase range for unmanned systems, with the capability to support both handheld and autonomous systems. Recent examples and results of 4-band RGB + NIR dichroic filter arrays in multispectral cameras are discussed. Benefits and tradeoffs of multispectral sensors using dichroic filter arrays are compared with alternative approaches, including their passivity, spectral range, customization options, and scalable production.
Li, Changqing; Zhao, Hongzhi; Anderson, Bonnie; Jiang, Huabei
2006-03-01
We describe a compact diffuse optical tomography system specifically designed for breast imaging. The system consists of 64 silicon photodiode detectors, 64 excitation points, and 10 diode lasers in the near-infrared region, allowing multispectral, three-dimensional optical imaging of breast tissue. We also detail the system performance and optimization through a calibration procedure. The system is evaluated using tissue-like phantom experiments and an in vivo clinic experiment. Quantitative two-dimensional (2D) and three-dimensional (3D) images of absorption and reduced scattering coefficients are obtained from these experiments. The ten-wavelength spectra of the extracted reduced scattering coefficient enable quantitative morphological images to be reconstructed with this system. From the in vivo clinic experiment, functional images including deoxyhemoglobin, oxyhemoglobin, and water concentration are recovered and tumors are detected with correct size and position compared with the mammography.
Photographic techniques for enhancing ERTS MSS data for geologic information
NASA Technical Reports Server (NTRS)
Yost, E.; Geluso, W.; Anderson, R.
1974-01-01
Satellite multispectral black-and-white photographic negatives of Luna County, New Mexico, obtained by ERTS on 15 August and 2 September 1973, were precisely reprocessed into positive images and analyzed in an additive color viewer. In addition, an isoluminous (uniform brightness) color rendition of the image was constructed. The isoluminous technique emphasizes subtle differences between multispectral bands by greatly enhancing the color of the superimposed composite of all bands and eliminating the effects of brightness caused by sloping terrain. Basaltic lava flows were more accurately displayed in the precision processed multispectral additive color ERTS renditions than on existing state geological maps. Malpais lava flows and small basaltic occurrences not appearing on existing geological maps were identified in ERTS multispectral color images.
Retinex Preprocessing for Improved Multi-Spectral Image Classification
NASA Technical Reports Server (NTRS)
Thompson, B.; Rahman, Z.; Park, S.
2000-01-01
The goal of multi-image classification is to identify and label "similar regions" within a scene. The ability to correctly classify a remotely sensed multi-image of a scene is affected by the ability of the classification process to adequately compensate for the effects of atmospheric variations and sensor anomalies. Better classification may be obtained if the multi-image is preprocessed before classification, so as to reduce the adverse effects of image formation. In this paper, we discuss the overall impact on multi-spectral image classification when the retinex image enhancement algorithm is used to preprocess multi-spectral images. The retinex is a multi-purpose image enhancement algorithm that performs dynamic range compression, reduces the dependence on lighting conditions, and generally enhances apparent spatial resolution. The retinex has been successfully applied to the enhancement of many different types of grayscale and color images. We show in this paper that retinex preprocessing improves the spatial structure of multi-spectral images and thus provides better within-class variations than would otherwise be obtained without the preprocessing. For a series of multi-spectral images obtained with diffuse and direct lighting, we show that without retinex preprocessing the class spectral signatures vary substantially with the lighting conditions. Whereas multi-dimensional clustering without preprocessing produced one-class homogeneous regions, the classification on the preprocessed images produced multi-class non-homogeneous regions. This lack of homogeneity is explained by the interaction between different agronomic treatments applied to the regions: the preprocessed images are closer to ground truth. The principle advantage that the retinex offers is that for different lighting conditions classifications derived from the retinex preprocessed images look remarkably "similar", and thus more consistent, whereas classifications derived from the original images, without preprocessing, are much less similar.
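As a rough illustration of the preprocessing idea discussed above, the sketch below applies a single-scale retinex (log image minus log of a large-scale Gaussian illumination estimate) to one band; the actual work uses the full multi-scale retinex, and the scale value here is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(band, sigma=80.0):
    """Suppress illumination: log(image) minus log of its low-pass estimate."""
    band = band.astype(float) + 1.0                 # avoid log(0)
    illumination = gaussian_filter(band, sigma)
    return np.log(band) - np.log(illumination)

# Applying this to every band before clustering reduces the dependence of the
# class spectral signatures on diffuse versus direct lighting conditions.
```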
Schwartzkopf, Wade C; Bovik, Alan C; Evans, Brian L
2005-12-01
Traditional chromosome imaging has been limited to grayscale images, but recently a 5-fluorophore combinatorial labeling technique (M-FISH) was developed wherein each class of chromosomes binds with a different combination of fluorophores. This results in a multispectral image, where each class of chromosomes has distinct spectral components. In this paper, we develop new methods for automatic chromosome identification by exploiting the multispectral information in M-FISH chromosome images and by jointly performing chromosome segmentation and classification. We (1) develop a maximum-likelihood hypothesis test that uses multispectral information, together with conventional criteria, to select the best segmentation possibility; (2) use this likelihood function to combine chromosome segmentation and classification into a robust chromosome identification system; and (3) show that the proposed likelihood function can also be used as a reliable indicator of errors in segmentation, errors in classification, and chromosome anomalies, which can be indicators of radiation damage, cancer, and a wide variety of inherited diseases. We show that the proposed multispectral joint segmentation-classification method outperforms past grayscale segmentation methods when decomposing touching chromosomes. We also show that it outperforms past M-FISH classification techniques that do not use segmentation information.
Lossless, Multi-Spectral Data Compressor for Improved Compression for Pushbroom-Type Instruments
NASA Technical Reports Server (NTRS)
Klimesh, Matthew
2008-01-01
A low-complexity lossless algorithm for compression of multispectral data has been developed that takes into account the properties of pushbroom-type multispectral imagers in order to make the compression more effective.
Detecting early stage pressure ulcer on dark skin using multispectral imager
NASA Astrophysics Data System (ADS)
Kong, Linghua; Sprigle, Stephen; Yi, Dingrong; Wang, Chao; Wang, Fengtao; Liu, Fuhan; Wang, Jiwu; Zhao, Futing
2009-10-01
This paper introduces a novel idea and innovative technology for building a multispectral-imaging-based device. The benefit is a low-cost, handheld, stand-alone device that acquires multispectral images in real time with a single snapshot. The paper publishes, for the first time, images obtained from such a prototyped, miniaturized multispectral imager.
Model-based recovery of histological parameters from multispectral images of the colon
NASA Astrophysics Data System (ADS)
Hidovic-Rowe, Dzena; Claridge, Ela
2005-04-01
Colon cancer alters the macroarchitecture of the colon tissue. Common changes include angiogenesis and the distortion of the tissue collagen matrix. Such changes affect the colon colouration. This paper presents the principles of a novel optical imaging method capable of extracting parameters depicting histological quantities of the colon. The method is based on a computational, physics-based model of light interaction with tissue. The colon structure is represented by three layers: mucosa, submucosa and muscle layer. Optical properties of the layers are defined by molar concentration and absorption coefficients of haemoglobins; the size and density of collagen fibres; the thickness of the layer and the refractive indexes of collagen and the medium. Using the entire histologically plausible ranges for these parameters, a cross-reference is created computationally between the histological quantities and the associated spectra. The output of the model was compared to experimental data acquired in vivo from 57 histologically confirmed normal and abnormal tissue samples and histological parameters were extracted. The model produced spectra which match well the measured data, with the corresponding spectral parameters being well within histologically plausible ranges. Parameters extracted for the abnormal spectra showed the increase in blood volume fraction and changes in collagen pattern characteristic of the colon cancer. The spectra extracted from multi-spectral images of ex-vivo colon including adenocarcinoma show the characteristic features associated with normal and abnormal colon tissue. These findings suggest that it should be possible to compute histological quantities for the colon from the multi-spectral images.
NASA Astrophysics Data System (ADS)
Manessa, Masita Dwi Mandini; Kanno, Ariyo; Sagawa, Tatsuyuki; Sekine, Masahiko; Nurdin, Nurjannah
2018-01-01
Lyzenga's multispectral bathymetry formula has attracted considerable interest due to its simplicity. However, there has been little discussion of the effect that variation in optical conditions and bottom types, which is common in coral reef environments, has on the formula's results. The present paper evaluates Lyzenga's multispectral bathymetry formula for a variety of optical conditions and bottom types. A noiseless dataset of above-water remote sensing reflectance from WorldView-2 images over Case-1 shallow coral reef water is simulated using a radiative transfer model. The simulation-based assessment shows that Lyzenga's formula performs robustly, with adequate generality and good accuracy, under a range of conditions. As expected, the influence of bottom type on depth estimation accuracy is far greater than the influence of the other optical parameters, i.e., chlorophyll-a concentration and solar zenith angle. Further, based on the simulation dataset, Lyzenga's formula estimates depth when the bottom type is unknown almost as accurately as when the bottom type is known. This study provides a better understanding of Lyzenga's multispectral bathymetry formula under various optical conditions and bottom types.
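For readers unfamiliar with the formula, a minimal sketch of Lyzenga-style linear bathymetry is shown below: per-band, deep-water-corrected, log-transformed reflectances are regressed against known calibration depths by ordinary least squares. The deep-water correction values and the least-squares fit are standard choices assumed here, not details drawn from this paper.

```python
import numpy as np

def lyzenga_features(reflectance, deep_water):
    """X_i = ln(R_i - R_deep,i) per band, clipped to keep the argument positive."""
    return np.log(np.clip(reflectance - deep_water, 1e-6, None))

def fit_depth_model(reflectance, deep_water, depths):
    """Fit depth = h0 + sum_i h_i * X_i over calibration pixels."""
    X = lyzenga_features(reflectance, deep_water)          # (n_pixels, n_bands)
    A = np.column_stack([np.ones(len(X)), X])              # intercept + band terms
    coeffs, *_ = np.linalg.lstsq(A, depths, rcond=None)
    return coeffs                                           # h0, h1, ..., hN

def predict_depth(reflectance, deep_water, coeffs):
    X = lyzenga_features(reflectance, deep_water)
    return coeffs[0] + X @ coeffs[1:]
```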
Multispectral Landsat images of Antarctica
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucchitta, B.K.; Bowell, J.A.; Edwards, K.L.
1988-01-01
The U.S. Geological Survey has a program to map Antarctica by using colored, digitally enhanced Landsat multispectral scanner images to increase existing map coverage and to improve upon previously published Landsat maps. This report is a compilation of images and an image mosaic that cover four complete and two partial 1:250,000-scale quadrangles of the McMurdo Sound region.
USDA-ARS?s Scientific Manuscript database
This research developed a multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet/blue LED excitation for detection of fecal contamination on Golden Delicious apples. Using a hyperspectral line-scan imaging system consisting of an EMCCD camera, spectrograph, an...
An Algorithm for Pedestrian Detection in Multispectral Image Sequences
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Fedorenko, V. V.
2017-05-01
The growing interest in self-driving cars creates a demand for scene understanding and obstacle detection algorithms. One of the most challenging problems in this field is pedestrian detection. The main difficulties arise from the diverse appearance of pedestrians. Poor visibility conditions such as fog and low light also significantly decrease the quality of pedestrian detection. This paper presents a new optical-flow-based algorithm, BipedDetect, that provides robust pedestrian detection on a single-board computer. The algorithm is based on the idea of simplified Kalman filtering suitable for realization on modern single-board computers. To detect pedestrians, a synthetic optical flow of the scene without pedestrians is generated using a slanted-plane model. An estimate of the real optical flow is generated using a multispectral image sequence. The difference between the synthetic optical flow and the real optical flow provides the optical flow induced by pedestrians. The final detection of pedestrians is done by segmentation of the difference of optical flows. To evaluate the BipedDetect algorithm, a multispectral dataset was collected using a mobile robot.
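A minimal sketch of the "difference of optical flows" idea is given below, assuming a precomputed synthetic flow of the static scene and using OpenCV's Farneback estimator for the real flow; the Kalman filtering, slanted-plane modelling, and multispectral handling of BipedDetect are not reproduced.

```python
import cv2
import numpy as np

def pedestrian_candidates(prev_gray, next_gray, synthetic_flow, mag_threshold=1.5):
    """Segment pixels whose flow deviates from the expected static-scene flow."""
    real_flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                             0.5, 3, 15, 3, 5, 1.2, 0)
    residual = real_flow - synthetic_flow            # flow induced by moving objects
    magnitude = np.linalg.norm(residual, axis=2)
    return magnitude > mag_threshold                 # candidate pedestrian mask
```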
NASA Technical Reports Server (NTRS)
Salu, Yehuda; Tilton, James
1993-01-01
The classification of multispectral image data obtained from satellites has become an important tool for generating ground cover maps. This study deals with the application of nonparametric pixel-by-pixel classification methods in the classification of pixels, based on their multispectral data. A new neural network, the Binary Diamond, is introduced, and its performance is compared with a nearest neighbor algorithm and a back-propagation network. The Binary Diamond is a multilayer, feed-forward neural network, which learns from examples in unsupervised, 'one-shot' mode. It recruits its neurons according to the actual training set, as it learns. The comparisons of the algorithms were done by using a realistic database, consisting of approximately 90,000 Landsat 4 Thematic Mapper pixels. The performances of the Binary Diamond and the nearest neighbor were close, with some advantages to the Binary Diamond. The performance of the back-propagation network lagged behind. An efficient nearest neighbor algorithm, the binned nearest neighbor, is described. Ways of improving performance, such as merging categories and analyzing nonboundary pixels, are addressed and evaluated.
Radiometric sensitivity comparisons of multispectral imaging systems
NASA Technical Reports Server (NTRS)
Lu, Nadine C.; Slater, Philip N.
1989-01-01
Multispectral imaging systems provide much of the basic data used by the land and ocean civilian remote-sensing community. Numerous multispectral imaging systems have been and are being developed. A common way to compare the radiometric performance of these systems is to examine their noise-equivalent change in reflectance, NE Delta-rho. The NE Delta-rho of a system is the reflectance difference that is equal to the noise in the recorded signal. A comparison is made of the noise-equivalent change in reflectance of seven different multispectral imaging systems (AVHRR, AVIRIS, ETM, HIRIS, MODIS-N, SPOT-1 HRV, and TM) for a set of three atmospheric conditions (continental aerosol with 23-km visibility, continental aerosol with 5-km visibility, and a Rayleigh atmosphere), five values of ground reflectance (0.01, 0.10, 0.25, 0.50, and 1.00), a nadir viewing angle, and a solar zenith angle of 45 deg.
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Sig-NganLam, Nina; Quattrochi, Dale A.
2004-01-01
The accuracy of traditional multispectral maximum-likelihood image classification is limited by the skewed statistical distributions of reflectances from the complex heterogeneous mixture of land cover types in urban areas. This work examines the utility of local variance, fractal dimension, and Moran's I index of spatial autocorrelation in segmenting multispectral satellite imagery. Tools available in the Image Characterization and Modeling System (ICAMS) were used to analyze Landsat 7 imagery of Atlanta, Georgia. Although segmentation of panchromatic images is possible using indicators of spatial complexity, different land covers often yield similar values of these indices. Better results are obtained when a surface of local fractal dimension or spatial autocorrelation is combined as an additional layer in a supervised maximum-likelihood multispectral classification. The addition of fractal dimension measures is particularly effective at resolving land cover classes within urbanized areas, as compared to per-pixel spectral classification techniques.
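A minimal sketch of one such additional layer, a Moran's I spatial autocorrelation surface computed over a sliding window with 4-neighbour (rook) weights, is given below; the window size and weighting scheme are illustrative assumptions rather than the ICAMS settings used in the study.

```python
import numpy as np

def morans_i(window):
    """Global Moran's I within a window, with 4-neighbour (rook) weights."""
    z = window.astype(float) - window.mean()
    denom = (z ** 2).sum()
    if denom == 0.0:
        return 0.0
    # products over horizontally and vertically adjacent pairs (each pair counted once)
    cross = (z[:, :-1] * z[:, 1:]).sum() + (z[:-1, :] * z[1:, :]).sum()
    n_pairs = z.shape[0] * (z.shape[1] - 1) + (z.shape[0] - 1) * z.shape[1]
    return (window.size / n_pairs) * cross / denom

def morans_i_surface(band, win=9):
    """Texture layer to stack with the spectral bands before ML classification."""
    half = win // 2
    out = np.zeros(band.shape, dtype=float)
    for r in range(half, band.shape[0] - half):
        for c in range(half, band.shape[1] - half):
            out[r, c] = morans_i(band[r - half:r + half + 1, c - half:c + half + 1])
    return out
```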
IMAGE 100: The interactive multispectral image processing system
NASA Technical Reports Server (NTRS)
Schaller, E. S.; Towles, R. W.
1975-01-01
The need for rapid, cost-effective extraction of useful information from vast quantities of multispectral imagery available from aircraft or spacecraft has resulted in the design, implementation and application of a state-of-the-art processing system known as IMAGE 100. Operating on the general principle that all objects or materials possess unique spectral characteristics or signatures, the system uses this signature uniqueness to identify similar features in an image by simultaneously analyzing signatures in multiple frequency bands. Pseudo-colors, or themes, are assigned to features having identical spectral characteristics. These themes are displayed on a color CRT, and may be recorded on tape, film, or other media. The system was designed to incorporate key features such as interactive operation, user-oriented displays and controls, and rapid-response machine processing. Owing to these features, the user can readily control and/or modify the analysis process based on his knowledge of the input imagery. Effective use can be made of conventional photographic interpretation skills and state-of-the-art machine analysis techniques in the extraction of useful information from multispectral imagery. This approach results in highly accurate multitheme classification of imagery in seconds or minutes rather than the hours often involved in processing using other means.
NASA Astrophysics Data System (ADS)
Heleno, S.; Matias, M.; Pina, P.; Sousa, A. J.
2015-09-01
A method for semi-automatic landslide detection, with the ability to separate source and run-out areas, is presented in this paper. It combines object-based image analysis and a Support Vector Machine classifier applied to a GeoEye-1 multispectral image, sensed 3 days after the major damaging landslide event that occurred on Madeira island (20 February 2010), with a pre-event LIDAR Digital Elevation Model. The testing is developed in a 15 km2 study area, where 95% of the landslide scars are detected by this supervised approach. The classifier presents a good performance in the delineation of the overall landslide area. In addition, fair results are achieved in the separation of the source from the run-out landslide areas, although on less-illuminated slopes this discrimination is less effective than on sunnier, east-facing slopes.
USDA-ARS?s Scientific Manuscript database
Multispectral imaging algorithms were developed using visible-near-infrared (VNIR) and near-infrared (NIR) hyperspectral imaging (HSI) techniques to detect worms on fresh-cut lettuce. The optimal wavebands that detect worm on fresh-cut lettuce for each type of HSI were investigated using the one-way...
Development of fluorescence based handheld imaging devices for food safety inspection
NASA Astrophysics Data System (ADS)
Lee, Hoyoung; Kim, Moon S.; Chao, Kuanglin; Lefcourt, Alan M.; Chan, Diane E.
2013-05-01
For sanitation inspection in food processing environment, fluorescence imaging can be a very useful method because many organic materials reveal unique fluorescence emissions when excited by UV or violet radiation. Although some fluorescence-based automated inspection instrumentation has been developed for food products, there remains a need for devices that can assist on-site inspectors performing visual sanitation inspection of the surfaces of food processing/handling equipment. This paper reports the development of an inexpensive handheld imaging device designed to visualize fluorescence emissions and intended to help detect the presence of fecal contaminants, organic residues, and bacterial biofilms at multispectral fluorescence emission bands. The device consists of a miniature camera, multispectral (interference) filters, and high power LED illumination. With WiFi communication, live inspection images from the device can be displayed on smartphone or tablet devices. This imaging device could be a useful tool for assessing the effectiveness of sanitation procedures and for helping processors to minimize food safety risks or determine potential problem areas. This paper presents the design and development including evaluation and optimization of the hardware components of the imaging devices.
Liao, Jun; Wang, Zhe; Zhang, Zibang; Bian, Zichao; Guo, Kaikai; Nambiar, Aparna; Jiang, Yutong; Jiang, Shaowei; Zhong, Jingang; Choma, Michael; Zheng, Guoan
2018-02-01
We report the development of a multichannel microscopy for whole-slide multiplane, multispectral and phase imaging. We use trinocular heads to split the beam path into 6 independent channels and employ a camera array for parallel data acquisition, achieving a maximum data throughput of approximately 1 gigapixel per second. To perform single-frame rapid autofocusing, we place 2 near-infrared light-emitting diodes (LEDs) at the back focal plane of the condenser lens to illuminate the sample from 2 different incident angles. A hot mirror is used to direct the near-infrared light to an autofocusing camera. For multiplane whole-slide imaging (WSI), we acquire 6 different focal planes of a thick specimen simultaneously. For multispectral WSI, we relay the 6 independent image planes to the same focal position and simultaneously acquire information at 6 spectral bands. For whole-slide phase imaging, we acquire images at 3 focal positions simultaneously and use the transport-of-intensity equation to recover the phase information. We also provide an open-source design to further increase the number of channels from 6 to 15. The reported platform provides a simple solution for multiplexed fluorescence imaging and multimodal WSI. Acquiring an instant focal stack without z-scanning may also enable fast 3-dimensional dynamic tracking of various biological samples. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Nonlinear Fusion of Multispectral Citrus Fruit Image Data with Information Contents.
Li, Peilin; Lee, Sang-Heon; Hsu, Hung-Yao; Park, Jae-Sam
2017-01-13
The main issue for vision-based automatic harvesting manipulators is the difficulty of correct fruit identification in images under natural lighting conditions. The solution has mostly been based on a linear combination of the color components in multispectral images; however, the results have not reached a satisfactory level. To overcome this issue, this paper proposes a robust nonlinear fusion method that augments the original color image with the synchronized near-infrared image. The two images are fused with the Daubechies wavelet transform (DWT) in a multiscale decomposition approach. With DWT, the background noise is reduced and the necessary image features are enhanced by fusing the color contrast of the color components and the homogeneity of the near-infrared (NIR) component. The resulting fused color image is classified with a C-means algorithm for reconstruction. The performance of the proposed approach is evaluated with the statistical F-measure in comparison to some existing methods using linear combinations of color components. The results show that fusing information in different spectral components has the advantage of enhancing the image quality, therefore improving the classification accuracy of citrus fruit identification in natural lighting conditions.
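A minimal sketch of a Daubechies-wavelet fusion of one colour component with the co-registered NIR image is shown below; the averaged approximation and max-absolute detail rules are generic fusion choices assumed for illustration, not necessarily the exact rules of the paper, and the subsequent C-means classification step is omitted.

```python
import numpy as np
import pywt

def dwt_fuse(color_band, nir_band, wavelet="db4", level=2):
    """Fuse two equally sized float images in the Daubechies wavelet domain."""
    c_coeffs = pywt.wavedec2(color_band, wavelet, level=level)
    n_coeffs = pywt.wavedec2(nir_band, wavelet, level=level)
    fused = [(c_coeffs[0] + n_coeffs[0]) / 2.0]             # average the approximations
    for c_det, n_det in zip(c_coeffs[1:], n_coeffs[1:]):
        fused.append(tuple(np.where(np.abs(c) >= np.abs(n), c, n)   # keep stronger details
                           for c, n in zip(c_det, n_det)))
    return pywt.waverec2(fused, wavelet)
```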
Nonlinear Fusion of Multispectral Citrus Fruit Image Data with Information Contents
Li, Peilin; Lee, Sang-Heon; Hsu, Hung-Yao; Park, Jae-Sam
2017-01-01
The main issue for vision-based automatic harvesting manipulators is the difficulty of correct fruit identification in images under natural lighting conditions. The solution has mostly been based on a linear combination of the color components in multispectral images; however, the results have not reached a satisfactory level. To overcome this issue, this paper proposes a robust nonlinear fusion method that augments the original color image with the synchronized near-infrared image. The two images are fused with the Daubechies wavelet transform (DWT) in a multiscale decomposition approach. With DWT, the background noise is reduced and the necessary image features are enhanced by fusing the color contrast of the color components and the homogeneity of the near-infrared (NIR) component. The resulting fused color image is classified with a C-means algorithm for reconstruction. The performance of the proposed approach is evaluated with the statistical F-measure in comparison to some existing methods using linear combinations of color components. The results show that fusing information in different spectral components has the advantage of enhancing the image quality, therefore improving the classification accuracy of citrus fruit identification in natural lighting conditions. PMID:28098797
NASA Astrophysics Data System (ADS)
Shi, Aiye; Wang, Chao; Shen, Shaohong; Huang, Fengchen; Ma, Zhenli
2016-10-01
Chi-squared transform (CST), as a statistical method, can describe the difference degree between vectors. The CST-based methods operate directly on information stored in the difference image and are simple and effective methods for detecting changes in remotely sensed images that have been registered and aligned. However, the technique does not take spatial information into consideration, which leads to much noise in the result of change detection. An improved unsupervised change detection method is proposed based on spatial constraint CST (SCCST) in combination with a Markov random field (MRF) model. First, the mean and variance matrix of the difference image of bitemporal images are estimated by an iterative trimming method. In each iteration, spatial information is injected to reduce scattered changed points (also known as "salt and pepper" noise). To determine the key parameter confidence level in the SCCST method, a pseudotraining dataset is constructed to estimate the optimal value. Then, the result of SCCST, as an initial solution of change detection, is further improved by the MRF model. The experiments on simulated and real multitemporal and multispectral images indicate that the proposed method performs well in comprehensive indices compared with other methods.
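For reference, a minimal sketch of the plain chi-squared transform that the method builds on (without the spatial constraint, the iterative trimming, or the MRF refinement) is given below; the array shapes and the confidence level are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2

def cst_change_map(img_t1, img_t2, confidence=0.95):
    """Threshold the Mahalanobis distance of the band-wise difference image."""
    bands = img_t1.shape[-1]
    diff = (img_t2.astype(float) - img_t1.astype(float)).reshape(-1, bands)
    centred = diff - diff.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(diff, rowvar=False))
    y = np.einsum("ij,jk,ik->i", centred, cov_inv, centred)    # chi-squared statistic per pixel
    threshold = chi2.ppf(confidence, df=bands)
    return (y > threshold).reshape(img_t1.shape[:2])
```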
Improving multispectral satellite image compression using onboard subpixel registration
NASA Astrophysics Data System (ADS)
Albinet, Mathieu; Camarero, Roberto; Isnard, Maxime; Poulet, Christophe; Perret, Jokin
2013-09-01
Future CNES earth observation missions will have to deal with an ever-increasing telemetry data rate due to improvements in resolution and the addition of spectral bands. Current CNES image compressors implement a discrete wavelet transform (DWT) followed by bit plane encoding (BPE), but only on a monospectral basis, and do not profit from the multispectral redundancy of the observed scenes. Recent CNES studies have proven a substantial gain in the achievable compression ratio, +20% to +40% on selected scenarios, by implementing a multispectral compression scheme based on a Karhunen-Loeve transform (KLT) followed by the classical DWT+BPE. But such results can be achieved only on perfectly registered bands; a registration error as small as 0.5 pixel ruins all the benefits of multispectral compression. In this work, we first study the possibility of implementing multi-band subpixel onboard registration based on registration grids generated on-the-fly by the satellite attitude control system and simplified resampling and interpolation techniques. Indeed, band registration is usually performed on the ground using sophisticated techniques too computationally intensive for onboard use. This fully quantized algorithm is tuned to meet acceptable registration performance within stringent image quality criteria, with the objective of onboard real-time processing. In a second part, we describe an FPGA implementation developed to evaluate the design complexity and, by extrapolation, the data rate achievable on a space-qualified ASIC. Finally, we present the impact of this approach on the processing chain, not only onboard but also on the ground, and the impacts on the design of the instrument.
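A minimal sketch of the spectral KLT stage (a principal component transform across bands applied ahead of the per-band DWT+BPE) is given below, assuming a registered (rows, cols, bands) cube; the fixed-point, onboard implementation details are omitted.

```python
import numpy as np

def spectral_klt(cube):
    """Decorrelate spectral bands with the Karhunen-Loeve (principal component) transform."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(float)
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)                # eigenvalues in ascending order
    basis = eigvecs[:, eigvals.argsort()[::-1]]           # strongest components first
    components = (pixels - mean) @ basis                  # decorrelated spectral planes
    return components.reshape(rows, cols, bands), basis, mean

# Each decorrelated plane is then compressed independently by the DWT+BPE stage;
# the inverse transform is components @ basis.T + mean.
```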
Liu, Jinxia; Cao, Yue; Wang, Qiu; Pan, Wenjuan; Ma, Fei; Liu, Changhong; Chen, Wei; Yang, Jianbo; Zheng, Lei
2016-01-01
Water-injected beef has aroused public concern as a major food-safety issue in meat products. In the study, the potential of multispectral imaging analysis in the visible and near-infrared (405-970 nm) regions was evaluated for identifying water-injected beef. A multispectral vision system was used to acquire images of beef injected with up to 21% content of water, and partial least squares regression (PLSR) algorithm was employed to establish prediction model, leading to quantitative estimations of actual water increase with a correlation coefficient (r) of 0.923. Subsequently, an optimized model was achieved by integrating spectral data with feature information extracted from ordinary RGB data, yielding better predictions (r = 0.946). Moreover, the prediction equation was transferred to each pixel within the images for visualizing the distribution of actual water increase. These results demonstrate the capability of multispectral imaging technology as a rapid and non-destructive tool for the identification of water-injected beef. Copyright © 2015 Elsevier Ltd. All rights reserved.
Extended output phasor representation of multi-spectral fluorescence lifetime imaging microscopy
Campos-Delgado, Daniel U.; Navarro, O. Gutiérrez; Arce-Santana, E. R.; Jo, Javier A.
2015-01-01
In this paper, we investigate novel low-dimensional and model-free representations for multi-spectral fluorescence lifetime imaging microscopy (m-FLIM) data. We depart from the classical definition of the phasor in the complex plane to propose the extended output phasor (EOP) and extended phasor (EP) for multi-spectral information. The frequency domain properties of the EOP and EP are analytically studied based on a multiexponential model for the impulse response of the imaged tissue. For practical implementations, the EOP is more appealing since there is no need to perform deconvolution of the instrument response from the measured m-FLIM data, as in the case of EP. Our synthetic and experimental evaluations with m-FLIM datasets of human coronary atherosclerotic plaques show that low frequency indexes have to be employed for a distinctive representation of the EOP and EP, and to reduce noise distortion. The tissue classification of the m-FLIM datasets by EOP and EP also improves with low frequency indexes, and does not present significant differences by using either phasor. PMID:26114031
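As background for the extended output phasor, a minimal sketch of the classical per-channel phasor computed directly from a sampled fluorescence decay (i.e., without deconvolving the instrument response) is given below; the aggregation of such terms across spectral channels that defines the EOP is not reproduced here.

```python
import numpy as np

def output_phasor(decay, n=1):
    """Return the (G, S) phasor coordinates of a sampled decay at harmonic n."""
    decay = np.asarray(decay, dtype=float)
    t = np.arange(decay.size)
    omega = 2.0 * np.pi * n / decay.size
    total = decay.sum()
    g = (decay * np.cos(omega * t)).sum() / total
    s = (decay * np.sin(omega * t)).sum() / total
    return g, s
```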
Sousa, Daniel; Small, Christopher
2018-02-14
Planned hyperspectral satellite missions and the decreased revisit time of multispectral imaging offer the potential for data fusion to leverage both the spectral resolution of hyperspectral sensors and the temporal resolution of multispectral constellations. Hyperspectral imagery can also be used to better understand fundamental properties of multispectral data. In this analysis, we use five flight lines from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) archive with coincident Landsat 8 acquisitions over a spectrally diverse region of California to address the following questions: (1) How much of the spectral dimensionality of hyperspectral data is captured in multispectral data?; (2) Is the characteristic pyramidal structure of the multispectral feature space also present in the low order dimensions of the hyperspectral feature space at comparable spatial scales?; (3) How much variability in rock and soil substrate endmembers (EMs) present in hyperspectral data is captured by multispectral sensors? We find nearly identical partitions of variance, low-order feature space topologies, and EM spectra for hyperspectral and multispectral image composites. The resulting feature spaces and EMs are also very similar to those from previous global multispectral analyses, implying that the fundamental structure of the global feature space is present in our relatively small spatial subset of California. Finally, we find that the multispectral dataset well represents the substrate EM variability present in the study area - despite its inability to resolve narrow band absorptions. We observe a tentative but consistent physical relationship between the gradation of substrate reflectance in the feature space and the gradation of sand versus clay content in the soil classification system.
Small, Christopher
2018-01-01
Planned hyperspectral satellite missions and the decreased revisit time of multispectral imaging offer the potential for data fusion to leverage both the spectral resolution of hyperspectral sensors and the temporal resolution of multispectral constellations. Hyperspectral imagery can also be used to better understand fundamental properties of multispectral data. In this analysis, we use five flight lines from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) archive with coincident Landsat 8 acquisitions over a spectrally diverse region of California to address the following questions: (1) How much of the spectral dimensionality of hyperspectral data is captured in multispectral data?; (2) Is the characteristic pyramidal structure of the multispectral feature space also present in the low order dimensions of the hyperspectral feature space at comparable spatial scales?; (3) How much variability in rock and soil substrate endmembers (EMs) present in hyperspectral data is captured by multispectral sensors? We find nearly identical partitions of variance, low-order feature space topologies, and EM spectra for hyperspectral and multispectral image composites. The resulting feature spaces and EMs are also very similar to those from previous global multispectral analyses, implying that the fundamental structure of the global feature space is present in our relatively small spatial subset of California. Finally, we find that the multispectral dataset well represents the substrate EM variability present in the study area – despite its inability to resolve narrow band absorptions. We observe a tentative but consistent physical relationship between the gradation of substrate reflectance in the feature space and the gradation of sand versus clay content in the soil classification system. PMID:29443900
NASA Astrophysics Data System (ADS)
Setiyoko, A.; Dharma, I. G. W. S.; Haryanto, T.
2017-01-01
Multispectral and hyperspectral data acquired from satellite sensors can detect various objects on the earth, ranging from low-scale to high-scale modeling. These data are increasingly being used to produce geospatial information for rapid analysis by running feature extraction or classification processes. Applying the best-suited model for this data mining is still challenging because of issues regarding accuracy and computational cost. The aim of this research is to develop a better understanding of object feature extraction and classification applied to satellite imagery by systematically reviewing related recent research projects. The method used in this research is based on the PRISMA statement. After deriving important points from trusted sources, pixel-based and texture-based feature extraction techniques emerge as the most promising techniques for further analysis in the recent development of feature extraction and classification.
Image correlation and sampling study
NASA Technical Reports Server (NTRS)
Popp, D. J.; Mccormack, D. S.; Sedwick, J. L.
1972-01-01
The development of analytical approaches for solving image correlation and image sampling of multispectral data is discussed. Relevant multispectral image statistics which are applicable to image correlation and sampling are identified. The general image statistics include intensity mean, variance, amplitude histogram, power spectral density function, and autocorrelation function. The translation problem associated with digital image registration and the analytical means for comparing commonly used correlation techniques are considered. General expressions for determining the reconstruction error for specific image sampling strategies are developed.
Monitoring human melanocytic cell responses to piperine using multispectral imaging
NASA Astrophysics Data System (ADS)
Samatham, Ravikant; Phillips, Kevin G.; Sonka, Julia; Yelma, Aznegashe; Reddy, Neha; Vanka, Meenakshi; Thuillier, Philippe; Soumyanath, Amala; Jacques, Steven
2011-03-01
Vitiligo is a depigmentary disease characterized by melanocyte loss attributed most commonly to autoimmune mechanisms. Vitiligo currently has a high incidence (1% worldwide) but a poor set of treatment options. Piperine, a compound found in black pepper, is a potential treatment for the depigmentary skin disease vitiligo, owing to its ability to stimulate mouse epidermal melanocyte proliferation in vitro and in vivo. The present study investigates the use of multispectral imaging and an image-processing technique based on local contrast to quantify the stimulatory effects of piperine on human melanocyte proliferation in reconstructed epidermis. We demonstrate the ability of the imaging method to quantify increased pigmentation in response to piperine treatment. The quantification of melanocyte stimulation by the proposed imaging technique illustrates the potential of this technology to quickly and non-invasively assess the therapeutic responses of vitiligo tissue culture models to treatment.
Multi-spectral imaging of oxygen saturation
NASA Astrophysics Data System (ADS)
Savelieva, Tatiana A.; Stratonnikov, Aleksander A.; Loschenov, Victor B.
2008-06-01
The system for multi-spectral imaging of oxygen saturation is an instrument that can record both spectral and spatial information about a sample. In this project, the spectral imaging technique is used for monitoring the oxygen saturation of hemoglobin in human tissues. The system can be used for monitoring the spatial distribution of oxygen saturation in photodynamic therapy, surgery, or sports medicine. Diffuse reflectance spectroscopy in the visible range is an effective and extensively used technique for the non-invasive study and characterization of various biological tissues. In this article, a short review of modeling techniques currently in use for diffuse reflection from semi-infinite turbid media is presented. A simple and practical model for use with a real-time imaging system is proposed. This model is based on a linear approximation of the dependence of the diffuse reflectance coefficient on the relation between the absorbance and the reduced scattering coefficient. This dependence was obtained with Monte Carlo simulation of photon propagation in turbid media. The spectra of the oxygenated and deoxygenated forms of hemoglobin differ most in the 520-600 nm region and have several characteristic points there. Thus, four band-pass filters were used for multi-spectral imaging. After the reflectance has been measured, the data obtained are used to fit the concentrations of oxygenated and deoxygenated hemoglobin and the hemoglobin oxygen saturation.
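The snippet below is a hedged sketch of the final fitting step: a Beer-Lambert-type least-squares estimate of oxy- and deoxyhemoglobin from reflectance in a few bands. The wavelengths and extinction values are placeholders, not calibrated coefficients or the authors' Monte-Carlo-derived model.

```python
# Hedged sketch: least-squares estimation of hemoglobin oxygen saturation
# from reflectance at a few band-pass filters. Extinction values are
# placeholders, not calibrated coefficients.
import numpy as np

wavelengths = np.array([540, 560, 577, 600])      # nm, four band-pass filters
eps_hbo2 = np.array([14.6, 8.9, 16.2, 0.8])       # placeholder extinction, HbO2
eps_hb   = np.array([14.4, 13.5, 12.6, 3.2])      # placeholder extinction, Hb

def oxygen_saturation(reflectance, reflectance_ref):
    """Fit [HbO2], [Hb] from attenuation A = -log(R / R_ref) per band."""
    attenuation = -np.log(reflectance / reflectance_ref)
    E = np.column_stack([eps_hbo2, eps_hb])       # (bands, 2) design matrix
    conc, *_ = np.linalg.lstsq(E, attenuation, rcond=None)
    c_hbo2, c_hb = np.clip(conc, 0, None)
    return c_hbo2 / (c_hbo2 + c_hb + 1e-12)       # hemoglobin O2 saturation

R_ref = np.array([0.80, 0.78, 0.75, 0.90])        # reference (e.g. white standard)
R     = np.array([0.42, 0.47, 0.40, 0.82])        # measured tissue reflectance
print(f"StO2 ~ {oxygen_saturation(R, R_ref):.2f}")
```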
COMPARISON OF RETINAL PATHOLOGY VISUALIZATION IN MULTISPECTRAL SCANNING LASER IMAGING.
Meshi, Amit; Lin, Tiezhu; Dans, Kunny; Chen, Kevin C; Amador, Manuel; Hasenstab, Kyle; Muftuoglu, Ilkay Kilic; Nudleman, Eric; Chao, Daniel; Bartsch, Dirk-Uwe; Freeman, William R
2018-03-16
To compare retinal pathology visualization in multispectral scanning laser ophthalmoscope imaging between the Spectralis and Optos devices, this retrospective cross-sectional study included 42 eyes from 30 patients with age-related macular degeneration (19 eyes), diabetic retinopathy (10 eyes), and epiretinal membrane (13 eyes). All patients underwent retinal imaging with a color fundus camera (broad-spectrum white light), the Spectralis HRA-2 system (3-color monochromatic lasers), and the Optos P200 system (2-color monochromatic lasers). The Optos image was cropped to a similar size as the Spectralis image. Seven masked graders marked retinal pathologies in each image within a 5 × 5 grid that included the macula. The average area with detected retinal pathology in all eyes was larger in the Spectralis images compared with the Optos images (32.4% larger, P < 0.0001), mainly because of better visualization of epiretinal membrane and retinal hemorrhage. The average detection rate of age-related macular degeneration and diabetic retinopathy pathologies was similar across the three modalities, whereas the epiretinal membrane detection rate was significantly higher in the Spectralis images. Spectralis tricolor multispectral scanning laser ophthalmoscope imaging had a higher rate of pathology detection, primarily because of better epiretinal membrane and retinal hemorrhage visualization, compared with Optos bicolor multispectral scanning laser ophthalmoscope imaging.
Multispectral LiDAR Data for Land Cover Classification of Urban Areas
Morsy, Salem; Shaker, Ahmed; El-Rabbany, Ahmed
2017-01-01
Airborne Light Detection And Ranging (LiDAR) systems usually operate at a monochromatic wavelength measuring the range and the strength of the reflected energy (intensity) from objects. Recently, multispectral LiDAR sensors, which acquire data at different wavelengths, have emerged. This allows for recording of a diversity of spectral reflectance from objects. In this context, we aim to investigate the use of multispectral LiDAR data in land cover classification using two different techniques. The first is image-based classification, where intensity and height images are created from LiDAR points and then a maximum likelihood classifier is applied. The second is point-based classification, where ground filtering and Normalized Difference Vegetation Indices (NDVIs) computation are conducted. A dataset of an urban area located in Oshawa, Ontario, Canada, is classified into four classes: buildings, trees, roads and grass. An overall accuracy of up to 89.9% and 92.7% is achieved from image classification and 3D point classification, respectively. A radiometric correction model is also applied to the intensity data in order to remove the attenuation due to the system distortion and terrain height variation. The classification process is then repeated, and the results demonstrate that there are no significant improvements achieved in the overall accuracy. PMID:28445432
Multispectral LiDAR Data for Land Cover Classification of Urban Areas.
Morsy, Salem; Shaker, Ahmed; El-Rabbany, Ahmed
2017-04-26
Airborne Light Detection And Ranging (LiDAR) systems usually operate at a monochromatic wavelength measuring the range and the strength of the reflected energy (intensity) from objects. Recently, multispectral LiDAR sensors, which acquire data at different wavelengths, have emerged. This allows for recording of a diversity of spectral reflectance from objects. In this context, we aim to investigate the use of multispectral LiDAR data in land cover classification using two different techniques. The first is image-based classification, where intensity and height images are created from LiDAR points and then a maximum likelihood classifier is applied. The second is point-based classification, where ground filtering and Normalized Difference Vegetation Indices (NDVIs) computation are conducted. A dataset of an urban area located in Oshawa, Ontario, Canada, is classified into four classes: buildings, trees, roads and grass. An overall accuracy of up to 89.9% and 92.7% is achieved from image classification and 3D point classification, respectively. A radiometric correction model is also applied to the intensity data in order to remove the attenuation due to the system distortion and terrain height variation. The classification process is then repeated, and the results demonstrate that there are no significant improvements achieved in the overall accuracy.
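A minimal sketch of the point-based NDVI step described in the two records above is shown here; the channel intensities and the vegetation threshold are illustrative values, not the study's radiometrically corrected data.

```python
# Minimal sketch: per-point NDVI-like index from two LiDAR intensity channels,
# followed by a rough threshold into cover classes. Values are illustrative.
import numpy as np

def ndvi(nir_intensity, red_intensity):
    nir = nir_intensity.astype(np.float64)
    red = red_intensity.astype(np.float64)
    return (nir - red) / (nir + red + 1e-9)

nir = np.array([120.0, 30.0, 95.0, 10.0])   # NIR-channel intensities per point
red = np.array([40.0, 28.0, 20.0, 9.0])     # red-channel intensities per point
idx = ndvi(nir, red)
labels = np.where(idx > 0.3, "vegetation", "non-vegetation")
print(list(zip(idx.round(2), labels)))
```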
Multispectral embedding-based deep neural network for three-dimensional human pose recovery
NASA Astrophysics Data System (ADS)
Yu, Jialin; Sun, Jifeng
2018-01-01
Monocular image-based three-dimensional (3-D) human pose recovery aims to retrieve 3-D poses using the corresponding two-dimensional image features. Therefore, the pose recovery performance depends highly on the image representations. We propose a multispectral embedding-based deep neural network (MSEDNN) to automatically obtain the most discriminative features from multiple deep convolutional neural networks and then embed their penultimate fully connected layers into a low-dimensional manifold. This compact manifold can explore not only the optimum output from multiple deep networks but also their complementary properties. Furthermore, the distribution of each hierarchical discriminative manifold is sufficiently smooth that the training process of our MSEDNN can be implemented effectively using only a few labeled data. Our proposed network contains a body joint detector and a human pose regressor that are jointly trained. Extensive experiments conducted on four databases show that our proposed MSEDNN achieves the best recovery performance compared with state-of-the-art methods.
A comparison of autonomous techniques for multispectral image analysis and classification
NASA Astrophysics Data System (ADS)
Valdiviezo-N., Juan C.; Urcid, Gonzalo; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso
2012-10-01
Multispectral imaging has given rise to important applications related to the classification and identification of objects in a scene. Because multispectral instruments can be used to estimate the reflectance of materials in the scene, these techniques constitute fundamental tools for materials analysis and quality control. In recent years, a variety of algorithms have been developed to work with multispectral data, whose main purpose has been to perform the correct classification of the objects in the scene. The present study provides a brief review of some classical techniques, as well as a novel one, that have been used for such purposes. The use of principal component analysis and K-means clustering as important classification algorithms is discussed here. Moreover, a recent method based on the min-W and max-M lattice auto-associative memories, originally proposed for endmember determination in hyperspectral imagery, is introduced as a classification method. Besides a discussion of their mathematical foundations, we emphasize their main characteristics and the results achieved for two exemplar images composed of objects similar in appearance but spectrally different. The classification results show that the first components computed from principal component analysis can be used to highlight areas with different spectral characteristics. In addition, the use of lattice auto-associative memories provides good results for materials classification even in cases where some similarities appear in the objects' spectral responses.
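The following scikit-learn sketch illustrates the two classical techniques discussed above, projecting a multispectral cube onto its principal components and clustering pixels with K-means; the cube, band count, and cluster count are placeholders.

```python
# Hedged sketch: PCA projection of a multispectral cube plus K-means
# clustering of the pixel scores into a class map.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

bands, rows, cols = 6, 64, 64
cube = np.random.rand(bands, rows, cols)          # stand-in multispectral image

pixels = cube.reshape(bands, -1).T                # (n_pixels, bands)
scores = PCA(n_components=3).fit_transform(pixels)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
class_map = labels.reshape(rows, cols)            # per-pixel class image
print(np.bincount(labels))
```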
Lattice algebra approach to multispectral analysis of ancient documents.
Valdiviezo-N, Juan C; Urcid, Gonzalo
2013-02-01
This paper introduces a lattice algebra procedure that can be used for the multispectral analysis of historical documents and artworks. Assuming the presence of linearly mixed spectral pixels captured in a multispectral scene, the proposed method computes the scaled min- and max-lattice associative memories to determine the purest pixels that best represent the spectra of single pigments. The estimation of fractional proportions of pure spectra at each image pixel is used to build pigment abundance maps that can be used for subsequent restoration of damaged parts. Application examples include multispectral images acquired from the Archimedes Palimpsest and a Mexican pre-Hispanic codex.
Spatial clustering of pixels of a multispectral image
Conger, James Lynn
2014-08-19
A method and system for clustering the pixels of a multispectral image is provided. A clustering system computes a maximum spectral similarity score for each pixel that indicates the similarity between that pixel and its most similar neighboring pixel. To determine the maximum similarity score for a pixel, the clustering system generates a similarity score between that pixel and each of its neighboring pixels and then selects the similarity score that represents the highest similarity as the maximum similarity score. The clustering system may apply a filtering criterion based on the maximum similarity score so that pixels with similarity scores below a minimum threshold are not clustered. The clustering system changes the current pixel values of the pixels in a cluster based on an averaging of the original pixel values of the pixels in the cluster.
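The sketch below illustrates only the scoring step described above, the maximum spectral similarity of each pixel to its eight neighbours (here using cosine similarity), after which a threshold would act as the filtering criterion; it is an illustration, not the patented implementation.

```python
# Minimal sketch: maximum neighbour spectral similarity per pixel.
import numpy as np

def max_neighbour_similarity(cube):
    """cube: (rows, cols, bands); returns (rows, cols) max similarity map."""
    norm = cube / (np.linalg.norm(cube, axis=2, keepdims=True) + 1e-12)
    best = np.full(cube.shape[:2], -1.0)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            # shift to compare each pixel with one neighbour (edges wrap here
            # for brevity only)
            shifted = np.roll(np.roll(norm, dr, axis=0), dc, axis=1)
            sim = np.sum(norm * shifted, axis=2)   # cosine similarity
            best = np.maximum(best, sim)
    return best

cube = np.random.rand(32, 32, 5)
score = max_neighbour_similarity(cube)
mask = score >= 0.95                               # filtering criterion
print(mask.mean())
```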
The NEAR Multispectral Imager.
NASA Astrophysics Data System (ADS)
Hawkins, S. E., III
1998-06-01
The Multispectral Imager, one of the primary instruments on the Near Earth Asteroid Rendezvous (NEAR) spacecraft, uses a five-element refractive optics telescope, an eight-position filter wheel, and a charge-coupled device detector to acquire images over its sensitive wavelength range of approximately 400-1100 nm. The primary science objectives of the Multispectral Imager are to determine the morphology and composition of the surface of asteroid 433 Eros. The camera will have a critical role in navigating to the asteroid. Seven narrowband spectral filters have been selected to provide multicolor imaging for comparative studies with previous observations of asteroids in the same class as Eros. The eighth filter is broadband and will be used for optical navigation. An overview of the instrument is presented, and design parameters and tradeoffs are discussed.
Remote sensing fusion based on guided image filtering
NASA Astrophysics Data System (ADS)
Zhao, Wenfei; Dai, Qinling; Wang, Leiguang
2015-12-01
In this paper, we propose a novel remote sensing fusion approach based on guided image filtering. The fused images preserve the spectral features of the original multispectral (MS) images well while enhancing the spatial detail information. Four quality assessment indexes are also introduced to evaluate the fusion effect in comparison with other fusion methods. Experiments were carried out on Gaofen-2, QuickBird, WorldView-2 and Landsat-8 images, and the results show an excellent performance of the proposed method.
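As a hedged sketch, the code below shows one common way to use guided filtering for fusion, injecting the panchromatic detail layer obtained with a guided filter into upsampled multispectral bands; it is not necessarily the exact scheme of the paper, and the inputs are placeholders assumed to be co-registered.

```python
# Hedged sketch: pan-sharpening by guided-filter detail injection.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    """Edge-preserving smoothing of src guided by guide (He et al. 2010)."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    corr_gs = uniform_filter(guide * src, size)
    corr_gg = uniform_filter(guide * guide, size)
    var_g = corr_gg - mean_g * mean_g
    cov_gs = corr_gs - mean_g * mean_s
    a = cov_gs / (var_g + eps)
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def fuse(pan, ms_up):
    """pan: (H, W); ms_up: (bands, H, W) upsampled multispectral bands."""
    detail = pan - guided_filter(pan, pan)           # spatial detail of PAN
    return np.stack([band + detail for band in ms_up])

pan = np.random.rand(128, 128)
ms_up = np.random.rand(4, 128, 128)
print(fuse(pan, ms_up).shape)
```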
NASA Astrophysics Data System (ADS)
Zabarylo, U.; Minet, O.
2010-01-01
Investigations into the application of optical procedures for the diagnosis of rheumatism using scattered-light images are only at the beginning, both in terms of new image-processing methods and of subsequent clinical application. For semi-automatic diagnosis using laser light, the multispectral scattered-light images are registered and overlaid into pseudo-coloured images, which depict diagnostically essential content by visually highlighting pathological changes.
Multispectral image dissector camera flight test
NASA Technical Reports Server (NTRS)
Johnson, B. L.
1973-01-01
It was demonstrated that the multispectral image dissector camera is able to provide composite pictures of the earth surface from high altitude overflights. An electronic deflection feature was used to inject the gyro error signal into the camera for correction of aircraft motion.
NASA Astrophysics Data System (ADS)
Romano, Renan A.; Pratavieira, Sebastião.; da Silva, Ana P.; Kurachi, Cristina; Guimarães, Francisco E. G.
2017-07-01
This study demonstrates that multispectral confocal microscopy images analyzed by artificial neural networks provide a powerful tool for real-time monitoring of photosensitizer uptake, as well as of the photochemical transformations that occur.
Diagnosing hypoxia in murine models of rheumatoid arthritis from reflectance multispectral images
NASA Astrophysics Data System (ADS)
Glinton, Sophie; Naylor, Amy J.; Claridge, Ela
2017-07-01
Spectra computed from multispectral images of murine models of rheumatoid arthritis show a characteristic decrease in reflectance within the 600-800 nm region, which is indicative of a reduction in blood oxygenation and is consistent with hypoxia.
The application of UV multispectral technology to extracting trace evidence
NASA Astrophysics Data System (ADS)
Guo, Jingjing; Xu, Xiaojing; Li, Zhihui; Xu, Lei; Xie, Lanchi
2015-11-01
Multispectral imaging is becoming more and more important in the field of material evidence examination, especially ultraviolet spectral imaging. Fingerprint development, questioned document detection, and trace evidence examination can all make use of it. This paper introduces UV multispectral equipment developed by BITU & IFSC that can extract trace evidence such as fingerprints. The results showed that this technology can develop latent sweat-sebum mixed fingerprints on photos and ID cards, as well as blood fingerprints on steel. We used the UV spectral data analysis system to make the UV spectral images clear enough to identify and analyse.
Multispectral photography for earth resources
NASA Technical Reports Server (NTRS)
Wenderoth, S.; Yost, E.; Kalia, R.; Anderson, R.
1972-01-01
A guide for producing accurate multispectral results for earth resource applications is presented along with theoretical and analytical concepts of color and multispectral photography. Topics discussed include: capabilities and limitations of color and color infrared films; image color measurements; methods of relating ground phenomena to film density and color measurement; sensitometry; considerations in the selection of multispectral cameras and components; and mission planning.
Dabo-Niang, S; Zoueu, J T
2012-09-01
In this communication, we demonstrate how kriging, combined with multispectral and multimodal microscopy, can enhance the resolution of images of malaria-infected cells and provide more details on their composition for analysis and diagnosis. The results of this interpolation, applied to the two principal components of the multispectral and multimodal images, illustrate that examination of the content of Plasmodium falciparum-infected human erythrocytes is improved. © 2012 The Authors Journal of Microscopy © 2012 Royal Microscopical Society.
NASA Astrophysics Data System (ADS)
Xia, Wenfeng; West, Simeon J.; Nikitichev, Daniil I.; Ourselin, Sebastien; Beard, Paul C.; Desjardins, Adrien E.
2016-03-01
Accurate identification of tissue structures such as nerves and blood vessels is critically important for interventional procedures such as nerve blocks. Ultrasound imaging is widely used as a guidance modality to visualize anatomical structures in real-time. However, identification of nerves and small blood vessels can be very challenging, and accidental intra-neural or intra-vascular injections can result in significant complications. Multi-spectral photoacoustic imaging can provide high sensitivity and specificity for discriminating hemoglobin- and lipid-rich tissues. However, conventional surface-illumination-based photoacoustic systems suffer from limited sensitivity at large depths. In this study, for the first time, an interventional multispectral photoacoustic imaging (IMPA) system was used to image nerves in a swine model in vivo. Pulsed excitation light with wavelengths in the ranges of 750 - 900 nm and 1150 - 1300 nm was delivered inside the body through an optical fiber positioned within the cannula of an injection needle. Ultrasound waves were received at the tissue surface using a clinical linear array imaging probe. Co-registered B-mode ultrasound images were acquired using the same imaging probe. Nerve identification was performed using a combination of B-mode ultrasound imaging and electrical stimulation. Using a linear model, spectral-unmixing of the photoacoustic data was performed to provide image contrast for oxygenated and de-oxygenated hemoglobin, water and lipids. Good correspondence between a known nerve location and a lipid-rich region in the photoacoustic images was observed. The results indicate that IMPA is a promising modality for guiding nerve blocks and other interventional procedures. Challenges involved with clinical translation are discussed.
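A hedged sketch of the linear spectral unmixing step is given below, fitting per-pixel multi-wavelength photoacoustic amplitudes to chromophore spectra with non-negative least squares; the spectra are placeholder values, not calibrated absorption coefficients.

```python
# Hedged sketch: per-pixel linear spectral unmixing via non-negative least
# squares. Chromophore spectra below are placeholders.
import numpy as np
from scipy.optimize import nnls

wavelengths = np.array([750, 800, 850, 900, 1210])      # nm
# columns: HbO2, Hb, lipid (placeholder relative absorption spectra)
E = np.array([
    [0.6, 1.2, 0.1],
    [0.8, 0.8, 0.1],
    [1.0, 0.6, 0.2],
    [1.1, 0.5, 0.3],
    [0.2, 0.2, 1.5],
])

def unmix(pixel_spectrum):
    """Return non-negative abundances of [HbO2, Hb, lipid] for one pixel."""
    abundances, _residual = nnls(E, pixel_spectrum)
    return abundances

measured = E @ np.array([0.3, 0.5, 0.9]) + 0.01 * np.random.rand(5)
print(unmix(measured).round(2))
```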
NASA Technical Reports Server (NTRS)
Harston, Craig; Schumacher, Chris
1992-01-01
Automated schemes are needed to classify multispectral remotely sensed data. Human intelligence is often required to correctly interpret images from satellites and aircraft. Humans succeed because they use various types of cues about a scene to accurately define the contents of the image. Consequently, it follows that computer techniques that integrate and use different types of information would perform better than single-source approaches. This research illustrated that multispectral signatures and topographical information could be used in concert. Significantly, this dual-source tactic classified a remotely sensed image better than the multispectral classification alone. These classifications were accomplished by fusing spectral signatures with topographical information using neural network technology. A neural network was trained to classify Landsat multispectral signatures. A file of georeferenced ground truth classifications was used as the training criterion. The network was trained to classify urban, agriculture, range, and forest with an accuracy of 65.7 percent. Another neural network was programmed and trained to fuse these multispectral signature results with a file of georeferenced altitude data. This topographical file contained 10 levels of elevation. When this nonspectral elevation information was fused with the spectral signatures, the classifications improved to 73.7 and 75.7 percent.
NASA Astrophysics Data System (ADS)
Maier, Oskar; Wilms, Matthias; von der Gablentz, Janina; Krämer, Ulrike; Handels, Heinz
2014-03-01
Automatic segmentation of ischemic stroke lesions in magnetic resonance (MR) images is important in clinical practice and for neuroscientific trials. The key problem is to detect largely inhomogeneous regions of varying sizes, shapes and locations. We present a stroke lesion segmentation method based on local features extracted from multi-spectral MR data that are selected to model a human observer's discrimination criteria. A support vector machine classifier is trained on expert-segmented examples and then used to classify formerly unseen images. Leave-one-out cross validation on eight datasets with lesions of varying appearances is performed, showing our method to compare favourably with other published approaches in terms of accuracy and robustness. Furthermore, we compare a number of feature selectors and closely examine each feature's and MR sequence's contribution.
NASA Astrophysics Data System (ADS)
Corucci, Linda; Masini, Andrea; Cococcioni, Marco
2011-01-01
This paper addresses bathymetry estimation from high-resolution multispectral satellite images by proposing an accurate supervised method based on a neuro-fuzzy approach. The method is applied to two QuickBird images of the same area, acquired in different years and meteorological conditions, and is validated using ground-truth data. Performance is studied in different realistic situations of in situ data availability. The method achieves a mean standard deviation of 36.7 cm for estimated water depths in the range [-18, -1] m. When only data collected along a closed path are used as a training set, a mean standard deviation of 45 cm is obtained. The effect of both meteorological conditions and training-set size reduction on the overall performance is also investigated.
NASA Astrophysics Data System (ADS)
Meyer, Hanna; Lehnert, Lukas W.; Wang, Yun; Reudenbach, Christoph; Nauss, Thomas; Bendix, Jörg
2016-04-01
Pastoralism is the dominant land use on the Qinghai-Tibet Plateau (QTP), providing the major economic resource for the local population. However, the pastures are widely believed to be affected by ongoing degradation, whose extent is still disputed. This study uses hyperspectral in situ measurements and multispectral satellite images to assess vegetation cover and above-ground biomass (AGB) as proxies of pasture degradation on a regional scale. Using Random Forests in conjunction with recursive feature selection as the modeling tool, it is tested whether the full hyperspectral information is needed or whether multispectral information is sufficient to accurately estimate vegetation cover and AGB. To regionalize pasture degradation proxies, the transferability of the locally derived models to high-resolution multispectral satellite data is assessed. For this purpose, 1183 hyperspectral measurements and vegetation records were sampled at 18 locations on the QTP. AGB was determined on 25 plots of 0.5 x 0.5 m. Proxies for pasture degradation were derived from the spectra by calculating narrow-band indices (NBI). Using the NBIs as predictor variables, vegetation cover and AGB were modeled. Models were calculated using the hyperspectral data as well as the same data resampled to WorldView-2, QuickBird and RapidEye channels. The hyperspectral results were compared to the multispectral results. Finally, the models were applied to satellite data to map vegetation cover and AGB on a regional scale. Vegetation cover was accurately predicted by Random Forest when hyperspectral measurements were used. In contrast, errors in AGB estimation were considerably higher. Only small differences in accuracy were observed between the models based on hyperspectral compared to multispectral data. The application of the models to satellite images generally resulted in an increase of the estimation error. Though this reflects the challenge of applying in situ measurements to satellite data, the results still show a high potential to map pasture degradation proxies on the QTP, even at larger scales.
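The scikit-learn sketch below illustrates the modelling strategy described above, Random Forest regression combined with recursive feature selection over narrow-band indices; the data, index count, and hyper-parameters are placeholders rather than the study's setup.

```python
# Hedged sketch: Random Forest regression of vegetation cover from
# narrow-band indices, with recursive feature elimination.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFECV
from sklearn.model_selection import KFold

n_plots, n_indices = 200, 30
X = np.random.rand(n_plots, n_indices)            # narrow-band indices (NBI)
y = np.random.uniform(0, 100, n_plots)            # vegetation cover (%)

rf = RandomForestRegressor(n_estimators=100, random_state=0)
selector = RFECV(rf, step=1, cv=KFold(5, shuffle=True, random_state=0),
                 scoring="neg_root_mean_squared_error")
selector.fit(X, y)
print("number of selected indices:", selector.n_features_)
print("selected index columns:", np.where(selector.support_)[0])
```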
Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri
2014-01-01
In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
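The sketch below is not FARSIGHT itself but illustrates the kind of Python driver the passage describes: splitting a large channel into blocks, processing them with multi-threaded execution, and logging each step; the block routine, sizes, and data are placeholders.

```python
# Hedged sketch: block-wise parallel processing of one large image channel
# with logging, in the spirit of the server-based script described above.
import logging
import numpy as np
from concurrent.futures import ThreadPoolExecutor

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

def process_block(index, block):
    """Placeholder per-block step (e.g. pre-processing + segmentation)."""
    threshold = block.mean() + block.std()          # toy 'segmentation'
    n_foreground = int((block > threshold).sum())
    logging.info("block %s done, %d foreground pixels", index, n_foreground)
    return index, n_foreground

def run(image, block=64):
    """Split a 2D channel into blocks and process them in parallel."""
    tasks = [((r, c), image[r:r + block, c:c + block])
             for r in range(0, image.shape[0], block)
             for c in range(0, image.shape[1], block)]
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(process_block, idx, blk) for idx, blk in tasks]
        return dict(f.result() for f in futures)

channel = np.random.rand(256, 512)                  # stand-in for one channel
results = run(channel)
print(len(results), "blocks processed")
```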
Multispectral imaging of plant stress for detection of CO2 leaking from underground
NASA Astrophysics Data System (ADS)
Rouse, J.; Shaw, J. A.; Repasky, K. S.; Lawrence, R. L.
2008-12-01
Multispectral imaging of plant stress is a potentially useful method of detecting CO2 leaking from underground. During the summers of 2007 and 2008, we deployed a multispectral imager for vegetation sensing as part of an underground CO2 release experiment conducted at the Zero Emission Research and Technology (ZERT) field site near the Montana State University campus in Bozeman, Montana. The imager was mounted on a low tower and observed the vegetation in a region near an underground pipe during a multi-week CO2 release. The imager was calibrated to measure absolute reflectance, from which vegetation indices were calculated as a measure of vegetation health. The temporal evolution of these indices over the course of the experiment shows that the vegetation nearest the pipe exhibited more stress than the vegetation located farther from the pipe. The imager observed notably increased stress in vegetation at locations exhibiting particularly high flux of CO2 from the ground into the atmosphere. These data from the 2007 and 2008 experiments will be used to demonstrate the utility of a tower-mounted multispectral imaging system for detecting CO2 leakage from below ground, with the ability to operate continuously during clear and cloudy conditions.
Geometric Calibration and Radiometric Correction of the Maia Multispectral Camera
NASA Astrophysics Data System (ADS)
Nocerino, E.; Dubbini, M.; Menna, F.; Remondino, F.; Gattelli, M.; Covi, D.
2017-10-01
Multispectral imaging is a widely used remote sensing technique whose applications range from agriculture to environmental monitoring, and from food quality checks to cultural heritage diagnostics. A variety of multispectral imaging sensors are available on the market, many of them designed to be mounted on different platforms, especially small drones. This work focuses on the geometric and radiometric characterization of a brand-new, lightweight, low-cost multispectral camera called MAIA. The MAIA camera is equipped with nine sensors, allowing for the acquisition of images in the visible and near-infrared parts of the electromagnetic spectrum. Two versions are available, characterised by different sets of band-pass filters inspired by the sensors mounted on the WorldView-2 and Sentinel-2 satellites, respectively. The camera details and the procedures developed for geometric calibration and radiometric correction are presented in the paper.
Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition.
Park, Chulhee; Kang, Moon Gi
2016-05-18
A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible-band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome this color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible-band component and the NIR-band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors.
Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition
Park, Chulhee; Kang, Moon Gi
2016-01-01
A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible-band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome this color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible-band component and the NIR-band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors. PMID:27213381
Development of online lines-scan imaging system for chicken inspection and differentiation
NASA Astrophysics Data System (ADS)
Yang, Chun-Chieh; Chan, Diane E.; Chao, Kuanglin; Chen, Yud-Ren; Kim, Moon S.
2006-10-01
An online line-scan imaging system was developed for differentiation of wholesome and systemically diseased chickens. The hyperspectral imaging system used in this research can be directly converted to multispectral operation and would provide the ideal implementation of essential features for data-efficient high-speed multispectral classification algorithms. The imaging system consisted of an electron-multiplying charge-coupled-device (EMCCD) camera and an imaging spectrograph for line-scan images. The system scanned the surfaces of chicken carcasses on an eviscerating line at a poultry processing plant in December 2005. A method was created to recognize birds entering and exiting the field of view, and to locate a Region of Interest on the chicken images from which useful spectra were extracted for analysis. From analysis of the difference spectra between wholesome and systemically diseased chickens, four wavelengths of 468 nm, 501 nm, 582 nm and 629 nm were selected as key wavelengths for differentiation. The method of locating the Region of Interest will also have practical application in multispectral operation of the line-scan imaging system for online chicken inspection. This line-scan imaging system makes possible the implementation of multispectral inspection using the key wavelengths determined in this study with minimal software adaptations and without the need for cross-system calibration.
Resolution Enhancement of Hyperion Hyperspectral Data using Ikonos Multispectral Data
2007-09-01
spatial-resolution hyperspectral image to produce a sharpened product. The result is a product that has the spectral properties of the ... multispectral sensors. In this work, we examine the benefits of combining data from high-spatial-resolution, low-spectral-resolution spectral imaging ... sensors with data obtained from high-spectral-resolution, low-spatial-resolution spectral imaging sensors.
NASA Technical Reports Server (NTRS)
1998-01-01
Under a Jet Propulsion Laboratory SBIR (Small Business Innovative Research), Cambridge Research and Instrumentation Inc. developed a new class of filters for the construction of small, low-cost multispectral imagers. The VariSpec liquid crystal filter enables users to obtain multispectral, ultra-high-resolution images using a monochrome CCD (charge-coupled device) camera. Application areas include biomedical imaging, remote sensing, and machine vision.
Fingerprint recognition of alien invasive weeds based on the texture character and machine learning
NASA Astrophysics Data System (ADS)
Yu, Jia-Jia; Li, Xiao-Li; He, Yong; Xu, Zheng-Hao
2008-11-01
A multi-spectral imaging technique based on texture analysis and machine learning was proposed to discriminate alien invasive weeds with similar outlines but different categories. The objectives of this study were to investigate the feasibility of using multi-spectral imaging, especially the near-infrared (NIR) channel (800 nm+/-10 nm), to find the weeds' fingerprints, and to validate the performance with specific eigenvalues from the co-occurrence matrix. Veronica polita Pries, Veronica persica Poir, longtube ground ivy, and Laminum amplexicaule Linn. were selected for this study; they have different effects in the field and are alien invasive species in China. 307 weed leaf images were randomly selected for the calibration set, while the remaining 207 samples formed the prediction set. All images were pretreated with a Wallis filter to correct for noise caused by uneven lighting. The gray-level co-occurrence matrix was applied to extract texture characteristics, which reflect the density, randomness, correlation, contrast and homogeneity of texture with different algorithms. Three channels (a green channel at 550 nm+/-10 nm, a red channel at 650 nm+/-10 nm and an NIR channel at 800 nm+/-10 nm) were calculated separately to obtain the eigenvalues. Least-squares support vector machines (LS-SVM) were applied to discriminate the categories of weeds using the eigenvalues from the co-occurrence matrix. Finally, a recognition ratio of 83.35% was obtained with the NIR channel, better than the results for the green channel (76.67%) and the red channel (69.46%). The prediction results of 81.35% indicated that the selected eigenvalues reflect the main characteristics of the weeds' fingerprints based on multi-spectral imaging (especially the NIR channel) and the LS-SVM model.
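The following sketch illustrates the texture-plus-classifier pipeline described above using scikit-image GLCM features and a standard SVM in place of the paper's LS-SVM; the patches, labels, and hyper-parameters are placeholders.

```python
# Hedged sketch: GLCM texture features from NIR patches plus an SVM classifier
# (standing in for the paper's LS-SVM).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def glcm_features(nir_patch):
    """Contrast, homogeneity, correlation and energy of an 8-bit NIR patch."""
    glcm = graycomatrix(nir_patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity", "correlation", "energy")])

rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(80, 32, 32), dtype=np.uint8)  # NIR patches
labels = rng.integers(0, 4, size=80)                                # 4 weed species

X = np.array([glcm_features(p) for p in patches])
clf = SVC(kernel="rbf", C=10.0, gamma="scale")
print(cross_val_score(clf, X, labels, cv=5).mean())
```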
NASA Astrophysics Data System (ADS)
Dougakiuchi, Tatsuo; Kawada, Yoichi; Takebe, Gen
2018-03-01
We demonstrate the continuous multispectral imaging of surface phonon polaritons (SPhPs) on silicon carbide excited by an external cavity quantum cascade laser using scattering-type scanning near-field optical microscopy. The launched SPhPs were well characterized via the confirmation that the theoretical dispersion relation and measured in-plane wave vectors are in excellent agreement in the entire measurement range. The proposed scheme, which can excite and observe SPhPs with an arbitrary wavelength that effectively covers the spectral gap of CO2 lasers, is expected to be applicable for studies of near-field optics and for various applications based on SPhPs.
NASA Astrophysics Data System (ADS)
Renaud, Rémi; Bendahmane, Mounir; Chery, Romain; Martin, Claire; Gurden, Hirac; Pain, Frederic
2012-06-01
Wide-field multispectral imaging of light backscattered by brain tissues provides maps of hemodynamic changes (total blood volume and oxygenation) following activation. This technique relies on fitting the reflectance images obtained at two or more wavelengths using a modified Beer-Lambert law [1,2]. It has been successfully applied to study the activation of several sensory cortices in the anesthetized rodent using visible light [1-5]. We have recently carried out the first multispectral imaging in the olfactory bulb (OB) of anesthetized rats [6]. However, the optimization of the wavelength choice has not been discussed in terms of cross-talk and uniqueness of the estimated parameters (blood volume and saturation maps), although this point was shown to be crucial for similar studies in diffuse optical imaging in humans [7-10]. We have studied theoretically and experimentally the optimal sets of wavelengths for multispectral imaging of rodent brain activation in the visible. Sets of optimal wavelengths have been identified and validated in vivo for multispectral imaging of the OB of rats following an odor stimulus. We studied the influence of the wavelength sets on the magnitude and time courses of the oxy- and deoxyhemoglobin concentration variations, as well as on the spatial extent of activated brain areas following stimulation. Beyond the estimation of hemodynamic parameters from multispectral reflectance data, we repeatedly observed, for all wavelengths, a decrease of light reflectance. For wavelengths longer than 590 nm, these observations differ from those made in the somatosensory and barrel cortex and question the basis of the reflectance changes during activation in the OB. To address this issue, Monte Carlo simulations (MCS) were carried out to assess the relative contributions of absorption, scattering and anisotropy changes to the intrinsic optical imaging signals in the somatosensory cortex (SsC) and OB models.
NASA Astrophysics Data System (ADS)
Yu, Shanshan; Murakami, Yuri; Obi, Takashi; Yamaguchi, Masahiro; Ohyama, Nagaaki
2006-09-01
The article proposes a multispectral image compression scheme using a nonlinear spectral transform for better colorimetric and spectral reproducibility. In the method, we show the reduction of colorimetric error under a defined viewing illuminant, and also that the spectral accuracy can be improved simultaneously, using a nonlinear spectral transform called Labplus, which takes into account the nonlinearity of human color vision. Moreover, we show that the addition of diagonal matrices to Labplus can further preserve the spectral accuracy and has a generalized effect of improving the colorimetric accuracy under viewing illuminants other than the defined one. Finally, we discuss the usage of a first-order Markov model to form the analysis vectors for the higher-order channels in Labplus to reduce the computational complexity. We implement a multispectral image compression system that integrates Labplus with JPEG2000 for high colorimetric and spectral reproducibility. Experimental results for a 16-band multispectral image show the effectiveness of the proposed scheme.
The Panoramic Camera (PanCam) Instrument for the ESA ExoMars Rover
NASA Astrophysics Data System (ADS)
Griffiths, A.; Coates, A.; Jaumann, R.; Michaelis, H.; Paar, G.; Barnes, D.; Josset, J.
The recently approved ExoMars rover is the first element of the ESA Aurora programme and is slated to deliver the Pasteur exobiology payload to Mars by 2013. The 0.7 kg Panoramic Camera will provide multispectral stereo images with a 65° field-of-view (1.1 mrad/pixel) and high-resolution (85 µrad/pixel) monoscopic "zoom" images with a 5° field-of-view. The stereo Wide Angle Cameras (WAC) are based on Beagle 2 Stereo Camera System heritage. The Panoramic Camera instrument is designed to fulfil the digital terrain mapping requirements of the mission as well as providing multispectral geological imaging, colour and stereo panoramic images, and solar images for water vapour abundance and dust optical depth measurements, and to observe retrieved subsurface samples before ingestion into the rest of the Pasteur payload. Additionally, the High Resolution Camera (HRC) can be used for high-resolution imaging of interesting targets detected in the WAC panoramas and of inaccessible locations on crater or valley walls.
Pansharpening Techniques to Detect Mass Monument Damaging in Iraq
NASA Astrophysics Data System (ADS)
Baiocchi, V.; Bianchi, A.; Maddaluno, C.; Vidale, M.
2017-05-01
The recent mass destruction of monuments in Iraq cannot be monitored with terrestrial survey methodologies, for obvious reasons of safety. For the same reasons, the use of classical aerial photogrammetry is not advisable, so it was natural to turn to multispectral Very High Resolution (VHR) satellite imagery. Nowadays, VHR satellite image resolutions are very close to those of airborne photogrammetric images, and the images are usually acquired in multispectral mode. The combination of the various bands of the images is called pan-sharpening, and it can be carried out using different algorithms and strategies. The correct pan-sharpening methodology for a specific image must be chosen considering the specific multispectral characteristics of the satellite used and the particular application. In this paper, a first definition of guidelines for the use of VHR multispectral imagery to detect monument destruction in unsafe areas is reported. The proposed methodology, agreed with UNESCO and soon to be used in Libya for the coastal area, has produced a first report delivered to the Iraqi authorities. Some of the most evident examples are reported to show the capabilities of identifying damage using VHR images.
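As a hedged illustration of the kind of algorithm being compared, the snippet below implements a simple Brovey-transform pan-sharpening; it is not the guideline methodology of the paper, and the inputs are placeholders assumed to be co-registered, with the multispectral bands resampled to the panchromatic grid.

```python
# Hedged sketch: Brovey-transform pan-sharpening (ratio-based detail injection).
import numpy as np

def brovey(pan, ms_up, eps=1e-9):
    """pan: (H, W); ms_up: (bands, H, W). Returns sharpened bands."""
    intensity = ms_up.mean(axis=0)                 # synthetic intensity image
    gain = pan / (intensity + eps)
    return ms_up * gain                            # rescale bands by the ratio

pan = np.random.rand(256, 256)
ms_up = np.random.rand(4, 256, 256)
print(brovey(pan, ms_up).shape)
```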
Liu, Bo; Zhang, Lifu; Zhang, Xia; Zhang, Bing; Tong, Qingxi
2009-01-01
Data simulation is widely used in remote sensing to produce imagery for a new sensor in the design stage, for scale issues in some special applications, or for testing novel algorithms. Hyperspectral data can provide more abundant information than traditional multispectral data and thus greatly extend the range of remote sensing applications. Unfortunately, hyperspectral data are much more difficult and expensive to acquire and were not available prior to the development of operational hyperspectral instruments, whereas large amounts of multispectral data have been accumulated around the world over the past several decades. Therefore, it is reasonable to examine means of using these multispectral data to simulate or construct hyperspectral data, especially in situations where hyperspectral data are necessary but hard to acquire. Here, a method based on spectral reconstruction is proposed to simulate hyperspectral data (Hyperion data) from multispectral Advanced Land Imager (ALI) data. This method involves extracting the inherent information of the source data and reassigning it to the newly simulated data. A total of 106 bands of Hyperion data were simulated from ALI data covering the same area. To evaluate this method, we compare the simulated and original Hyperion data by visual interpretation, statistical comparison, and classification. The results generally showed good performance of this method and indicated that most bands were well simulated, with the information both well preserved and well presented. This makes it possible to simulate hyperspectral data from multispectral data for testing the performance of algorithms, extending the use of multispectral data, and helping the design of virtual sensors. PMID:22574064
Mitigating fluorescence spectral overlap in wide-field endoscopic imaging
Hou, Vivian; Nelson, Leonard Y.; Seibel, Eric J.
2013-01-01
The number of molecular species suitable for multispectral fluorescence imaging is limited due to the overlap of the emission spectra of indicator fluorophores, e.g., dyes and nanoparticles. To remove fluorophore emission cross-talk in wide-field multispectral fluorescence molecular imaging, we evaluate three different solutions: (1) image stitching, (2) concurrent imaging with a cross-talk ratio subtraction algorithm, and (3) frame-sequential imaging. A phantom with fluorophore emission cross-talk is fabricated, and a 1.2-mm ultrathin scanning fiber endoscope (SFE) is used to test and compare these approaches. Results show that fluorophore emission cross-talk could be successfully avoided or significantly reduced. Near term, the concurrent imaging method of wide-field multispectral fluorescence SFE is viable for early stage cancer detection and localization in vivo. Furthermore, a means to enhance the exogenous fluorescence target-to-background ratio by reducing the tissue autofluorescence background is demonstrated. PMID:23966226
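Solution (2) can be sketched in a few lines: subtract a calibration-derived fraction of fluorophore A's channel from channel B; the ratio below is a placeholder, not a value from the paper.

```python
# Minimal sketch of cross-talk ratio subtraction for concurrent two-channel
# imaging. The cross-talk ratio is a placeholder calibration value.
import numpy as np

def subtract_crosstalk(channel_a, channel_b, crosstalk_ratio=0.18):
    """channel_b contains B's signal plus crosstalk_ratio * A's signal."""
    corrected = channel_b - crosstalk_ratio * channel_a
    return np.clip(corrected, 0, None)

a = np.random.rand(64, 64)                    # fluorophore A image
b = 0.18 * a + 0.5 * np.random.rand(64, 64)   # contaminated channel B
print(subtract_crosstalk(a, b).mean())
```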
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Fuell, Kevin K.; LaFontaine, Frank; McGrath, Kevin; Smith, Matt
2013-01-01
Current and future satellite sensors provide remotely sensed quantities at a variety of wavelengths ranging from the visible to the passive microwave, from both geostationary and low-Earth orbits. The NASA Short-term Prediction Research and Transition (SPoRT) Center has a long history of providing multispectral imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard NASA's Terra and Aqua satellites in support of NWS forecast office activities. Products from MODIS have recently been extended to include a broader suite of multispectral imagery similar to those developed by EUMETSAT, based upon the spectral channels available from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) aboard METEOSAT-9. This broader suite includes products that discriminate between air mass types associated with synoptic-scale features, assist in the identification of dust, and improve upon paired channel difference detection of fog and low cloud events. Future instruments will continue the availability of these products and also expand upon current capabilities. The Advanced Baseline Imager (ABI) on GOES-R will improve the spectral, spatial, and temporal resolution of our current geostationary capabilities, and the recently launched Suomi National Polar-orbiting Partnership (S-NPP) carries instruments such as the Visible Infrared Imager Radiometer Suite (VIIRS), the Cross-track Infrared Sounder (CrIS), and the Advanced Technology Microwave Sounder (ATMS), which have unrivaled spectral and spatial resolution, as precursors to the JPSS era (i.e., the next generation of polar-orbiting satellites). New applications from VIIRS extend multispectral composites available from MODIS and SEVIRI while adding new capabilities through incorporation of additional CrIS channels or information from the Near Constant Contrast or "Day-Night Band", which provides moonlit reflectance from clouds and detection of fires or city lights. This presentation will review SPoRT, CIRA, and NRL collaborations regarding multispectral satellite imagery and recent applications within the operational forecasting environment.
NASA Astrophysics Data System (ADS)
Minet, Olaf; Scheibe, Patrick; Beuthan, Jürgen; Zabarylo, Urszula
2009-10-01
State-of-the-art image processing methods offer new possibilities for diagnosing diseases using scattered light. The optical diagnosis of rheumatism is taken as an example to show that the diagnostic sensitivity can be improved using overlapped pseudo-coloured images of different wavelengths, provided that multispectral images are recorded to compensate for any motion related artefacts which occur during examination.
NASA Astrophysics Data System (ADS)
Minet, Olaf; Scheibe, Patrick; Beuthan, Jürgen; Zabarylo, Urszula
2010-02-01
State-of-the-art image processing methods offer new possibilities for diagnosing diseases using scattered light. The optical diagnosis of rheumatism is taken as an example to show that the diagnostic sensitivity can be improved using overlapped pseudo-coloured images of different wavelengths, provided that multispectral images are recorded to compensate for any motion related artefacts which occur during examination.
NASA Technical Reports Server (NTRS)
1982-01-01
The state-of-the-art of multispectral sensing is reviewed and recommendations for future research and development are proposed. Specifically, two generic sensor concepts are discussed. One is the multispectral pushbroom sensor utilizing linear array technology, which operates in six spectral bands, including two in the SWIR region, and incorporates capabilities for stereo and cross-track pointing. The second concept is the imaging spectrometer (IS), which incorporates a dispersive element and area arrays to provide both spectral and spatial information simultaneously. Other key technology areas include very large scale integration and the computer-aided design of these devices.
NASA Technical Reports Server (NTRS)
Bell, J. F., III; Arneson, H. M.; Farrand, W. H.; Goetz, W.; Hayes, A. G.; Herkenhoff, K.; Johnson, M. J.; Johnson, J. R.; Joseph, J.; Kinch, K.
2005-01-01
Introduction. The panoramic camera (Pancam) multispectral, stereoscopic imaging systems on the Mars Exploration Rovers Spirit and Opportunity [1] have acquired and downlinked more than 45,000 images (35 Gbits of data) over more than 700 combined sols of operation on Mars as of early January 2005. A large subset of these images was acquired as part of 26 large multispectral and/or broadband "albedo" panoramas (15 on Spirit, 11 on Opportunity) covering large ranges of azimuth (12 spanning 360°) and designed to characterize major regional color and albedo characteristics of the landing sites and various points along both rover traverses.
River velocities from sequential multispectral remote sensing images
NASA Astrophysics Data System (ADS)
Chen, Wei; Mied, Richard P.
2013-06-01
We address the problem of extracting surface velocities from a pair of multispectral remote sensing images over rivers using a new nonlinear multiple-tracer form of the global optimal solution (GOS). The derived velocity field is a valid solution across the image domain to the nonlinear system of equations obtained by minimizing a cost function inferred from the conservation constraint equations for multiple tracers. This is done by deriving an iteration equation for the velocity, based on the multiple-tracer displaced frame difference equations, and a local approximation to the velocity field. The number of velocity equations is greater than the number of velocity components, and thus the system over-constrains the solution. The iterative technique uses Gauss-Newton and Levenberg-Marquardt methods and our own algorithm of the progressive relaxation of the over-constraint. We demonstrate the nonlinear multiple-tracer GOS technique with sequential multispectral Landsat and ASTER images over a portion of the Potomac River in MD/VA, and derive a dense field of accurate velocity vectors. We compare the GOS river velocities with those from over 12 years of data at four NOAA reference stations, and find good agreement. We discuss how to find the appropriate spatial and temporal resolutions to allow optimization of the technique for specific rivers.
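The multiple-tracer idea above can be illustrated with a much simpler gradient-based scheme than the authors' GOS algorithm: each tracer band contributes one linearized conservation equation I_x u + I_y v + I_t = 0, so several bands over-constrain the two unknown velocity components and the system can be solved by least squares. The sketch below (Python with NumPy; synthetic images, not Landsat/ASTER data) only illustrates that over-constraint and is not the paper's method.

```python
# Minimal sketch (not the GOS algorithm): estimate one (u, v) for a patch from
# several tracer bands by least squares on the linearized conservation equations
#   I_x * u + I_y * v + I_t = 0   (one set of equations per tracer band)
import numpy as np

def patch_velocity(band_pairs, dt=1.0):
    """band_pairs: list of (image_t0, image_t1) 2D arrays, one pair per tracer."""
    A_rows, b_rows = [], []
    for img0, img1 in band_pairs:
        gy, gx = np.gradient(0.5 * (img0 + img1))   # spatial gradients (y, x order)
        gt = (img1 - img0) / dt                     # temporal gradient
        A_rows.append(np.column_stack([gx.ravel(), gy.ravel()]))
        b_rows.append(-gt.ravel())
    A, b = np.vstack(A_rows), np.concatenate(b_rows)
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)  # over-constrained least squares
    return u, v

# Synthetic example: a smooth field translated by one pixel per frame in x
yy, xx = np.mgrid[0:64, 0:64]
base = np.sin(0.2 * xx) + np.cos(0.15 * yy)
shifted = np.roll(base, 1, axis=1)
print(patch_velocity([(base, shifted), (2.0 * base, 2.0 * shifted)]))
```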
Optimal optical filters of fluorescence excitation and emission for poultry fecal detection
USDA-ARS?s Scientific Manuscript database
Purpose: An analytic method to design excitation and emission filters of a multispectral fluorescence imaging system is proposed and was demonstrated in an application to poultry fecal inspection. Methods: A mathematical model of a multispectral imaging system is proposed and its system parameters, ...
Leica ADS40 Sensor for Coastal Multispectral Imaging
NASA Technical Reports Server (NTRS)
Craig, John C.
2007-01-01
The Leica ADS40 Sensor as it is used for coastal multispectral imaging is presented. The contents include: 1) Project Area Overview; 2) Leica ADS40 Sensor; 3) Focal Plate Arrangements; 4) Trichroid Filter; 5) Gradient Correction; 6) Image Acquisition; 7) Remote Sensing and ADS40; 8) Band comparisons of Satellite and Airborne Sensors; 9) Impervious Surface Extraction; and 10) Impervious Surface Details.
Multiplex Quantitative Histologic Analysis of Human Breast Cancer Cell Signaling and Cell Fate
2010-05-01
Keywords: breast cancer, cell signaling, cell proliferation, histology, image analysis. Only fragments of the abstract are recoverable: the work includes software (FARSIGHT) for automated multispectral image analysis that segments structures revealed by individual stains in multiplex combinations. Under Task 3, computational algorithms for multispectral immunohistological image analysis were developed; the FARSIGHT software was developed to quantify intrinsic ...
A Comparative Study of Land Cover Classification by Using Multispectral and Texture Data
Qadri, Salman; Khan, Dost Muhammad; Ahmad, Farooq; Qadri, Syed Furqan; Babar, Masroor Ellahi; Shahid, Muhammad; Ul-Rehman, Muzammil; Razzaq, Abdul; Shah Muhammad, Syed; Fahad, Muhammad; Ahmad, Sarfraz; Pervez, Muhammad Tariq; Naveed, Nasir; Aslam, Naeem; Jamil, Mutiullah; Rehmani, Ejaz Ahmad; Ahmad, Nazir; Akhtar Khan, Naeem
2016-01-01
The main objective of this study is to find out the importance of a machine vision approach for the classification of five types of land cover: bare land, desert rangeland, green pasture, fertile cultivated land, and Sutlej river land. A novel spectra-statistical framework is designed to classify these land cover types accurately. Multispectral data of these land covers were acquired by using a handheld device named a multispectral radiometer in the form of five spectral bands (blue, green, red, near infrared, and shortwave infrared), while texture data were acquired with a digital camera by transforming the acquired images into 229 texture features for each image. The 30 most discriminant features of each image were obtained by integrating three statistical feature selection techniques, namely Fisher, Probability of Error plus Average Correlation, and Mutual Information (F + PA + MI). Clustering of the selected texture data was verified by nonlinear discriminant analysis, while a linear discriminant analysis approach was applied to the multispectral data. For classification, the texture and multispectral data were fed to an artificial neural network (ANN: n-class). Using an 80-20 cross-validation method, we obtained accuracies of 91.332% for the texture data and 96.40% for the multispectral data. PMID:27376088
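A minimal sketch of the 80-20 validation workflow described above is given below, with synthetic five-band features for five hypothetical land-cover classes and scikit-learn's MLPClassifier standing in for the "ANN: n-class" network; none of the numbers or class structure come from the study.

```python
# Hedged sketch of an 80-20 train/test split with a small feed-forward network on
# synthetic 5-band spectral features (blue, green, red, NIR, SWIR); illustrative only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n_per_class, n_bands, n_classes = 200, 5, 5
means = rng.uniform(0.1, 0.6, size=(n_classes, n_bands))   # hypothetical class means
X = np.vstack([m + 0.03 * rng.standard_normal((n_per_class, n_bands)) for m in means])
y = np.repeat(np.arange(n_classes), n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=0)       # the 80-20 split
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```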
Multispectral imaging determination of pigment concentration profiles in meat
NASA Astrophysics Data System (ADS)
Sáenz Gamasa, Carlos; Hernández Salueña, Begoña; Alberdi Odriozola, Coro; Alfonso Ábrego, Santiago; Berrogui Arizu, Miguel; Diñeiro Rubial, José Manuel
2006-01-01
The possibility of using multispectral techniques to determine the concentration profiles of myoglobin derivatives as a function of the distance to the meat surface during meat oxygenation is demonstrated. Concentration profiles of reduced myoglobin (Mb), oxygenated oxymyoglobin (MbO2), and oxidized metmyoglobin (MMb) are determined with a spatial resolution better than 0.01235 mm/pixel. Pigment concentrations are calculated using (K/S) ratios at isosbestic points (474, 525, 572 and 610 nm) of the three forms of myoglobin pigments. This technique greatly improves upon previous methods, based on visual determination of pigment layers by their color, which allowed only estimates of pigment layer position and width. The multispectral technique avoids observer- and illumination-related bias in the pigment layer determination.
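The (K/S) quantity mentioned above is the Kubelka-Munk ratio, K/S = (1 - R)^2 / (2R), evaluated here at the cited isosbestic wavelengths. The snippet below is a generic illustration with a random reflectance cube; the band ordering and array shapes are assumptions, not the authors' data.

```python
# Sketch of the Kubelka-Munk (K/S) quantity commonly used for pigment work:
#   K/S = (1 - R)^2 / (2 R),  with R the diffuse reflectance in (0, 1].
# The reflectance cube and band indexing are placeholders.
import numpy as np

ISOSBESTIC_NM = [474, 525, 572, 610]            # wavelengths cited in the abstract

def k_over_s(reflectance):
    r = np.clip(reflectance, 1e-6, 1.0)         # avoid division by zero
    return (1.0 - r) ** 2 / (2.0 * r)

# Hypothetical multispectral cube: (bands, rows, cols), bands ordered as above
cube = np.random.default_rng(1).uniform(0.05, 0.9, size=(4, 128, 128))
ks_maps = {wl: k_over_s(cube[i]) for i, wl in enumerate(ISOSBESTIC_NM)}
print({wl: float(m.mean()) for wl, m in ks_maps.items()})
```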
Investigation related to multispectral imaging systems
NASA Technical Reports Server (NTRS)
Nalepka, R. F.; Erickson, J. D.
1974-01-01
A summary of technical progress made during a five year research program directed toward the development of operational information systems based on multispectral sensing and the use of these systems in earth-resource survey applications is presented. Efforts were undertaken during this program to: (1) improve the basic understanding of the many facets of multispectral remote sensing, (2) develop methods for improving the accuracy of information generated by remote sensing systems, (3) improve the efficiency of data processing and information extraction techniques to enhance the cost-effectiveness of remote sensing systems, (4) investigate additional problems having potential remote sensing solutions, and (5) apply the existing and developing technology for specific users and document and transfer that technology to the remote sensing community.
Multispectral imaging reveals biblical-period inscription unnoticed for half a century
Cordonsky, Michael; Levin, David; Moinester, Murray; Sass, Benjamin; Turkel, Eli; Piasetzky, Eli; Finkelstein, Israel
2017-01-01
Most surviving biblical period Hebrew inscriptions are ostraca—ink-on-clay texts. They are poorly preserved and once unearthed, fade rapidly. Therefore, proper and timely documentation of ostraca is essential. Here we show a striking example of a hitherto invisible text on the back side of an ostracon revealed via multispectral imaging. This ostracon, found at the desert fortress of Arad and dated to ca. 600 BCE (the eve of Judah’s destruction by Nebuchadnezzar), has been on display for half a century. Its front side has been thoroughly studied, while its back side was considered blank. Our research revealed three lines of text on the supposedly blank side and four "new" lines on the front side. Our results demonstrate the need for multispectral image acquisition for both sides of all ancient ink ostraca. Moreover, in certain cases we recommend employing multispectral techniques for screening newly unearthed ceramic potsherds prior to disposal. PMID:28614416
Multispectral imaging reveals biblical-period inscription unnoticed for half a century.
Faigenbaum-Golovin, Shira; Mendel-Geberovich, Anat; Shaus, Arie; Sober, Barak; Cordonsky, Michael; Levin, David; Moinester, Murray; Sass, Benjamin; Turkel, Eli; Piasetzky, Eli; Finkelstein, Israel
2017-01-01
Most surviving biblical period Hebrew inscriptions are ostraca-ink-on-clay texts. They are poorly preserved and once unearthed, fade rapidly. Therefore, proper and timely documentation of ostraca is essential. Here we show a striking example of a hitherto invisible text on the back side of an ostracon revealed via multispectral imaging. This ostracon, found at the desert fortress of Arad and dated to ca. 600 BCE (the eve of Judah's destruction by Nebuchadnezzar), has been on display for half a century. Its front side has been thoroughly studied, while its back side was considered blank. Our research revealed three lines of text on the supposedly blank side and four "new" lines on the front side. Our results demonstrate the need for multispectral image acquisition for both sides of all ancient ink ostraca. Moreover, in certain cases we recommend employing multispectral techniques for screening newly unearthed ceramic potsherds prior to disposal.
NASA Astrophysics Data System (ADS)
Lo, Mei-Chun; Hsieh, Tsung-Hsien; Perng, Ruey-Kuen; Chen, Jiong-Qiao
2010-01-01
The aim of this research is to derive illuminant-independent HDR imaging modules which can optimally reconstruct, multispectrally, every color of concern in high-dynamic-range original images for cross-media color reproduction applications. Each module, based on either a broadband or a multispectral approach, incorporates models of perceptual HDR tone mapping and device characterization. In this study, an HDR digital camera with xvYCC format was used to capture HDR scene images for testing. A tone-mapping module was derived based on a multiscale representation of the human visual system, using equations similar to a photoreceptor adaptation equation of the Michaelis-Menten form. Additionally, an adaptive bilateral gamut-mapping algorithm, using a previously derived multiple-converging-points approach, was incorporated with or without adaptive unsharp masking (USM) to optimize HDR image rendering. An LCD with the Adobe RGB (D65) standard color space was used as a soft-proofing platform to display the original HDR RGB images and to evaluate both the rendition quality and the prediction performance of the derived modules. Another LCD with the sRGB standard color space was used to test the gamut-mapping algorithms integrated with the derived tone-mapping module.
Multispectral Image Enhancement Through Adaptive Wavelet Fusion
2016-09-14
This research developed a multiresolution image fusion scheme based on guided filtering. Guided filtering can effectively reduce noise while preserving detail boundaries. When applied in an iterative mode, guided filtering selectively eliminates small-scale details while restoring larger-scale edges. The proposed multi-scale image fusion scheme achieves spatial consistency by using guided filtering both at ...
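For readers unfamiliar with the building block named above, the sketch below shows a standard single-channel guided filter (in the style of He et al.) built from box filters; it illustrates the edge-preserving smoothing behaviour only and is not the report's multiresolution fusion scheme.

```python
# A standard single-channel guided filter, included only to illustrate the
# edge-preserving building block named above; not the report's fusion scheme.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    """Edge-preserving smoothing of `src` steered by `guide` (both float 2D arrays)."""
    box = lambda x: uniform_filter(x, size=2 * radius + 1)
    mean_i, mean_p = box(guide), box(src)
    cov_ip = box(guide * src) - mean_i * mean_p
    var_i = box(guide * guide) - mean_i * mean_i
    a = cov_ip / (var_i + eps)            # local linear coefficient
    b = mean_p - a * mean_i
    return box(a) * guide + box(b)        # q = mean(a) * I + mean(b)

# Example: denoise a blocky image while keeping its edges
rng = np.random.default_rng(0)
img = np.kron(rng.random((8, 8)), np.ones((16, 16)))     # piecewise-constant "edges"
noisy = img + 0.05 * rng.standard_normal(img.shape)
smoothed = guided_filter(noisy, noisy, radius=4, eps=1e-2)
print("MAE noisy:", float(np.abs(noisy - img).mean()),
      "MAE filtered:", float(np.abs(smoothed - img).mean()))
```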
Analysis of Cultural Heritage by Accelerator Techniques and Analytical Imaging
NASA Astrophysics Data System (ADS)
Ide-Ektessabi, Ari; Toque, Jay Arre; Murayama, Yusuke
2011-12-01
In this paper we present the results of experimental investigations using two very important accelerator techniques: (1) synchrotron radiation XRF and XAFS; and (2) accelerator mass spectrometry and multispectral analytical imaging for the investigation of cultural heritage. We also introduce a complementary, noninvasive and nondestructive approach to the investigation of artworks that can be applied in situ. Four major projects will be discussed to illustrate the potential applications of these accelerator and analytical imaging techniques: (1) investigation of Mongolian textiles (Genghis Khan and Kublai Khan period) using XRF, AMS and electron microscopy; (2) XRF studies of pigments collected from Korean Buddhist paintings; (3) creating a database of elemental composition and spectral reflectance of more than 1000 Japanese pigments which have been used for traditional Japanese paintings; and (4) visible light-near infrared spectroscopy and multispectral imaging of degraded malachite and azurite. The XRF measurements of the Japanese and Korean pigments could be used to complement the results of pigment identification by analytical imaging through spectral reflectance reconstruction. On the other hand, analysis of the Mongolian textiles revealed that they were produced between the 12th and 13th centuries. Elemental analysis of the samples showed that they contained traces of gold, copper, iron and titanium. Based on the age and trace elements in the samples, it was concluded that the textiles were produced during the height of power of the Mongol empire, which makes them a valuable cultural heritage. Finally, the analysis of the degraded and discolored malachite and azurite demonstrates how multispectral analytical imaging could be used to complement the results of high energy-based techniques.
The Effectiveness of Hydrothermal Alteration Mapping based on Hyperspectral Data in Tropical Region
NASA Astrophysics Data System (ADS)
Muhammad, R. R. D.; Saepuloh, A.
2016-09-01
Hyperspectral remote sensing can be used to characterize targets at the Earth's surface based on their spectra. This capability is useful for mapping and characterizing the distribution of host rocks, alteration assemblages, and minerals. In contrast to multispectral sensors, hyperspectral sensors identify targets with high spectral resolution. The Wayang Windu geothermal field in West Java, Indonesia, was selected as the study area due to the existence of surface manifestations and a dense vegetation environment; the effectiveness of hyperspectral remote sensing in a tropical region was therefore the study objective. The Spectral Angle Mapper (SAM) method was used to detect the occurrence of clay minerals spatially from Hyperion data. The SAM reference reflectance spectra were obtained from field observations of altered materials. To assess the effectiveness of the hyperspectral data, we compared them with multispectral data from Landsat-8. The comparison was conducted on the SAM rule images from Hyperion and Landsat-8 and showed that the hyperspectral data were more accurate than the multispectral data: the Hyperion SAM rule images had lower (better) values than those from Landsat-8, an improvement of about 24%. This indicates that hyperspectral remote sensing is preferable for mineral mapping even though vegetation covers the study area.
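The core of the SAM method is the angle between each pixel spectrum and a reference spectrum; smaller angles indicate a closer match. A minimal sketch follows, with random arrays standing in for Hyperion or Landsat-8 pixels and for the field-measured reference spectrum.

```python
# Minimal Spectral Angle Mapper (SAM): angle between pixel spectra and a reference.
# Arrays are placeholders, not Hyperion/Landsat-8 data.
import numpy as np

def spectral_angle(pixels, reference):
    """pixels: (n_pixels, n_bands); reference: (n_bands,). Returns angles in radians."""
    num = pixels @ reference
    den = np.linalg.norm(pixels, axis=1) * np.linalg.norm(reference)
    return np.arccos(np.clip(num / den, -1.0, 1.0))

rng = np.random.default_rng(0)
ref = rng.uniform(0.1, 0.6, size=50)                 # stand-in alteration spectrum
pix = rng.uniform(0.1, 0.6, size=(1000, 50))         # stand-in image pixels
angles = spectral_angle(pix, ref)
print("mean angle (rad):", float(angles.mean()))
```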
A photogrammetric technique for generation of an accurate multispectral optical flow dataset
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2017-06-01
The presence of an accurate dataset is the key requirement for the successful development of an optical flow estimation algorithm. A large number of freely available optical flow datasets were developed in recent years and gave rise to many powerful algorithms. However, most of the datasets include only images captured in the visible spectrum. This paper is focused on the creation of a multispectral optical flow dataset with an accurate ground truth. The generation of an accurate ground truth optical flow is a rather complex problem, as no device for error-free optical flow measurement has been developed to date. Existing methods for ground truth optical flow estimation are based on hidden textures, 3D modelling or laser scanning. Such techniques either work only with synthetic optical flow or provide only a sparse ground truth optical flow. In this paper a new photogrammetric method for generation of an accurate ground truth optical flow is proposed. The method combines the accuracy and density of synthetic optical flow datasets with the flexibility of laser-scanning-based techniques. A multispectral dataset including various image sequences was generated using the developed method. The dataset is freely available on the accompanying web site.
NASA Astrophysics Data System (ADS)
Shigeta, Yusuke; Sato, Naoto; Kuniyil Ajith Singh, Mithun; Agano, Toshitaka
2018-02-01
Photoacoustic imaging is a hybrid biomedical imaging modality that has emerged over the last decade. In photoacoustic imaging, pulsed light absorbed by the target generates ultrasound that can be detected using a conventional ultrasound array. This ultrasound data can be used to reconstruct the location and spatial details of the intrinsic/extrinsic light absorbers in the tissue. Recently we reported on the development of a multi-wavelength high frame-rate LED-based photoacoustic/ultrasound imaging system (AcousticX). In this work, we photoacoustically characterize the absorption spectrum of ICG and porcine blood using LED arrays with multiple wavelengths (405, 420, 470, 520, 620, 660, 690, 750, 810, 850, 925, 980 nm). Measurements were performed in a simple reflection-mode configuration in which LED arrays were fixed on both sides of the linear-array ultrasound probe. The phantom consisted of micro-test tubes filled with ICG and porcine blood, which were placed in a tank filled with water. The photoacoustic spectrum obtained from our measurements matches well with the reference absorption spectrum. These results demonstrate the potential capability of our system in performing clinical/pre-clinical multispectral photoacoustic imaging.
NASA Astrophysics Data System (ADS)
Kelly, M. A.; Boldt, J.; Wilson, J. P.; Yee, J. H.; Stoffler, R.
2017-12-01
The multi-spectral STereo Atmospheric Remote Sensing (STARS) concept has the objective to provide high-spatial and -temporal-resolution observations of 3D cloud structures related to hurricane development and other severe weather events. The rapid evolution of severe weather demonstrates a critical need for mesoscale observations of severe weather dynamics, but such observations are rare, particularly over the ocean where extratropical and tropical cyclones can undergo explosive development. Coincident space-based measurements of wind velocity and cloud properties at the mesoscale remain a great challenge, but are critically needed to improve the understanding and prediction of severe weather and cyclogenesis. STARS employs a mature stereoscopic imaging technique on two satellites (e.g. two CubeSats, two hosted payloads) to simultaneously retrieve cloud motion vectors (CMVs), cloud-top temperatures (CTTs), and cloud geometric heights (CGHs) from multi-angle, multi-spectral observations of cloud features. STARS is a pushbroom system based on separate wide-field-of-view co-boresighted multi-spectral cameras in the visible, midwave infrared (MWIR), and longwave infrared (LWIR) with high spatial resolution (better than 1 km). The visible system is based on a pan-chromatic, low-light imager to resolve cloud structures under nighttime illumination down to ¼ moon. The MWIR instrument, which is being developed as a NASA ESTO Instrument Incubator Program (IIP) project, is based on recent advances in MWIR detector technology that requires only modest cooling. The STARS payload provides flexible options for spaceflight due to its low size, weight, power (SWaP) and very modest cooling requirements. STARS also meets AF operational requirements for cloud characterization and theater weather imagery. In this paper, an overview of the STARS concept, including the high-level sensor design, the concept of operations, and measurement capability will be presented.
Automatic Hotspot and Sun Glint Detection in UAV Multispectral Images
Ortega-Terol, Damian; Ballesteros, Rocio
2017-01-01
Recent advances in sensors, photogrammetry and computer vision have led to high levels of automation in the 3D reconstruction processes used for generating dense models and multispectral orthoimages from Unmanned Aerial Vehicle (UAV) images. However, these cartographic products are sometimes blurred and degraded due to sun reflection effects which reduce the image contrast and colour fidelity in photogrammetry and the quality of radiometric values in remote sensing applications. This paper proposes an automatic approach for detecting sun reflection problems (hotspot and sun glint) in multispectral images acquired with an Unmanned Aerial Vehicle (UAV), based on a photogrammetric strategy included in flight planning and control software developed by the authors. In particular, two main consequences are derived from the approach developed: (i) different areas of the images can be excluded since they contain sun reflection problems; (ii) the cartographic products obtained (e.g., digital terrain model, orthoimages) and the agronomical parameters computed (e.g., the normalized difference vegetation index, NDVI) are improved since radiometric defects in pixels are not considered. Finally, an accuracy assessment was performed in order to analyse the error in the detection process, yielding errors of around 10 pixels for a ground sample distance (GSD) of 5 cm, which is perfectly valid for agricultural applications. This error confirms that the precision in the detection of sun reflections can be guaranteed using this approach and current low-cost UAV technology. PMID:29036930
Automatic Hotspot and Sun Glint Detection in UAV Multispectral Images.
Ortega-Terol, Damian; Hernandez-Lopez, David; Ballesteros, Rocio; Gonzalez-Aguilera, Diego
2017-10-15
Recent advances in sensors, photogrammetry and computer vision have led to high levels of automation in the 3D reconstruction processes used for generating dense models and multispectral orthoimages from Unmanned Aerial Vehicle (UAV) images. However, these cartographic products are sometimes blurred and degraded due to sun reflection effects which reduce the image contrast and colour fidelity in photogrammetry and the quality of radiometric values in remote sensing applications. This paper proposes an automatic approach for detecting sun reflection problems (hotspot and sun glint) in multispectral images acquired with an Unmanned Aerial Vehicle (UAV), based on a photogrammetric strategy included in flight planning and control software developed by the authors. In particular, two main consequences are derived from the approach developed: (i) different areas of the images can be excluded since they contain sun reflection problems; (ii) the cartographic products obtained (e.g., digital terrain model, orthoimages) and the agronomical parameters computed (e.g., the normalized difference vegetation index, NDVI) are improved since radiometric defects in pixels are not considered. Finally, an accuracy assessment was performed in order to analyse the error in the detection process, yielding errors of around 10 pixels for a ground sample distance (GSD) of 5 cm, which is perfectly valid for agricultural applications. This error confirms that the precision in the detection of sun reflections can be guaranteed using this approach and current low-cost UAV technology.
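A small sketch of the downstream benefit mentioned in point (ii): computing NDVI while excluding pixels flagged as hotspot or sun glint. The band arrays and the detection mask below are placeholders; the flagging itself is the paper's photogrammetric strategy and is not reproduced here.

```python
# NDVI = (NIR - Red) / (NIR + Red), with sun-glint / hotspot pixels masked out.
# The bands and the glint mask are synthetic placeholders.
import numpy as np

def ndvi(red, nir, bad_pixel_mask=None):
    red = red.astype(float)
    nir = nir.astype(float)
    out = (nir - red) / np.clip(nir + red, 1e-6, None)
    if bad_pixel_mask is not None:
        out = np.where(bad_pixel_mask, np.nan, out)   # drop flagged pixels
    return out

rng = np.random.default_rng(3)
red_band = rng.uniform(0.02, 0.2, (256, 256))
nir_band = rng.uniform(0.2, 0.6, (256, 256))
glint = rng.random((256, 256)) > 0.98                  # hypothetical detection mask
print(float(np.nanmean(ndvi(red_band, nir_band, glint))))
```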
Implementation and evaluation of ILLIAC 4 algorithms for multispectral image processing
NASA Technical Reports Server (NTRS)
Swain, P. H.
1974-01-01
Data concerning a multidisciplinary and multi-organizational effort to implement multispectral data analysis algorithms on a revolutionary computer, the Illiac 4, are reported. The effectiveness and efficiency of implementing the digital multispectral data analysis techniques for producing useful land use classifications from satellite collected data were demonstrated.
Effects of spatial resolution ratio in image fusion
Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M.
2008-01-01
In image fusion, the spatial resolution ratio can be defined as the ratio between the spatial resolution of the high-resolution panchromatic image and that of the low-resolution multispectral image. This paper attempts to assess the effects of the spatial resolution ratio of the input images on the quality of the fused image. Experimental results indicate that a spatial resolution ratio of 1:10 or higher is desired for optimal multisensor image fusion provided the input panchromatic image is not downsampled to a coarser resolution. Due to the synthetic pixels generated from resampling, the quality of the fused image decreases as the spatial resolution ratio decreases (e.g. from 1:10 to 1:30). However, even with a spatial resolution ratio as small as 1:30, the quality of the fused image is still better than the original multispectral image alone for feature interpretation. In cases where the spatial resolution ratio is too small (e.g. 1:30), to obtain better spectral integrity of the fused image, one may downsample the input high-resolution panchromatic image to a slightly lower resolution before fusing it with the multispectral image.
Distant Determination of Bilirubin Distribution in Skin by Multi-Spectral Imaging
NASA Astrophysics Data System (ADS)
Saknite, I.; Jakovels, D.; Spigulis, J.
2011-01-01
For mapping the bilirubin distribution in bruised skin the multi-spectral imaging technique was employed, which made it possible to observe temporal changes of the bilirubin content in skin photo-types II and III. The obtained results confirm the clinical potential of this technique for skin bilirubin diagnostics.
USDA-ARS?s Scientific Manuscript database
Using unmanned aircraft systems (UAS) as remote sensing platforms offers the unique ability for repeated deployment for acquisition of high temporal resolution data at very high spatial resolution. Most image acquisitions from UAS have been in the visible bands, while multispectral remote sensing ap...
Multispectral fluorescence image algorithms for detection of frass on mature tomatoes
USDA-ARS?s Scientific Manuscript database
A multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet LED excitation was developed for the detection of frass contamination on mature tomatoes. The algorithm utilized the fluorescence intensities at five wavebands, 515 nm, 640 nm, 664 nm, 690 nm, and 724 nm...
Analysis of variograms with various sample sizes from a multispectral image
USDA-ARS?s Scientific Manuscript database
Variograms play a crucial role in remote sensing application and geostatistics. In this study, the analysis of variograms with various sample sizes of remotely sensed data was conducted. A 100 X 100 pixel subset was chosen from an aerial multispectral image which contained three wavebands, green, ...
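For context, the empirical semivariogram referred to above is gamma(h) = (1 / (2 N(h))) * sum of squared differences between pixel pairs separated by lag h. The sketch below computes it along image rows only, on a random stand-in for the 100 x 100 subset; it is an illustration, not the study's workflow.

```python
# Minimal empirical (semi)variogram along image rows:
#   gamma(h) = 1 / (2 N(h)) * sum (z(x) - z(x + h))^2
# Row-direction lags only, for illustration.
import numpy as np

def row_variogram(band, max_lag=20):
    gamma = np.empty(max_lag)
    for h in range(1, max_lag + 1):
        diffs = band[:, h:] - band[:, :-h]          # all row-wise pairs at lag h
        gamma[h - 1] = 0.5 * np.mean(diffs ** 2)
    return gamma

rng = np.random.default_rng(7)
subset = rng.random((100, 100))                      # stand-in for a 100 x 100 band
print(np.round(row_variogram(subset, max_lag=5), 4))
```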
NASA Astrophysics Data System (ADS)
Dong, Yang; He, Honghui; He, Chao; Ma, Hui
2017-02-01
Mueller matrix polarimetry is a powerful tool for detecting microscopic structures and can therefore be used to monitor physiological changes of tissue samples. Meanwhile, spectral features of scattered light can also provide abundant microstructural information about tissues. In this paper, we take 2D multispectral backscattering Mueller matrix images of bovine skeletal muscle tissues and analyze their temporal variation behavior using multispectral Mueller matrix parameters. The 2D images of the Mueller matrix elements are reduced to multispectral frequency distribution histograms (mFDHs) to reveal the dominant structural features of the muscle samples more clearly. For quantitative analysis, the multispectral Mueller matrix transformation (MMT) parameters are calculated to characterize the microstructural variations during the rigor mortis and proteolysis processes of the skeletal muscle tissue samples. The experimental results indicate that the multispectral MMT parameters can be used to distinguish different physiological stages of bovine skeletal muscle tissues over 24 hours, and that, combined with the multispectral technique, Mueller matrix polarimetry and FDH analysis can monitor the microstructural variation features of skeletal muscle samples. The techniques may be used for quick assessment and quantitative monitoring of meat quality in the food industry.
NASA Technical Reports Server (NTRS)
Friedman, J. D.; Frank, D. G.; Preble, D.; Painter, J. E.
1973-01-01
A combination of infrared images depicting areas of thermal emission and ground calibration points has proved to be particularly useful in plotting time-dependent changes in surface temperatures and radiance and in delimiting areas of predominantly convective heat flow to the earth's surface in the Cascade Range and on Surtsey Volcano, Iceland. In an integrated experiment group using ERTS-1 multispectral scanner (MSS) and aircraft infrared imaging systems in conjunction with multiple thermistor arrays, volcano surface temperatures are relayed daily to Washington via data communication platform (DCP) transmitters and ERTS-1. ERTS-1 MSS imagery has revealed curvilinear structures at Lassen, the full extent of which has not been previously mapped. Interestingly, the major surface thermal manifestations at Lassen are aligned along these structures, particularly in the Warner Valley.
NASA Astrophysics Data System (ADS)
Heleno, Sandra; Matias, Magda; Pina, Pedro; Sousa, António Jorge
2016-04-01
A method for semiautomated landslide detection and mapping, with the ability to separate source and run-out areas, is presented in this paper. It combines object-based image analysis and a support vector machine classifier and is tested using a GeoEye-1 multispectral image, sensed 3 days after a major damaging landslide event that occurred on Madeira Island (20 February 2010), and a pre-event lidar digital terrain model. The testing is developed in a 15 km2 wide study area, where 95% of the landslide scars are detected by this supervised approach. The classifier presents a good performance in the delineation of the overall landslide area, with commission errors below 26% and omission errors below 24%. In addition, fair results are achieved in the separation of the source from the run-out landslide areas, although on less illuminated slopes this discrimination is less effective than on sunnier, east-facing slopes.
Unmixing chromophores in human skin with a 3D multispectral optoacoustic mesoscopy system
NASA Astrophysics Data System (ADS)
Schwarz, Mathias; Aguirre, Juan; Soliman, Dominik; Buehler, Andreas; Ntziachristos, Vasilis
2016-03-01
The absorption of visible light by human skin is governed by a number of natural chromophores: Eumelanin, pheomelanin, oxyhemoglobin, and deoxyhemoglobin are the major absorbers in the visible range in cutaneous tissue. Label-free quantification of these tissue chromophores is an important step of optoacoustic (photoacoustic) imaging towards clinical application, since it provides relevant information in diseases. In tumor cells, for instance, there are metabolic changes (Warburg effect) compared to healthy cells, leading to changes in oxygenation in the environment of tumors. In malignant melanoma changes in the absorption spectrum have been observed compared to the spectrum of nonmalignant nevi. So far, optoacoustic imaging has been applied to human skin mostly in single-wavelength mode, providing anatomical information but no functional information. In this work, we excited the tissue by a tunable laser source in the spectral range from 413-680 nm with a repetition rate of 50 Hz. The laser was operated in wavelengthsweep mode emitting consecutive pulses at various wavelengths that allowed for automatic co-registration of the multispectral datasets. The multispectral raster-scan optoacoustic mesoscopy (MSOM) system provides a lateral resolution of <60 μm independent of wavelength. Based on the known absorption spectra of melanin, oxyhemoglobin, and deoxyhemoglobin, three-dimensional absorption maps of all three absorbers were calculated from the multispectral dataset.
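The unmixing step described above is commonly posed as a per-voxel linear inversion with known chromophore absorption spectra. The sketch below uses non-negative least squares on randomly generated spectra; the matrix E and the concentrations are placeholders, not measured melanin/oxyhemoglobin/deoxyhemoglobin spectra.

```python
# Hedged sketch of per-voxel linear spectral unmixing with known chromophore
# absorption spectra, solved by non-negative least squares. Spectra are synthetic.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)
n_wavelengths, n_chromophores = 12, 3
E = rng.uniform(0.1, 1.0, size=(n_wavelengths, n_chromophores))  # absorption matrix
true_c = np.array([0.5, 1.2, 0.3])                                # e.g. melanin, HbO2, Hb
signal = E @ true_c + 0.01 * rng.standard_normal(n_wavelengths)   # multispectral voxel

concentrations, residual = nnls(E, signal)                         # non-negative fit
print(np.round(concentrations, 3), round(float(residual), 4))
```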
NASA Technical Reports Server (NTRS)
Andersen, K. E.
1982-01-01
The format of high density tapes which contain partially processed LANDSAT 4 and LANDSAT D prime MSS image data is defined. This format is based on and is compatible with the existing format for partially processed LANDSAT 3 MSS image data HDTs.
Multi-Image Registration for an Enhanced Vision System
NASA Technical Reports Server (NTRS)
Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn
2002-01-01
An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
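The second registration method, geometric correction from user-selected control points and regression, can be sketched as a least-squares fit of a 2D affine transform to matching point pairs. The control points below are hypothetical and the affine model is an assumption; the paper does not state the exact transformation used.

```python
# Hedged sketch: fit a 2D affine transform to control-point pairs by least squares.
# The point lists are hypothetical examples.
import numpy as np

def fit_affine(src_pts, dst_pts):
    """src_pts, dst_pts: (n, 2) arrays of matching (x, y) control points.
    Returns a 2x3 matrix A such that dst ~= A @ [x, y, 1]."""
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    X = np.hstack([src, np.ones((len(src), 1))])        # (n, 3) design matrix
    coeffs, *_ = np.linalg.lstsq(X, dst, rcond=None)    # (3, 2) regression solution
    return coeffs.T                                     # (2, 3) affine matrix

src = [(10, 12), (200, 15), (25, 180), (210, 190)]
dst = [(13, 10), (205, 14), (27, 176), (214, 188)]
print(np.round(fit_affine(src, dst), 3))
```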
NASA Astrophysics Data System (ADS)
Colaninno, Nicola; Marambio Castillo, Alejandro; Roca Cladera, Josep
2017-10-01
The demand for remotely sensed data continues to grow, due to the possibility of managing information about huge geographic areas, in digital format, at different time periods, suitable for analysis in GIS platforms. However, primary satellite data are not as immediately usable as desirable. Besides geometric and atmospheric limitations, clouds, cloud shadows, and haze generally contaminate optical images. In terms of land cover, such contamination amounts to missing information and should be replaced. Generally, image reconstruction approaches are classified into three main categories, i.e. in-painting-based, multispectral-based, and multitemporal-based methods. This work relies on a multitemporal-based approach to retrieve uncontaminated pixels for an image scene. We explore an automatic method for quickly obtaining daytime cloudless and shadow-free images at moderate spatial resolution for large geographical areas. The process involves two main steps: a multitemporal effect adjustment to avoid significant seasonal variations, and a data reconstruction phase based on automatic selection of uncontaminated pixels from an image stack. The result is a composite image based on the middle values of the stack over a year. The assumption is that, for specific purposes, land cover changes at a coarse scale are not significant over relatively short time periods. Because it is widely recognized that satellite imagery over tropical areas is generally strongly affected by clouds, the methodology is tested on the case study of the Dominican Republic for the year 2015, using Landsat 8 imagery.
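A minimal sketch of the compositing step follows, interpreting "middle values of the stack" as a per-pixel median over the valid (non-contaminated) dates; the stack, the cloud mask, and the median interpretation are assumptions, and the multitemporal adjustment step is not reproduced.

```python
# Per-pixel "middle value" (median) composite over a one-year stack, ignoring
# pixels flagged as cloud/shadow. Stack and mask are synthetic placeholders.
import numpy as np

def midvalue_composite(stack, contaminated):
    """stack: (n_dates, rows, cols); contaminated: same-shape boolean mask."""
    clean = np.where(contaminated, np.nan, stack.astype(float))
    return np.nanmedian(clean, axis=0)       # middle value of the valid dates

rng = np.random.default_rng(11)
stack = rng.uniform(0.0, 0.3, size=(23, 64, 64))          # e.g. 23 scenes in a year
clouds = rng.random(stack.shape) > 0.6                     # heavy tropical cloud cover
composite = midvalue_composite(stack, clouds)
print(composite.shape, bool(np.isnan(composite).any()))
```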
The fabrication of a multi-spectral lens array and its application in assisting color blindness
NASA Astrophysics Data System (ADS)
Di, Si; Jin, Jian; Tang, Guanrong; Chen, Xianshuai; Du, Ruxu
2016-01-01
This article presents a compact multi-spectral lens array and describes its application in assisting people with color blindness. The lens array consists of nine microlenses, and each microlens is coated with a different color filter. Thus, it can capture different light bands, including red, orange, yellow, green, cyan, blue, violet, near-infrared, and the entire visible band. First, the fabrication process is described in detail. Second, an imaging system is set up and a color blindness testing card is selected as the sample. With this system, the images seen by people with normal vision and by people with color blindness can be captured simultaneously. Based on the imaging results, the device shows potential for helping people with color blindness recover normal color perception.
Image Classification Workflow Using Machine Learning Methods
NASA Astrophysics Data System (ADS)
Christoffersen, M. S.; Roser, M.; Valadez-Vergara, R.; Fernández-Vega, J. A.; Pierce, S. A.; Arora, R.
2016-12-01
Recent increases in the availability and quality of remote sensing datasets have fueled a growing number of scientifically significant discoveries based on land use classification and land use change analysis. However, much of the software made to work with remote sensing data products, specifically multispectral images, is commercial and often prohibitively expensive. The free-to-use solutions that are currently available come bundled as small parts of much larger programs that are very susceptible to bugs and difficult to install and configure. What is needed is a compact, easy-to-use set of tools to perform land use analysis on multispectral images. To address this need, we have developed software using the Python programming language with the sole function of land use classification and land use change analysis. We chose Python to develop our software because it is relatively readable, has a large body of relevant third party libraries such as GDAL and Spectral Python, and is free to install and use on Windows, Linux, and Macintosh operating systems. In order to test our classification software, we performed a K-means unsupervised classification, Gaussian Maximum Likelihood supervised classification, and a Mahalanobis Distance based supervised classification. The images used for testing were three Landsat rasters of Austin, Texas with a spatial resolution of 60 meters for the years of 1984 and 1999, and 30 meters for the year 2015. The testing dataset was easily downloaded using the Earth Explorer application produced by the USGS. The software should be able to perform classification based on any set of multispectral rasters with little to no modification. Our software makes the ease of land use classification using commercial software available without an expensive license.
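As a hedged illustration of the K-means step in such a workflow, the sketch below clusters a synthetic multispectral raster by reshaping (bands, rows, cols) into (pixels, bands); the array shapes and cluster count are placeholders, not the Austin Landsat scenes or the authors' code.

```python
# K-means unsupervised classification of a multispectral raster: reshape the cube
# to (pixels, bands), cluster, and reshape the labels back to an image.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
bands, rows, cols = 6, 120, 120
raster = rng.uniform(0, 1, size=(bands, rows, cols))         # stand-in raster

pixels = raster.reshape(bands, -1).T                          # (rows*cols, bands)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pixels)
classified = labels.reshape(rows, cols)                       # unsupervised class map
print(np.bincount(classified.ravel()))
```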
Nam, Hyeong Soo; Kang, Woo Jae; Lee, Min Woo; Song, Joon Woo; Kim, Jin Won; Oh, Wang-Yuhl; Yoo, Hongki
2018-01-01
The pathophysiological progression of chronic diseases, including atherosclerosis and cancer, is closely related to compositional changes in biological tissues containing endogenous fluorophores such as collagen, elastin, and NADH, which exhibit strong autofluorescence under ultraviolet excitation. Fluorescence lifetime imaging (FLIm) provides robust detection of the compositional changes by measuring fluorescence lifetime, which is an inherent property of a fluorophore. In this paper, we present a dual-modality system combining a multispectral analog-mean-delay (AMD) FLIm and a high-speed swept-source optical coherence tomography (OCT) to simultaneously visualize the cross-sectional morphology and biochemical compositional information of a biological tissue. Experiments using standard fluorescent solutions showed that the fluorescence lifetime could be measured with a precision of less than 40 psec using the multispectral AMD-FLIm without averaging. In addition, we performed ex vivo imaging on rabbit iliac normal-looking and atherosclerotic specimens to demonstrate the feasibility of the combined FLIm-OCT system for atherosclerosis imaging. We expect that the combined FLIm-OCT will be a promising next-generation imaging technique for diagnosing atherosclerosis and cancer due to the advantages of the proposed label-free high-precision multispectral lifetime measurement. PMID:29675330
USDA-ARS?s Scientific Manuscript database
The amount of visible and near infrared light reflected by plants varies depending on their health. In this study, multispectral images were acquired by quadcopter for detecting tomato spot wilt virus amongst twenty genetic varieties of peanuts. The plants were visually assessed to acquire ground ...
An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images.
Gumaei, Abdu; Sammouda, Rachid; Al-Salman, Abdul Malik; Alsanad, Ahmed
2018-05-15
Among several palmprint feature extraction methods, the HOG-based method is attractive and performs well against changes in illumination and shadowing of palmprint images. However, it still lacks the robustness to extract palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF, that combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images with David Zhang's method to segment only the regions of interest. Next, we extracted palmprint features based on the hybrid HOG-SGF feature extraction method. Then, an optimized auto-encoder (AE) was utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) was applied for the classification task. In the evaluation phase of the proposed approach, a number of experiments were conducted on three publicly available palmprint databases, namely MS-PolyU of multispectral palmprint images and CASIA and Tongji of contactless palmprint images. Experimentally, the results reveal that the proposed approach outperforms the existing state-of-the-art approaches even when a small number of training samples are used.
Wide field-of-view dual-band multispectral muzzle flash detection
NASA Astrophysics Data System (ADS)
Montoya, J.; Melchor, J.; Spiliotis, P.; Taplin, L.
2013-06-01
Sensor technologies are undergoing revolutionary advances, as seen in the rapid growth of multispectral methodologies. Increases in spatial, spectral, and temporal resolution, and in breadth of spectral coverage, render feasible sensors that function with unprecedented performance. A system was developed that addresses many of the key hardware requirements for a practical dual-band multispectral acquisition system, including wide field of view and spectral/temporal shift between dual bands. The system was designed using a novel dichroic beam splitter and dual band-pass filter configuration that creates two side-by-side images of a scene on a single sensor. A high-speed CMOS sensor was used to simultaneously capture data from the entire scene in both spectral bands using a short focal-length lens that provided a wide field of view. The beam-splitter components were arranged such that the two images were maintained in optical alignment and real-time intra-band processing could be carried out using only simple arithmetic on the image halves. An experiment was performed to characterize the system's limitations in addressing multispectral detection requirements; it showed low spectral variation across the wide field of view. This paper provides lessons learned on the general limitations of the key hardware components required for multispectral muzzle flash detection, using the system as a hardware example combined with simulated multispectral muzzle flash and background signatures.
A new hyperspectral image compression paradigm based on fusion
NASA Astrophysics Data System (ADS)
Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto
2016-10-01
The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed in the satellite which carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware, with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first one is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second step is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they must be fused using a fusion algorithm for hyperspectral and multispectral images, in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on-board, becomes very simple, while the fusion process used to reconstruct the image is the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image. The results obtained in these simulations corroborate the benefits of the proposed methodology.
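The two on-board degradation steps can be illustrated with simple block averaging: spatial averaging yields the low-resolution hyperspectral cube, and averaging groups of adjacent bands yields the high-resolution multispectral image. The factors and cube size below are illustrative assumptions, not the manuscript's settings.

```python
# Hedged sketch of the two degradation steps: spatial block-averaging and
# spectral band-group averaging of a hyperspectral cube (bands, rows, cols).
import numpy as np

def spatial_degrade(cube, factor=4):
    """Block-average rows/cols (assumes they are divisible by `factor`)."""
    b, r, c = cube.shape
    return cube.reshape(b, r // factor, factor, c // factor, factor).mean(axis=(2, 4))

def spectral_degrade(cube, group=10):
    """Average consecutive groups of `group` bands into one multispectral band."""
    b, r, c = cube.shape
    return cube[: (b // group) * group].reshape(b // group, group, r, c).mean(axis=1)

hsi = np.random.default_rng(4).random((120, 64, 64))          # synthetic cube
print(spatial_degrade(hsi).shape, spectral_degrade(hsi).shape)  # (120,16,16) (12,64,64)
```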
Multispectral imaging of the ocular fundus using light emitting diode illumination
NASA Astrophysics Data System (ADS)
Everdell, N. L.; Styles, I. B.; Calcagni, A.; Gibson, J.; Hebden, J.; Claridge, E.
2010-09-01
We present an imaging system based on light emitting diode (LED) illumination that produces multispectral optical images of the human ocular fundus. It uses a conventional fundus camera equipped with a high power LED light source and a highly sensitive electron-multiplying charge coupled device camera. It is able to take pictures at a series of wavelengths in rapid succession at short exposure times, thereby eliminating the image shift introduced by natural eye movements (saccades). In contrast with snapshot systems the images retain full spatial resolution. The system is not suitable for applications where the full spectral resolution is required as it uses discrete wavebands for illumination. This is not a problem in retinal imaging where the use of selected wavelengths is common. The modular nature of the light source allows new wavelengths to be introduced easily and at low cost. The use of wavelength-specific LEDs as a source is preferable to white light illumination and subsequent filtering of the remitted light as it minimizes the total light exposure of the subject. The system is controlled via a graphical user interface that enables flexible control of intensity, duration, and sequencing of sources in synchrony with the camera. Our initial experiments indicate that the system can acquire multispectral image sequences of the human retina at exposure times of 0.05 s in the range of 500-620 nm with mean signal to noise ratio of 17 dB (min 11, std 4.5), making it suitable for quantitative analysis with application to the diagnosis and screening of eye diseases such as diabetic retinopathy and age-related macular degeneration.
Multispectral imaging of the ocular fundus using light emitting diode illumination.
Everdell, N L; Styles, I B; Calcagni, A; Gibson, J; Hebden, J; Claridge, E
2010-09-01
We present an imaging system based on light emitting diode (LED) illumination that produces multispectral optical images of the human ocular fundus. It uses a conventional fundus camera equipped with a high power LED light source and a highly sensitive electron-multiplying charge coupled device camera. It is able to take pictures at a series of wavelengths in rapid succession at short exposure times, thereby eliminating the image shift introduced by natural eye movements (saccades). In contrast with snapshot systems the images retain full spatial resolution. The system is not suitable for applications where the full spectral resolution is required as it uses discrete wavebands for illumination. This is not a problem in retinal imaging where the use of selected wavelengths is common. The modular nature of the light source allows new wavelengths to be introduced easily and at low cost. The use of wavelength-specific LEDs as a source is preferable to white light illumination and subsequent filtering of the remitted light as it minimizes the total light exposure of the subject. The system is controlled via a graphical user interface that enables flexible control of intensity, duration, and sequencing of sources in synchrony with the camera. Our initial experiments indicate that the system can acquire multispectral image sequences of the human retina at exposure times of 0.05 s in the range of 500-620 nm with mean signal to noise ratio of 17 dB (min 11, std 4.5), making it suitable for quantitative analysis with application to the diagnosis and screening of eye diseases such as diabetic retinopathy and age-related macular degeneration.
Image-algebraic design of multispectral target recognition algorithms
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.
1994-06-01
In this paper, we discuss methods for multispectral ATR (Automated Target Recognition) of small targets that are sensed under suboptimal conditions, such as haze, smoke, and low light levels. In particular, we discuss our ongoing development of algorithms and software that effect intelligent object recognition by selecting ATR filter parameters according to ambient conditions. Our algorithms are expressed in terms of IA (image algebra), a concise, rigorous notation that unifies linear and nonlinear mathematics in the image processing domain. IA has been implemented on a variety of parallel computers, with preprocessors available for the Ada and FORTRAN languages. An image algebra C++ class library has recently been made available. Thus, our algorithms are both feasible implementationally and portable to numerous machines. Analyses emphasize the aspects of image algebra that aid the design of multispectral vision algorithms, such as parameterized templates that facilitate the flexible specification of ATR filters.
NASA Technical Reports Server (NTRS)
Barrett, Eamon B. (Editor); Pearson, James J. (Editor)
1989-01-01
Image understanding concepts and models, image understanding systems and applications, advanced digital processors and software tools, and advanced man-machine interfaces are among the topics discussed. Particular papers are presented on such topics as neural networks for computer vision, object-based segmentation and color recognition in multispectral images, the application of image algebra to image measurement and feature extraction, and the integration of modeling and graphics to create an infrared signal processing test bed.
The development of machine technology processing for earth resource survey
NASA Technical Reports Server (NTRS)
Landgrebe, D. A.
1970-01-01
The following technologies are considered for automatic processing of earth resources data: (1) registration of multispectral and multitemporal images, (2) digital image display systems, (3) data system parameter effects on satellite remote sensing systems, and (4) data compression techniques based on spectral redundancy. The importance of proper spectral band and compression algorithm selections is pointed out.
Multispectral image analysis for object recognition and classification
NASA Astrophysics Data System (ADS)
Viau, C. R.; Payeur, P.; Cretu, A.-M.
2016-05-01
Computer and machine vision applications are used in numerous fields to analyze static and dynamic imagery in order to assist or automate decision-making processes. Advancements in sensor technologies now make it possible to capture and visualize imagery at various wavelengths (or bands) of the electromagnetic spectrum. Multispectral imaging has countless applications in various fields including (but not limited to) security, defense, space, medical, manufacturing and archeology. The development of advanced algorithms to process and extract salient information from the imagery is a critical component of the overall system performance. The fundamental objective of this research project was to investigate the benefits of combining imagery from the visual and thermal bands of the electromagnetic spectrum to improve the recognition rates and accuracy of commonly found objects in an office setting. A multispectral dataset (visual and thermal) was captured and features from the visual and thermal images were extracted and used to train support vector machine (SVM) classifiers. The SVM's class prediction ability was evaluated separately on the visual, thermal and multispectral testing datasets.
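A hedged sketch of the comparison described above: train an SVM on visual-only, thermal-only, and concatenated feature vectors and compare accuracies. The synthetic features below merely stand in for whatever descriptors were extracted from each band; they are not the project's dataset.

```python
# Compare SVM accuracy on visual, thermal, and concatenated (multispectral) features.
# Features and class structure are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(9)
n, d = 600, 32
y = rng.integers(0, 4, size=n)                          # four hypothetical object classes
visual = rng.standard_normal((n, d)) + y[:, None] * 0.4
thermal = rng.standard_normal((n, d)) + y[:, None] * 0.3

for name, X in [("visual", visual), ("thermal", thermal),
                ("multispectral", np.hstack([visual, thermal]))]:
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    acc = SVC(kernel="rbf", gamma="scale").fit(Xtr, ytr).score(Xte, yte)
    print(f"{name}: {acc:.3f}")
```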
NASA Astrophysics Data System (ADS)
Pu, Huangsheng; Zhang, Guanglei; He, Wei; Liu, Fei; Guang, Huizhi; Zhang, Yue; Bai, Jing; Luo, Jianwen
2014-09-01
It is a challenging problem to resolve and identify drug (or non-specific fluorophore) distribution throughout the whole body of small animals in vivo. In this article, an algorithm of unmixing multispectral fluorescence tomography (MFT) images based on independent component analysis (ICA) is proposed to solve this problem. ICA is used to unmix the data matrix assembled by the reconstruction results from MFT. Then the independent components (ICs) that represent spatial structures and the corresponding spectrum courses (SCs) which are associated with spectral variations can be obtained. By combining the ICs with SCs, the recovered MFT images can be generated and fluorophore concentration can be calculated. Simulation studies, phantom experiments and animal experiments with different concentration contrasts and spectrum combinations are performed to test the performance of the proposed algorithm. Results demonstrate that the proposed algorithm can not only provide the spatial information of fluorophores, but also recover the actual reconstruction of MFT images.
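The ICA step can be sketched with scikit-learn's FastICA applied to a (wavelengths x voxels) data matrix: the recovered sources play the role of the spatial ICs and the mixing matrix columns play the role of the spectrum courses. The synthetic mixtures below are an assumption, not MFT reconstructions.

```python
# Unmix a (wavelengths x voxels) matrix into independent spatial components (ICs)
# and spectrum courses (SCs) with FastICA. Data are synthetic mixtures.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(8)
n_voxels, n_wavelengths, n_sources = 500, 6, 2
sources = np.abs(rng.standard_normal((n_sources, n_voxels)))      # spatial components
spectra = rng.uniform(0.2, 1.0, size=(n_wavelengths, n_sources))  # spectral courses
data = spectra @ sources + 0.01 * rng.standard_normal((n_wavelengths, n_voxels))

ica = FastICA(n_components=n_sources, random_state=0)
ics = ica.fit_transform(data.T).T          # rows: recovered spatial components (ICs)
scs = ica.mixing_                          # columns: recovered spectrum courses (SCs)
print(ics.shape, scs.shape)
```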
NASA Astrophysics Data System (ADS)
Kannadorai, Ravi Kumar; Udumala, Sunil Kumar; Sidney, Yu Wing Kwong
2016-12-01
A noninvasive and nonradioactive imaging modality to track and image apoptosis during chemotherapy of triple negative breast cancer is much needed for an effective treatment plan. Phosphatidylserine (PS) is a biomarker transiently exposed on the outer surface of the cells during apoptosis. Its externalization occurs within a few hours of an apoptotic stimulus by a chemotherapy drug and leads to presentation of millions of phospholipid molecules per apoptotic cell on the cell surface. This makes PS an abundant and accessible target for apoptosis imaging. In the current work, we show that a PS monoclonal antibody tagged with indocyanine green (ICG) can help to track and image apoptosis using multispectral optoacoustic tomography in vivo. When compared to the saline control, the doxorubicin-treated group showed a significant increase in uptake of the ICG-PS monoclonal antibody in triple negative breast tumors xenografted in NCr nude female mice. Day 5 post-treatment had the highest optoacoustic signal in the tumor region, indicating maximum apoptosis, and the tumor subsequently shrank. Since multispectral optoacoustic imaging does not involve the use of radioactivity, the long circulatory time of the PS antibody can be exploited to monitor apoptosis over a period of time without multiple injections of commonly used imaging probes such as Tc-99m Annexin V or F-18 ML10. The proposed apoptosis imaging technique involving multispectral optoacoustic tomography, a monoclonal antibody, and a near-infrared absorbing fluorescent marker can be an effective tool for imaging apoptosis and treatment planning.
Time-resolved multispectral imaging of combustion reactions
NASA Astrophysics Data System (ADS)
Huot, Alexandrine; Gagnon, Marc-André; Jahjah, Karl-Alexandre; Tremblay, Pierre; Savary, Simon; Farley, Vincent; Lagueux, Philippe; Guyot, Éric; Chamberland, Martin; Marcotte, Frédérick
2015-10-01
Thermal infrared imaging is a field of science that evolves rapidly. For years, scientists have used the simplest tool: thermal broadband cameras. These allow target characterization to be performed in both the longwave (LWIR) and midwave (MWIR) infrared spectral ranges. Infrared thermal imaging is used for a wide range of applications, especially in the combustion domain. For example, it can be used to follow combustion reactions, in order to characterize the injection and the ignition in a combustion chamber, or even to observe gases produced by a flare or smokestack. Most combustion gases, such as carbon dioxide (CO2), selectively absorb/emit infrared radiation at discrete energies, i.e., over a very narrow spectral range. Therefore, temperatures derived from broadband imaging are not reliable without prior knowledge of spectral emissivity. This information is not directly available from broadband images. However, spectral information can be obtained using spectral filters. In this work, combustion analysis was carried out using a Telops MS-IR MW camera, which allows multispectral imaging at a high frame rate. A motorized filter wheel allowing synchronized acquisitions on eight (8) different channels was used to provide time-resolved multispectral imaging of the combustion products of a candle in which black powder had been burnt to create a burst. It was then possible to estimate the temperature by modeling spectral profiles derived from information obtained with the different spectral filters. Comparison with temperatures obtained using conventional broadband imaging illustrates the benefits of time-resolved multispectral imaging for the characterization of combustion processes.
Time-resolved multispectral imaging of combustion reaction
NASA Astrophysics Data System (ADS)
Huot, Alexandrine; Gagnon, Marc-André; Jahjah, Karl-Alexandre; Tremblay, Pierre; Savary, Simon; Farley, Vincent; Lagueux, Philippe; Guyot, Éric; Chamberland, Martin; Marcotte, Fréderick
2015-05-01
Thermal infrared imaging is a field of science that evolves rapidly. For years, scientists have used the simplest tool: thermal broadband cameras. These allow target characterization to be performed in both the longwave (LWIR) and midwave (MWIR) infrared spectral ranges. Infrared thermal imaging is used for a wide range of applications, especially in the combustion domain. For example, it can be used to follow combustion reactions, in order to characterize the injection and the ignition in a combustion chamber, or even to observe gases produced by a flare or smokestack. Most combustion gases, such as carbon dioxide (CO2), selectively absorb/emit infrared radiation at discrete energies, i.e., over a very narrow spectral range. Therefore, temperatures derived from broadband imaging are not reliable without prior knowledge of spectral emissivity. This information is not directly available from broadband images. However, spectral information can be obtained using spectral filters. In this work, combustion analysis was carried out using a Telops MS-IR MW camera, which allows multispectral imaging at a high frame rate. A motorized filter wheel allowing synchronized acquisitions on eight (8) different channels was used to provide time-resolved multispectral imaging of the combustion products of a candle in which black powder had been burnt to create a burst. It was then possible to estimate the temperature by modeling spectral profiles derived from information obtained with the different spectral filters. Comparison with temperatures obtained using conventional broadband imaging illustrates the benefits of time-resolved multispectral imaging for the characterization of combustion processes.
NASA Astrophysics Data System (ADS)
Aiello, Martina; Gianinetto, Marco
2017-10-01
Marine routes carry a huge portion of commercial and human traffic; therefore surveillance, security, and environmental protection themes are gaining increasing importance. Because it can overcome the limits imposed by terrestrial means of monitoring, ship detection from satellite has recently attracted renewed interest for the continuous monitoring of illegal activities. This paper describes an automatic Object Based Image Analysis (OBIA) approach to detect vessels made of different materials in various sea environments. The combined use of multispectral and SAR images allows for regular observation unrestricted by lighting and atmospheric conditions, and for complementarity in terms of geographic coverage and geometric detail. The method adopts a region growing algorithm to segment the image into homogeneous objects, which are then classified through a decision tree algorithm based on spectral and geometrical properties. A spatial analysis then retrieves the vessels' position, length, and heading, and an associated speed range is estimated. The image processing chain is optimized by selecting image tiles through a statistical index. Vessel candidates are detected over amplitude SAR images using an adaptive-threshold Constant False Alarm Rate (CFAR) algorithm prior to the object-based analysis. Validation is carried out by comparing the retrieved parameters with the information provided by the Automatic Identification System (AIS), when available, or with manual measurements when AIS data are not available. The estimation of length shows R2=0.85 and the estimation of heading R2=0.92, computed as the average of the R2 values obtained for both optical and radar images.
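A heavily simplified cell-averaging CFAR pass over a SAR amplitude image might look like the sketch below; the window sizes and threshold factor are hypothetical and not the paper's settings.

```python
# Simplified cell-averaging CFAR: a pixel is flagged as a vessel candidate
# when its amplitude exceeds k times the local background estimated in the
# ring between a small guard window and a larger background window.
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar(amplitude, guard=5, background=21, k=3.0):
    bg_sum = uniform_filter(amplitude, background) * background**2
    guard_sum = uniform_filter(amplitude, guard) * guard**2
    local_mean = (bg_sum - guard_sum) / (background**2 - guard**2)
    return amplitude > k * local_mean

sar = np.abs(np.random.default_rng(2).normal(size=(512, 512)))  # stand-in amplitude image
candidates = ca_cfar(sar)
```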
Ground-Based Remote Sensing of Water-Stressed Crops: Thermal and Multispectral Imaging
USDA-ARS?s Scientific Manuscript database
Ground-based methods of remote sensing can be used as ground-truthing for satellite-based remote sensing, and in some cases may be a more affordable means of obtaining such data. Plant canopy temperature has been used to indicate and quantify plant water stress. A field research study was conducted ...
Ground-based thermal and multispectral imaging of limited irrigation crops
USDA-ARS?s Scientific Manuscript database
Ground-based methods of remote sensing can be used as ground-truth for satellite-based remote sensing, and in some cases may be a more affordable means of obtaining such data. Plant canopy temperature has been used to indicate and quantify plant water stress. A field research study was conducted in ...
NASA Astrophysics Data System (ADS)
Joshi, Bishnu P.; Miller, Sharon J.; Lee, Cameron; Gustad, Adam; Seibel, Eric J.; Wang, Thomas D.
2012-02-01
We demonstrate a multi-spectral scanning fiber endoscope (SFE) that collects fluorescence images in vivo from three target peptides that bind specifically to murine colonic adenomas. This ultrathin endoscope was demonstrated in a genetically engineered mouse model of spontaneous colorectal adenomas based on somatic Apc (adenomatous polyposis coli) gene inactivation. The SFE delivers excitation at 440, 532, and 635 nm with <2 mW per channel. The target 7-mer peptides were conjugated to visible organic dyes, including 7-diethylaminocoumarin-3-carboxylic acid (DEAC) (λex=432 nm, λem=472 nm), 5-carboxytetramethylrhodamine (5-TAMRA) (λex=535 nm, λem=568 nm), and CF-633 (λex=633 nm, λem=650 nm). The target peptides were first validated using pfu counting, flow cytometry, and previously established methods of fluorescence endoscopy. Peptides were applied individually or in combination and detected with fluorescence imaging. Imaging multiple fluorescence channels concurrently was successful for all three channels in vitro, while two channels were resolved simultaneously in vivo. Selective binding of the peptides to adenomas, but not to adjacent normal-appearing mucosa, was evident. Multispectral wide-field fluorescence detection using the SFE is achievable, and this technology has the potential to advance early cancer detection and image-guided therapy in human patients by simultaneously visualizing multiple overexpressed molecular targets unique to dysplasia.
Vrešak, Martina; Halkjaer Olesen, Merete; Gislum, René; Bavec, Franc; Ravn Jørgensen, Johannes
2016-01-01
Application of rapid and time-efficient health diagnostic and identification technology in the seed industry chain could accelerate required analyses, characteristic descriptions, and ultimately the availability of new desired varieties. The aim of the study was to evaluate the potential of multispectral imaging and single kernel near-infrared spectroscopy (SKNIR) for determining seed health and separating varieties of winter wheat (Triticum aestivum L.) and winter triticale (Triticosecale Wittm. & Camus). The analysis, carried out in autumn 2013 at AU-Flakkebjerg, Denmark, included nine winter triticale varieties and 27 wheat varieties provided by the Faculty of Agriculture and Life Sciences Maribor, Slovenia. Fusarium sp. and black point disease-infected parts of the seed surface could successfully be distinguished from uninfected parts with a multispectral imaging device (405–970 nm wavelengths). SKNIR was applied to differentiate all 36 varieties based on spectral differences arising from variation in chemical composition. In addition to distinguishing infected from uninfected parts of the seed surface, the study was able to distinguish between varieties. Together, these components could be used in further studies to develop a sorting model that combines data from multispectral imaging and SKNIR for identifying disease(s) and varieties. PMID:27010656
NASA Astrophysics Data System (ADS)
Xia, Wenfeng; Nikitichev, Daniil I.; Mari, Jean Martial; West, Simeon J.; Ourselin, Sebastien; Beard, Paul C.; Desjardins, Adrien E.
2015-07-01
Precise and efficient guidance of medical devices is of paramount importance for many minimally invasive procedures. These procedures include fetal interventions, tumor biopsies and treatments, central venous catheterisations, and peripheral nerve blocks. Ultrasound imaging is commonly used for guidance, but it often provides insufficient contrast with which to identify soft tissue structures such as vessels, tumors, and nerves. In this study, a hybrid interventional imaging system that combines ultrasound imaging and multispectral photoacoustic imaging for guiding minimally invasive procedures was developed and characterized. The system provides both structural information from ultrasound imaging and molecular information from multispectral photoacoustic imaging. It uses a commercial linear-array ultrasound imaging probe as the ultrasound receiver, with a multimode optical fiber embedded in a needle to deliver pulsed excitation light to tissue. Co-registration of ultrasound and photoacoustic images is achieved by using the same ultrasound receiver for both modalities. In ex vivo tissue, the system successfully discriminated deep-lying fat tissue from the surrounding muscle tissue. The measured photoacoustic spectrum of the fat tissue was in good agreement with the lipid spectrum reported in the literature.
NASA Technical Reports Server (NTRS)
Shimabukuro, Yosio Edemir; Smith, James A.
1991-01-01
Constrained-least-squares and weighted-least-squares mixing models for generating fraction images from remote sensing multispectral data are presented. An experiment considering three components within the pixels (eucalyptus, soil/understory, and shade) was performed. The shade fraction images generated by the two methods were compared in terms of performance and computation time. The derived shade images are related to the observed variation in forest structure, i.e., the fraction of inferred shade in a pixel is related to different eucalyptus ages.
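One common way to realize the constrained-least-squares idea (non-negative fractions that sum to one) is sketched below with hypothetical endmember spectra; this is an illustration, not the authors' code.

```python
# Constrained least-squares unmixing of one pixel into fraction abundances
# (non-negative, approximately sum-to-one), given endmember spectra as columns of E.
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(pixel, E, weight=100.0):
    # Append a heavily weighted row that enforces sum(fractions) ~= 1.
    A = np.vstack([E, weight * np.ones(E.shape[1])])
    b = np.append(pixel, weight)
    fractions, _ = nnls(A, b)
    return fractions

E = np.array([[0.30, 0.10, 0.02],   # band 1: eucalyptus, soil, shade (hypothetical)
              [0.45, 0.20, 0.03],
              [0.25, 0.35, 0.04],
              [0.60, 0.40, 0.05]])
pixel = 0.5 * E[:, 0] + 0.3 * E[:, 1] + 0.2 * E[:, 2]
print(unmix_pixel(pixel, E))        # ~[0.5, 0.3, 0.2]
```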
NASA Technical Reports Server (NTRS)
Edgett, Kenneth S.; Anderson, Donald L.
1995-01-01
This paper describes an empirical method to correct TIMS (Thermal Infrared Multispectral Scanner) data for atmospheric effects by transferring calibration from a laboratory thermal emission spectrometer to the TIMS multispectral image. The method does so by comparing the laboratory spectra of samples gathered in the field with TIMS 6-point spectra for pixels at the location of field sampling sites. The transference of calibration also makes it possible to use spectra from the laboratory as endmembers in unmixing studies of TIMS data.
Imaging of Melanin Disruption in Age-Related Macular Degeneration Using Multispectral Imaging.
Dugel, Pravin U; Zimmer, Cheryl N
2016-02-01
To investigate whether multispectral imaging (MSI) is able to obtain a noninvasive view of melanin disruption associated with age-related macular degeneration (AMD), which could support early diagnosis and potential treatment strategies. A retrospective, observational image analysis study of MSI images of 43 patients at a single retinal center was conducted to determine the extent of melanin pigment exhibited in association with AMD, based on the Age-Related Eye Disease Study classification and grading scale. Corresponding fundus photos were also graded for 12 of the eyes. Fifty-one of 61 eyes (84%) of the 43 patients with AMD were determined to have melanin disruption in their MSI images in at least the central and/or one of the four inner ETDRS areas. There was a relationship between the severity of disease and the degree of melanin disruption. The sensitivity of fundus photography for melanin pigment as compared to MSI was only 62.5%, with three false negatives. A direct, noninvasive, unobstructed view of melanin disruption associated with AMD can be observed using MSI. Copyright 2016, SLACK Incorporated.
Bandwidth compression of multispectral satellite imagery
NASA Technical Reports Server (NTRS)
Habibi, A.
1978-01-01
The results of two studies aimed at developing efficient adaptive and nonadaptive techniques for compressing the bandwidth of multispectral images are summarized. These techniques are evaluated and compared using various optimality criteria including MSE, SNR, and recognition accuracy of the bandwidth compressed images. As an example of future requirements, the bandwidth requirements for the proposed Landsat-D Thematic Mapper are considered.
NASA Astrophysics Data System (ADS)
Dementev, A. O.; Dmitriev, E. V.; Kozoderov, V. V.; Egorov, V. D.
2017-10-01
Hyperspectral imaging is an up-to-date, promising technology widely applied for accurate thematic mapping. The presence of a large number of narrow survey channels allows us to use subtle differences in the spectral characteristics of objects and to make a more detailed classification than is possible with standard multispectral data. The difficulties encountered in processing hyperspectral images are usually associated with the redundancy of spectral information, which leads to the curse of dimensionality. Methods currently used for recognizing objects in multispectral and hyperspectral images are usually based on standard supervised classification algorithms of varying complexity. The accuracy of these algorithms can differ significantly depending on the classification task considered. In this paper we study the performance of ensemble classification methods for the problem of classifying forest vegetation. Error-correcting output codes and boosting are tested on artificial data and real hyperspectral images. It is demonstrated that boosting gives a more significant improvement when used with simple base classifiers. The accuracy in this case is comparable to that of the error-correcting output code (ECOC) classifier with a Gaussian-kernel SVM base algorithm, so the necessity of boosting ECOC with a Gaussian-kernel SVM is questionable. It is demonstrated that the selected ensemble classifiers allow us to recognize forest species with high enough accuracy to be compared with ground-based forest inventory data.
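A minimal illustration of the two ensemble strategies mentioned (ECOC with a Gaussian-kernel SVM base learner, and boosting over simple base classifiers) using scikit-learn on placeholder data:

```python
# Sketch: error-correcting output codes (ECOC) with a Gaussian-kernel SVM base
# classifier, and boosting with shallow trees, evaluated on placeholder data.
from sklearn.datasets import make_classification
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=50, n_informative=20,
                           n_classes=5, random_state=0)

ecoc_svm = OutputCodeClassifier(SVC(kernel="rbf", gamma="scale"),
                                code_size=2.0, random_state=0)
boosted_trees = AdaBoostClassifier(DecisionTreeClassifier(max_depth=2),
                                   n_estimators=200, random_state=0)

print("ECOC + SVM :", cross_val_score(ecoc_svm, X, y, cv=5).mean())
print("Boosting   :", cross_val_score(boosted_trees, X, y, cv=5).mean())
```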
Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.; ...
2014-12-09
We present results from an ongoing effort to extend neuromimetic machine vision algorithms to multispectral data using adaptive signal processing combined with compressive sensing and machine learning techniques. Our goal is to develop a robust classification methodology that will allow for automated discretization of the landscape into distinct units based on attributes such as vegetation, surface hydrological properties, and topographic/geomorphic characteristics. We use a Hebbian learning rule to build spectral-textural dictionaries that are tailored for classification. We learn our dictionaries from millions of overlapping multispectral image patches and then use a pursuit search to generate classification features. Land cover labels are automatically generated using unsupervised clustering of sparse approximations (CoSA). We demonstrate our method on multispectral WorldView-2 data from a coastal plain ecosystem in Barrow, Alaska. We explore learning from both raw multispectral imagery and normalized band difference indices. We explore a quantitative metric to evaluate the spectral properties of the clusters in order to potentially aid in assigning land cover categories to the cluster labels. In this study, our results suggest CoSA is a promising approach to unsupervised land cover classification in high-resolution satellite imagery.
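A rough analogue of this pipeline can be sketched with off-the-shelf components, substituting scikit-learn dictionary learning for the authors' Hebbian rule and k-means for their clustering step; the patch sizes and band counts below are placeholders.

```python
# Rough analogue of the pipeline: learn a sparse dictionary over multispectral
# image patches, encode each patch as a sparse approximation, then cluster the
# codes to obtain unsupervised land-cover labels.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
patches = rng.random((10000, 8 * 8 * 4))   # flattened 8x8 patches, 4 bands (hypothetical)

dico = MiniBatchDictionaryLearning(n_components=128, transform_algorithm="omp",
                                   transform_n_nonzero_coefs=5, random_state=0)
codes = dico.fit(patches).transform(patches)                       # sparse approximations
labels = KMeans(n_clusters=8, random_state=0).fit_predict(codes)   # land-cover clusters
```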
NASA Astrophysics Data System (ADS)
Kim, H. O.; Yeom, J. M.
2014-12-01
Space-based remote sensing in agriculture is particularly relevant to issues such as global climate change, food security, and precision agriculture. Recent satellite missions have opened up new perspectives by offering high spatial resolution, various spectral properties, and fast revisit rates to the same regions. Here, we examine the utility of broadband red-edge spectral information in multispectral satellite image data for classifying paddy rice crops in South Korea. Additionally, we examine how object-based spectral features affect the classification of paddy rice growth stages. For the analysis, two seasons of RapidEye satellite image data were used. The results showed that the broadband red-edge information slightly improved the classification accuracy of the crop condition in heterogeneous paddy rice crop environments, particularly when single-season image data were used. This positive effect appeared to be offset by the multi-temporal image data. Additional texture information brought only a minor improvement or a slight decline, although it is well known to be advantageous for object-based classification in general. We conclude that broadband red-edge information derived from conventional multispectral satellite data has the potential to improve space-based crop monitoring. Because the positive or negative effects of texture features for object-based crop classification could barely be interpreted, the relationships between the textural properties and paddy rice crop parameters at the field scale should be further examined in depth.
NASA Astrophysics Data System (ADS)
Birk, Udo; Szczurek, Aleksander; Cremer, Christoph
2017-12-01
Current approaches to overcoming the conventional limit on the resolution of light microscopy (about 200 nm for visible light) often suffer from non-linear effects, which make quantification of the image intensities in the reconstructions difficult and also affect the quantification of the biological structure under investigation. As an attempt to face these difficulties, we discuss a particular method of localization microscopy based on photostable fluorescent dyes. The proposed method can potentially be implemented as a fast alternative for quantitative localization microscopy, circumventing the need for the acquisition of thousands of image frames and complex, highly dye-specific imaging buffers. Although the need for calibration remains in order to extract quantitative data (such as the number of emitters), multispectral approaches are largely facilitated by the much less stringent requirements on imaging buffers. Furthermore, multispectral acquisitions can be readily obtained using commercial instrumentation such as a conventional confocal laser scanning microscope.
Cytology 3D structure formation based on optical microscopy images
NASA Astrophysics Data System (ADS)
Pronichev, A. N.; Polyakov, E. V.; Shabalova, I. P.; Djangirova, T. V.; Zaitsev, S. M.
2017-01-01
The article is devoted to optimizing the imaging parameters of biological preparations in optical microscopy using a multispectral camera in the visible range of electromagnetic radiation. A model for forming images of virtual preparations is proposed. Based on the experimental results, the optimum number of layers was determined for scanning the object in depth and for holistic perception of the result.
Computerized simulation of color appearance for anomalous trichromats using the multispectral image.
Yaguchi, Hirohisa; Luo, Junyan; Kato, Miharu; Mizokami, Yoko
2018-04-01
Most color simulators for color deficiencies are based on tristimulus values and are intended to simulate the appearance of an image for dichromats. Statistics show that there are more anomalous trichromats than dichromats. Furthermore, the spectral sensitivities of anomalous cones are different from those of normal cones. Clinically, the types of color defects are characterized through Rayleigh color matching, where the observer matches a spectral yellow to a mixture of spectral red and green. The midpoints of the red/green ratios deviate from those of a normal trichromat. This means that any simulation based on the tristimulus values defined for a normal trichromat cannot predict the color appearance of anomalous Rayleigh matches. We propose a computerized simulation of color appearance for anomalous trichromats using multispectral images. First, we assume that anomalous trichromats possess a protanomalous (green-shifted) or deuteranomalous (red-shifted) pigment instead of a normal (L or M) one. Second, we assume that the luminance is given by L+M, and that the red/green and yellow/blue opponent color stimulus values are defined through L-M and (L+M)-S, respectively. Third, equal-energy white will look white to all observers. The spectral sensitivities of the luminance and the two opponent color channels are multiplied by the spectral radiance of each pixel of a multispectral image to give the luminance and opponent color stimulus values of the entire image. In the next stage, color reproduction for normal observers, the luminance and two opponent color channels are transformed into XYZ tristimulus values and then into sRGB to reproduce a final simulated image for anomalous trichromats. The proposed simulation can be used to predict the Rayleigh color matches of anomalous trichromats. We also conducted experiments to evaluate the appearance of the simulated images by color-deficient observers and verified the reliability of the simulation.
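The per-pixel spectral computation described above can be sketched as follows; the spectral sensitivities and image data are random placeholders standing in for measured cone (or anomalous-pigment) sensitivities and a real multispectral image.

```python
# Sketch of the per-pixel spectral computation: multiply each pixel's spectral
# radiance by the cone spectral sensitivities (placeholders here, with an
# anomalous pigment substituted for L or M as needed), then form the luminance
# and opponent channels L+M, L-M, and (L+M)-S.
import numpy as np

n_bands, h, w = 31, 4, 4                       # e.g. 400-700 nm in 10 nm steps
rng = np.random.default_rng(4)
radiance = rng.random((h, w, n_bands))         # multispectral image (placeholder)
S_L, S_M, S_S = rng.random((3, n_bands))       # cone sensitivities (placeholders)

L = radiance @ S_L                             # per-pixel cone excitations
M = radiance @ S_M
S = radiance @ S_S

luminance = L + M
red_green = L - M
yellow_blue = (L + M) - S
```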
Deblurring sequential ocular images from multi-spectral imaging (MSI) via mutual information.
Lian, Jian; Zheng, Yuanjie; Jiao, Wanzhen; Yan, Fang; Zhao, Bojun
2018-06-01
Multi-spectral imaging (MSI) produces a sequence of spectral images to capture the inner structure of different species, and was recently introduced into ocular disease diagnosis. However, the quality of MSI images can be significantly degraded by motion blur caused by inevitable saccades and the exposure time required for maintaining a sufficiently high signal-to-noise ratio. This degradation may confuse an ophthalmologist, reduce the examination quality, or defeat various image analysis algorithms. We present an early work specifically addressing the deblurring of sequential MSI images, which is distinguished from many current image deblurring techniques by resolving the blur kernel simultaneously for all the images in an MSI sequence. This is accomplished by incorporating several a priori constraints, including the sharpness of the latent clear image, the spatial and temporal smoothness of the blur kernel, and the similarity between temporally neighboring images in the MSI sequence. Specifically, we model the similarity between MSI images with mutual information, considering the different wavelengths used for capturing different images in the MSI sequence. The optimization of the proposed approach is based on a multi-scale framework and a stepwise optimization strategy. Experimental results from 22 MSI sequences validate that our approach outperforms several state-of-the-art techniques in natural image deblurring.
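The histogram-based mutual information used here as a similarity measure between temporally neighboring frames can be sketched as follows (an illustration, not the authors' implementation):

```python
# Histogram-based mutual information between two grayscale images, the kind of
# similarity term used to couple temporally neighboring MSI frames.
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

rng = np.random.default_rng(5)
a = rng.random((128, 128))
print(mutual_information(a, a))                       # maximal for identical images
print(mutual_information(a, rng.random((128, 128))))  # near zero for unrelated noise
```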
Methods and decision making on a Mars rover for identification of fossils
NASA Technical Reports Server (NTRS)
Eberlein, Susan; Yates, Gigi
1989-01-01
A system for automated fusion and interpretation of image data from multiple sensors, including multispectral data from an imaging spectrometer, is being developed. Classical artificial intelligence techniques and artificial neural networks are employed to make real-time decisions based on current input and known scientific goals. Emphasis is placed on identifying minerals which could indicate past life activity or an environment supportive of life. Multispectral data can be used for geological analysis because different minerals have characteristic spectral reflectances in the visible and near-infrared range. Classification of each spectrum into a broad class, based on overall spectral shape and the locations of absorption bands, is possible in real time using artificial neural networks. The goal of the system is twofold: multisensor and multispectral data must be interpreted in real time so that potentially interesting sites can be flagged and investigated in more detail while the rover is near those sites; and the sensed data must be reduced to the most compact form possible without loss of crucial information. Autonomous decision making will allow a rover to achieve maximum scientific benefit from a mission. Both a classical rule-based approach and a decision neural network for making real-time choices are being considered. Neural nets may work well for adaptive decision making. A neural net can be trained to work in two steps. First, the actual input state is mapped to the closest of a number of memorized states. Then, after weighing the importance of various input parameters, the net produces an output decision based on the matched memory state. Real-time, autonomous image data analysis and decision-making capabilities are required for achieving maximum scientific benefit from a rover mission. The system under development will enhance the chances of identifying fossils or environments capable of supporting life on Mars.
Remote sensing based water-use efficiency evaluation in sub-surface irrigated wine grape vines
NASA Astrophysics Data System (ADS)
Zúñiga, Carlos Espinoza; Khot, Lav R.; Jacoby, Pete; Sankaran, Sindhuja
2016-05-01
Increased water demands have forced the agriculture industry to investigate better irrigation management strategies in crop production. Efficient irrigation systems, improved irrigation scheduling, and selection of crop varieties with better water-use efficiencies can aid in conserving water. In an ongoing experiment carried out in the Red Mountain American Viticultural Area near Benton City, Washington, subsurface drip irrigation treatments at 30, 60, and 90 cm depth, and at 15, 30, and 60% of evapotranspiration demand, were applied using pulse and continuous irrigation. These treatments were compared to continuous surface irrigation applied at 100% of evapotranspiration demand. Thermal infrared and multispectral images were acquired using an unmanned aerial vehicle during the growing season. The results indicated no difference in yield among treatments (p<0.05); however, there was a statistically significant difference in leaf temperature between surface and subsurface irrigation (p<0.05). The normalized vegetation index obtained from the analysis of the multispectral images showed statistically significant differences among treatments when surface and subsurface irrigation methods were compared. Similar differences in vegetation index values were observed when irrigation rates were compared. These results show the applicability of aerial thermal infrared and multispectral images for characterizing plant responses to different irrigation treatments, and the use of such information in irrigation scheduling or in high-throughput selection of water-use efficient crop varieties in plant breeding.
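The abstract does not state its exact vegetation index formula; a standard NDVI computation from red and near-infrared bands, which is the usual choice for multispectral imagery, is sketched below.

```python
# Standard NDVI from red and near-infrared reflectance bands; the bands below
# are placeholder arrays standing in for co-registered image data.
import numpy as np

def ndvi(nir, red, eps=1e-9):
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

rng = np.random.default_rng(6)
nir_band = rng.random((100, 100))
red_band = rng.random((100, 100))
index_map = ndvi(nir_band, red_band)
```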
Oximetry using multispectral imaging: theory and application
NASA Astrophysics Data System (ADS)
MacKenzie, Lewis E.; Harvey, Andrew R.
2018-06-01
Multispectral imaging (MSI) is a technique for measurement of blood oxygen saturation in vivo that can be applied using various imaging modalities to provide new insights into physiology and disease development. This tutorial aims to provide a thorough introduction to the theory and application of MSI oximetry for researchers new to the field, whilst also providing detailed information for more experienced researchers. The optical theory underlying two-wavelength oximetry, three-wavelength oximetry, pulse oximetry, and multispectral oximetry algorithms are described in detail. The varied challenges of applying MSI oximetry to in vivo applications are outlined and discussed, covering: the optical properties of blood and tissue, optical paths in blood vessels, tissue auto-fluorescence, oxygen diffusion, and common oximetry artefacts. Essential image processing techniques for MSI are discussed, in particular, image acquisition, image registration strategies, and blood vessel line profile fitting. Calibration and validation strategies for MSI are discussed, including comparison techniques, physiological interventions, and phantoms. The optical principles and unique imaging capabilities of various cutting-edge MSI oximetry techniques are discussed, including photoacoustic imaging, spectroscopic optical coherence tomography, and snapshot MSI.
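As a sketch of the two-wavelength principle described in the tutorial, the ratio of optical densities at an oxygen-sensitive and an isosbestic wavelength removes the unknown path length and concentration, leaving a closed-form estimate of SO2; the extinction coefficients below are illustrative placeholders, not tabulated values.

```python
# Two-wavelength oximetry sketch: the OD ratio at an oxygen-sensitive and an
# isosbestic wavelength cancels concentration and path length, so oxygen
# saturation S can be solved for directly. Extinction values are placeholders.
import numpy as np

eps_hbo2 = {"sensitive": 0.30, "isosbestic": 0.80}   # HbO2 extinction (placeholder)
eps_hb   = {"sensitive": 1.10, "isosbestic": 0.80}   # Hb extinction (placeholder)

def so2_from_od_ratio(od_sensitive, od_isosbestic):
    r = od_sensitive / od_isosbestic
    # r = (eps_hbo2_s*S + eps_hb_s*(1-S)) / (eps_hbo2_i*S + eps_hb_i*(1-S))
    num = eps_hb["sensitive"] - r * eps_hb["isosbestic"]
    den = (eps_hb["sensitive"] - eps_hbo2["sensitive"]
           + r * (eps_hbo2["isosbestic"] - eps_hb["isosbestic"]))
    return num / den

# Forward-simulate an OD pair for S = 0.7, then recover it.
S = 0.7
od_s = eps_hbo2["sensitive"] * S + eps_hb["sensitive"] * (1 - S)
od_i = eps_hbo2["isosbestic"] * S + eps_hb["isosbestic"] * (1 - S)
print(so2_from_od_ratio(od_s, od_i))   # ~0.7
```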
Multispectral open-air intraoperative fluorescence imaging.
Behrooz, Ali; Waterman, Peter; Vasquez, Kristine O; Meganck, Jeff; Peterson, Jeffrey D; Faqir, Ilias; Kempner, Joshua
2017-08-01
Intraoperative fluorescence imaging informs decisions regarding surgical margins by detecting and localizing signals from fluorescent reporters, labeling targets such as malignant tissues. This guidance reduces the likelihood of undetected malignant tissue remaining after resection, eliminating the need for additional treatment or surgery. The primary challenges in performing open-air intraoperative fluorescence imaging come from the weak intensity of the fluorescence signal in the presence of strong surgical and ambient illumination, and the auto-fluorescence of non-target components, such as tissue, especially in the visible spectral window (400-650 nm). In this work, a multispectral open-air fluorescence imaging system is presented for translational image-guided intraoperative applications, which overcomes these challenges. The system is capable of imaging weak fluorescence signals with nanomolar sensitivity in the presence of surgical illumination. This is done using synchronized fluorescence excitation and image acquisition with real-time background subtraction. Additionally, the system uses a liquid crystal tunable filter for acquisition of multispectral images that are used to spectrally unmix target fluorescence from non-target auto-fluorescence. Results are validated by preclinical studies on murine models and translational canine oncology models.
Hyperspectral and multispectral bioluminescence optical tomography for small animal imaging.
Chaudhari, Abhijit J; Darvas, Felix; Bading, James R; Moats, Rex A; Conti, Peter S; Smith, Desmond J; Cherry, Simon R; Leahy, Richard M
2005-12-07
For bioluminescence imaging studies in small animals, it is important to be able to accurately localize the three-dimensional (3D) distribution of the underlying bioluminescent source. The spectrum of light produced by the source that escapes the subject varies with the depth of the emission source because of the wavelength-dependence of the optical properties of tissue. Consequently, multispectral or hyperspectral data acquisition should help in the 3D localization of deep sources. In this paper, we describe a framework for fully 3D bioluminescence tomographic image acquisition and reconstruction that exploits spectral information. We describe regularized tomographic reconstruction techniques that use semi-infinite slab or FEM-based diffusion approximations of photon transport through turbid media. Singular value decomposition analysis was used for data dimensionality reduction and to illustrate the advantage of using hyperspectral rather than achromatic data. Simulation studies in an atlas-mouse geometry indicated that sub-millimeter resolution may be attainable given accurate knowledge of the optical properties of the animal. A fixed arrangement of mirrors and a single CCD camera were used for simultaneous acquisition of multispectral imaging data over most of the surface of the animal. Phantom studies conducted using this system demonstrated our ability to accurately localize deep point-like sources and show that a resolution of 1.5 to 2.2 mm for depths up to 6 mm can be achieved. We also include an in vivo study of a mouse with a brain tumour expressing firefly luciferase. Co-registration of the reconstructed 3D bioluminescent image with magnetic resonance images indicated good anatomical localization of the tumour.
Software For Tie-Point Registration Of SAR Data
NASA Technical Reports Server (NTRS)
Rignot, Eric; Dubois, Pascale; Okonek, Sharon; Van Zyl, Jacob; Burnette, Fred; Borgeaud, Maurice
1995-01-01
The SAR-REG software package registers synthetic-aperture-radar (SAR) image data to a common reference frame based on manual tie-pointing. Image data can be in binary, integer, floating-point, or AIRSAR compressed format. Other data sets, such as a map of soil characteristics, a vegetation map, a digital elevation map, or a SPOT multispectral image, can also be registered as long as the user can generate a binary image for the tie-pointing routine and the data are available in one of the previously mentioned formats. Written in FORTRAN 77.
Application and evaluation of ISVR method in QuickBird image fusion
NASA Astrophysics Data System (ADS)
Cheng, Bo; Song, Xiaolu
2014-05-01
QuickBird satellite images are widely used in many fields, and applications have put forward high requirements for the integration of the spatial and spectral information of the imagery. A fusion method for high resolution remote sensing images based on ISVR is presented in this study. The core principle of ISVR is to exploit radiometric calibration to remove the effects of the differing gains and errors of the satellite sensors. After transformation from DN to radiance, the energy of the multispectral image is used to simulate the panchromatic band. A linear regression analysis is carried out in the simulation process to find a new synthetic panchromatic image that is highly linearly correlated with the original panchromatic image. In order to evaluate, test, and compare the algorithm results, this paper used ISVR and two other fusion methods in a comparative study of spatial and spectral information, taking the average gradient and the correlation coefficient as indicators. Experiments showed that this method can significantly improve the quality of the fused image, especially in preserving spectral information, maximizing the spectral information of the original multispectral images while maintaining abundant spatial information.
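The regression step (fitting a synthetic panchromatic band from the multispectral radiance bands) can be sketched as an ordinary least-squares fit; the data and band weights below are placeholders, not QuickBird values.

```python
# Sketch of the regression step: fit weights so a linear combination of the
# multispectral radiance bands best reproduces the original panchromatic band,
# yielding a synthetic pan image highly correlated with the real one.
import numpy as np

rng = np.random.default_rng(7)
h, w, n_bands = 64, 64, 4
ms_radiance = rng.random((h, w, n_bands))                   # placeholder MS radiance
pan = ms_radiance @ np.array([0.2, 0.3, 0.3, 0.2]) + 0.01 * rng.random((h, w))

A = np.column_stack([ms_radiance.reshape(-1, n_bands), np.ones(h * w)])
coef, *_ = np.linalg.lstsq(A, pan.ravel(), rcond=None)
pan_synthetic = (A @ coef).reshape(h, w)
print(np.corrcoef(pan_synthetic.ravel(), pan.ravel())[0, 1])  # ~1.0
```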
Martínez-Domingo, Miguel Ángel; Valero, Eva M; Hernández-Andrés, Javier; Tominaga, Shoji; Horiuchi, Takahiko; Hirai, Keita
2017-11-27
We propose a method for the capture of high dynamic range (HDR), multispectral (MS), polarimetric (Pol) images of indoor scenes using a liquid crystal tunable filter (LCTF). We have included the adaptive exposure estimation (AEE) method to fully automate the capturing process. We also propose a pre-processing method which can be applied for the registration of HDR images after they are already built as the result of combining different low dynamic range (LDR) images. This method is applied to ensure correct alignment of the different polarization HDR images for each spectral band. We have focused our efforts on two main applications: object segmentation and classification into metal and dielectric classes. We have simplified the segmentation using mean shift combined with cluster averaging and region merging techniques. We compare the performance of our segmentation with that of the Ncut and Watershed methods. For the classification task, we propose to use information not only in the highlight regions but also in their surrounding area, extracted from the degree of linear polarization (DoLP) maps. We present experimental results which prove that the proposed image processing pipeline outperforms previous techniques developed specifically for MSHDRPol image cubes.
NASA Astrophysics Data System (ADS)
Hinnrichs, Michele
2012-06-01
Diffractive micro-lenses configured in an array and placed in close proximity to the focal plane array enable a small, compact, simultaneous multispectral imaging camera. This approach can be applied to spectral regions from the ultraviolet (UV) to the long-wave infrared (LWIR). The number of simultaneously imaged spectral bands is determined by the number of individually configured diffractive optical micro-lenses (lenslets) in the array. Each lenslet images at a different wavelength determined by the blaze and set at the time of manufacturing based on the application. In addition, modulation of the focal length of the lenslet array with piezoelectric or electrostatic actuation will enable spectral band fill-in, allowing hyperspectral imaging. Using the lenslet array with dual-band detectors will increase the number of simultaneous spectral images by a factor of two when utilizing multiple diffraction orders. Configurations and concept designs will be presented for detection applications for biological/chemical agents, buried IEDs, and reconnaissance. The simultaneous detection of multiple spectral images in a single frame of data enhances the image processing capability by eliminating temporal differences between colors and enabling a handheld instrument that is insensitive to motion.
NASA Astrophysics Data System (ADS)
Deng, S.; Katoh, M.; Takenaka, Y.; Cheung, K.; Ishii, A.; Fujii, N.; Gao, T.
2017-10-01
This study attempted to classify three coniferous and ten broadleaved tree species by combining airborne laser scanning (ALS) data and multispectral images. The study area, located in Nagano, central Japan, is within the broadleaved forests of the Afan Woodland area. A total of 235 trees were surveyed in 2016, and we recorded the species, DBH, and tree height. The geographical position of each tree was collected using a Global Navigation Satellite System (GNSS) device. Tree crowns were manually detected using GNSS position data, field photographs, true-color orthoimages with three bands (red-green-blue, RGB), 3D point clouds, and a canopy height model derived from ALS data. Then a total of 69 features, including 27 image-based and 42 point-based features, were extracted from the RGB images and the ALS data to classify tree species. Finally, the detected tree crowns were classified into two classes for the first level (coniferous and broadleaved trees), four classes for the second level (Pinus densiflora, Larix kaempferi, Cryptomeria japonica, and broadleaved trees), and 13 classes for the third level (three coniferous and ten broadleaved species), using the 27 image-based features, 42 point-based features, all 69 features, and the best combination of features identified using a neighborhood component analysis algorithm, respectively. The overall classification accuracies reached 90 % at the first and second levels but less than 60 % at the third level. The classifications using the best combinations of features had higher accuracies than those using the image-based and point-based features and the combination of all of the 69 features.
Bautista, Pinky A; Yagi, Yukako
2012-05-01
Hematoxylin and eosin (H&E) stain is currently the most popular for routine histopathology staining. Special and/or immuno-histochemical (IHC) staining is often requested to further corroborate the initial diagnosis on H&E stained tissue sections. Digital simulation of staining (or digital staining) can be a very valuable tool to produce the desired stained images from the H&E stained tissue sections instantaneously. We present an approach to digital staining of histopathology multispectral images by combining the effects of spectral enhancement and spectral transformation. Spectral enhancement is accomplished by shifting the N-band original spectrum of the multispectral pixel with the weighted difference between the pixel's original and estimated spectrum; the spectrum is estimated using M < N principal component (PC) vectors. The pixel's enhanced spectrum is transformed to the spectral configuration associated to its reaction to a specific stain by utilizing an N × N transformation matrix, which is derived through application of least mean squares method to the enhanced and target spectral transmittance samples of the different tissue components found in the image. Results of our experiments on the digital conversion of an H&E stained multispectral image to its Masson's trichrome stained equivalent show the viability of the method.
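The two steps can be sketched as follows, with placeholder spectra standing in for the measured H&E and target-stain transmittance samples; this is an illustration of the idea, not the authors' implementation.

```python
# Schematic of the two steps: (1) enhance each pixel spectrum by adding a
# weighted difference between the original spectrum and its M-component PCA
# estimate, (2) map enhanced spectra to the target stain with an NxN matrix
# fitted by least squares from paired training spectra.
import numpy as np
from sklearn.decomposition import PCA

N, M, alpha = 16, 4, 0.5                     # bands, PCA components, weight (placeholders)
rng = np.random.default_rng(8)
he_spectra = rng.random((500, N))            # H&E transmittance samples (placeholder)
target_spectra = rng.random((500, N))        # target-stain transmittance (placeholder)

pca = PCA(n_components=M).fit(he_spectra)
estimate = pca.inverse_transform(pca.transform(he_spectra))
enhanced = he_spectra + alpha * (he_spectra - estimate)        # spectral enhancement

T, *_ = np.linalg.lstsq(enhanced, target_spectra, rcond=None)  # N x N transformation
digitally_stained = enhanced @ T
```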
Kainerstorfer, Jana M.; Polizzotto, Mark N.; Uldrick, Thomas S.; Rahman, Rafa; Hassan, Moinuddin; Najafizadeh, Laleh; Ardeshirpour, Yasaman; Wyvill, Kathleen M.; Aleman, Karen; Smith, Paul D.; Yarchoan, Robert; Gandjbakhche, Amir H.
2013-01-01
Diffuse multi-spectral imaging has been evaluated as a potential non-invasive marker of tumor response. Multi-spectral images of Kaposi sarcoma skin lesions were taken over the course of treatment, and blood volume and oxygenation concentration maps were obtained through principal component analysis (PCA) of the data. These images were compared with clinical and pathological responses determined by conventional means. We demonstrate that cutaneous lesions have increased blood volume concentration and that changes in this parameter are a reliable indicator of treatment efficacy, differentiating responders and non-responders. Blood volume decreased by at least 20% in all lesions that responded by clinical criteria and increased in the two lesions that did not respond clinically. Responses as assessed by multi-spectral imaging also generally correlated with overall patient clinical response assessment, were often detectable earlier in the course of therapy, and are less subject to observer variability than conventional clinical assessment. Tissue oxygenation was more variable, with lesions often showing decreased oxygenation in the center surrounded by a zone of increased oxygenation. This technique could potentially be a clinically useful supplement to existing response assessment in KS, providing an early, quantitative, and non-invasive marker of treatment effect. PMID:24386302
NASA Astrophysics Data System (ADS)
Bittel, Amy M.; Saldivar, Isaac S.; Nan, Xiaolin; Gibbs, Summer L.
2016-02-01
Single-molecule localization microscopy (SMLM) utilizes photoswitchable fluorophores to detect biological entities with 10-20 nm resolution. Multispectral superresolution microscopy (MSSRM) extends SMLM functionality by improving its spectral resolution up to 5 fold facilitating imaging of multicomponent cellular structures or signaling pathways. Current commercial fluorophores are not ideal for MSSRM as they are not designed to photoswitch and do not adequately cover the visible and far-red spectral regions required for MSSRM imaging. To obtain optimal MSSRM spatial and spectral resolution, fluorophores with narrow emission spectra and controllable photoswitching properties are necessary. Herein, a library of BODIPY-based fluorophores was synthesized and characterized to create optimal photoswitchable fluorophores for MSSRM. BODIPY was chosen as the core structure as it is photostable, has high quantum yield, and controllable photoswitching. The BODIPY core was modified through the addition of various aromatic moieties, resulting in a spectrally diverse library. Photoswitching properties were characterized using a novel polyvinyl alcohol (PVA) based film methodology to isolate single molecules. The PVA film methodology enabled photoswitching assessment without the need for protein conjugation, greatly improving screening efficiency of the BODIPY library. Additionally, image buffer conditions were optimized for the BODIPY-based fluorophores through systematic testing of oxygen scavenger systems, redox components, and additives. Through screening the photoswitching properties of BODIPY-based compounds in PVA films with optimized imaging buffer we identified novel fluorophores well suited for SMLM and MSSRM.
Qualitative evaluations and comparisons of six night-vision colorization methods
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Reese, Kristopher; Blasch, Erik; McManamon, Paul
2013-05-01
Current multispectral night vision (NV) colorization techniques can manipulate images to produce colorized images that closely resemble natural scenes. Colorized NV images can enhance human perception by improving observer object classification and reaction times, especially in low light conditions. This paper focuses on the qualitative (subjective) evaluation and comparison of six NV colorization methods. The multispectral images include visible (Red-Green-Blue), near infrared (NIR), and long wave infrared (LWIR) images. The six colorization methods are channel-based color fusion (CBCF), statistic matching (SM), histogram matching (HM), joint-histogram matching (JHM), statistic matching then joint-histogram matching (SM-JHM), and the lookup table (LUT). Four categories of quality measurements are used for the qualitative evaluations: contrast, detail, colorfulness, and overall quality. Each measurement is rated on a 1-to-3 scale representing low, average, and high quality, respectively. Specifically, high contrast (a score of 3) means an adequate level of brightness and contrast. High detail represents high clarity of detailed contents while maintaining low artifacts. High colorfulness preserves more natural colors (i.e., closely resembles the daylight image). Overall quality is determined by comparing the NV image to the reference image. Nine sets of multispectral NV images were used in our experiments. For each set, the six colorized NV images (produced from NIR and LWIR images) were concurrently presented to users along with the reference color (RGB) image (taken in daytime). A total of 67 subjects passed a screening test ("Ishihara Color Blindness Test") and were asked to evaluate the nine sets of colorized images. The experimental results showed the following quality order of the colorization methods, from best to worst: CBCF, SM, SM-JHM, LUT, JHM, HM. It is anticipated that this work will provide a benchmark for NV colorization and for quantitative evaluation using an objective metric such as the objective evaluation index (OEI).
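Of the six methods, statistic matching (SM) is the simplest to sketch: each channel of the false-color NV image is shifted and scaled so its mean and standard deviation match those of a reference daylight image. The code below is an illustration under that assumption, not the paper's exact procedure.

```python
# Statistic matching (SM) sketch: match per-channel mean and standard deviation
# of the source (night-vision false-color) image to a reference daylight image.
import numpy as np

def statistic_match(source, reference, eps=1e-9):
    out = np.empty(source.shape, dtype=float)
    for c in range(source.shape[2]):
        s = source[..., c].astype(float)
        r = reference[..., c].astype(float)
        out[..., c] = (s - s.mean()) / (s.std() + eps) * r.std() + r.mean()
    return np.clip(out, 0, 255)

rng = np.random.default_rng(9)
nv_false_color = rng.integers(0, 256, (240, 320, 3))
daylight_ref = rng.integers(0, 256, (240, 320, 3))
colorized = statistic_match(nv_false_color, daylight_ref)
```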
NASA Astrophysics Data System (ADS)
Taruttis, Adrian; Herzog, Eva; Razansky, Daniel; Ntziachristos, Vasilis
2011-03-01
Multispectral Optoacoustic Tomography (MSOT) is an emerging technique for high resolution macroscopic imaging with optical and molecular contrast. We present cardiovascular imaging results from a multi-element real-time MSOT system recently developed for studies on small animals. Anatomical features relevant to cardiovascular disease, such as the carotid arteries, the aorta and the heart, are imaged in mice. The system's fast acquisition time, in tens of microseconds, allows images free of motion artifacts from heartbeat and respiration. Additionally, we present in-vivo detection of optical imaging agents, gold nanorods, at high spatial and temporal resolution, paving the way for molecular imaging applications.
Airborne multispectral detection of regrowth cotton fields
NASA Astrophysics Data System (ADS)
Westbrook, John K.; Suh, Charles P.-C.; Yang, Chenghai; Lan, Yubin; Eyster, Ritchie S.
2015-01-01
Effective methods are needed for timely areawide detection of regrowth cotton plants because boll weevils (a quarantine pest) can feed and reproduce on these plants beyond the cotton production season. Airborne multispectral images of regrowth cotton plots were acquired on several dates after three shredding (i.e., stalk destruction) dates. Linear spectral unmixing (LSU) classification was applied to high-resolution airborne multispectral images of regrowth cotton plots to estimate the minimum detectable size and subsequent growth of plants. We found that regrowth cotton fields can be identified when the mean plant width is ~0.2 m for an image resolution of 0.1 m. LSU estimates of canopy cover of regrowth cotton plots correlated well (r2=0.81) with the ratio of mean plant width to row spacing, a surrogate measure of plant canopy cover. The height and width of regrowth plants were both well correlated (r2=0.94) with accumulated degree-days after shredding. The results will help boll weevil eradication program managers use airborne multispectral images to detect and monitor the regrowth of cotton plants after stalk destruction, and identify fields that may require further inspection and mitigation of boll weevil infestations.
Perceptual evaluation of color transformed multispectral imagery
NASA Astrophysics Data System (ADS)
Toet, Alexander; de Jong, Michael J.; Hogervorst, Maarten A.; Hooge, Ignace T. C.
2014-04-01
Color remapping can give multispectral imagery a realistic appearance. We assessed the practical value of this technique in two observer experiments using monochrome intensified (II) and long-wave infrared (IR) imagery, and color daylight (REF) and fused multispectral (CF) imagery. First, we investigated the amount of detail observers perceive in a short timespan. REF and CF imagery yielded the highest precision and recall measures, while II and IR imagery yielded significantly lower values. This suggests that observers have more difficulty in extracting information from monochrome than from color imagery. Next, we measured eye fixations during free image exploration. Although the overall fixation behavior was similar across image modalities, the order in which certain details were fixated varied. Persons and vehicles were typically fixated first in REF, CF, and IR imagery, while they were fixated later in II imagery. In some cases, color remapping II imagery and fusion with IR imagery restored the fixation order of these image details. We conclude that color remapping can yield enhanced scene perception compared to conventional monochrome nighttime imagery, and may be deployed to tune multispectral image representations such that the resulting fixation behavior resembles the fixation behavior corresponding to daylight color imagery.
The Multispectral Imaging Science Working Group. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
Cox, S. C. (Editor)
1982-01-01
Results of the deliberations of the six multispectral imaging science working groups (Botany, Geography, Geology, Hydrology, Imaging Science and Information Science) are summarized. Consideration was given to documenting the current state of knowledge in terrestrial remote sensing without the constraints of preconceived concepts such as possible band widths, number of bands, and radiometric or spatial resolutions of present or future systems. The findings of each working group included a discussion of desired capabilities and critical developmental issues.
Predicting Electron Population Characteristics in 2-D Using Multispectral Ground-Based Imaging
NASA Astrophysics Data System (ADS)
Grubbs, Guy; Michell, Robert; Samara, Marilia; Hampton, Donald; Jahn, Jorg-Micha
2018-01-01
Ground-based imaging and in situ sounding rocket data are compared to electron transport modeling for an active inverted-V type auroral event. The Ground-to-Rocket Electrodynamics-Electrons Correlative Experiment (GREECE) mission successfully launched from Poker Flat, Alaska, on 3 March 2014 at 11:09:50 UT and reached an apogee of approximately 335 km over the aurora. Multiple ground-based electron-multiplying charge-coupled device (EMCCD) imagers were positioned at Venetie, Alaska, and aimed toward magnetic zenith. The imagers observed the intensity of different auroral emission lines (427.8, 557.7, and 844.6 nm) at the magnetic foot point of the rocket payload. Emission line intensity data are correlated with electron characteristics measured by the GREECE onboard electron spectrometer. A modified version of the GLobal airglOW (GLOW) model is used to estimate precipitating electron characteristics based on optical emissions. GLOW predicted the electron population characteristics with 20% error given the observed spectral intensities within 10° of magnetic zenith. Predictions are within 30% of the actual values within 20° of magnetic zenith for inverted-V-type aurora. Therefore, it is argued that this technique can be used, at least in certain types of aurora, such as the inverted-V type presented here, to derive 2-D maps of electron characteristics. These can then be used to further derive 2-D maps of ionospheric parameters as a function of time, based solely on multispectral optical imaging data.
Retinal oxygen saturation evaluation by multi-spectral fundus imaging
NASA Astrophysics Data System (ADS)
Khoobehi, Bahram; Ning, Jinfeng; Puissegur, Elise; Bordeaux, Kimberly; Balasubramanian, Madhusudhanan; Beach, James
2007-03-01
Purpose: To develop a multi-spectral method to measure oxygen saturation of the retina in the human eye. Methods: Five Cynomolgus monkeys with normal eyes were anesthetized with intramuscular ketamine/xylazine and intravenous pentobarbital. Multi-spectral fundus imaging was performed in five monkeys with a commercial fundus camera equipped with a liquid crystal tuned filter in the illumination light path and a 16-bit digital camera. Recording parameters were controlled with software written specifically for the application. Seven images at successively longer oxygen-sensing wavelengths were recorded within 4 seconds. Individual images for each wavelength were captured in less than 100 msec of flash illumination. Slightly misaligned images of separate wavelengths due to slight eye motion were registered and corrected by translational and rotational image registration prior to analysis. Numerical values of relative oxygen saturation of retinal arteries and veins and the underlying tissue in between the artery/vein pairs were evaluated by an algorithm previously described, but which is now corrected for blood volume from averaged pixels (n > 1000). Color saturation maps were constructed by applying the algorithm at each image pixel using a Matlab script. Results: Both the numerical values of relative oxygen saturation and the saturation maps correspond to the physiological condition, that is, in a normal retina, the artery is more saturated than the tissue and the tissue is more saturated than the vein. With the multi-spectral fundus camera and proper registration of the multi-wavelength images, we were able to determine oxygen saturation in the primate retinal structures on a tolerable time scale which is applicable to human subjects. Conclusions: Seven wavelength multi-spectral imagery can be used to measure oxygen saturation in retinal artery, vein, and tissue (microcirculation). This technique is safe and can be used to monitor oxygen uptake in humans. This work is original and is not under consideration for publication elsewhere.
Evaluation of Chilling Injury in Mangoes Using Multispectral Imaging.
Hashim, Norhashila; Onwude, Daniel I; Osman, Muhamad Syafiq
2018-05-01
Commodities originating from tropical and subtropical climes are prone to chilling injury (CI). This injury could affect the quality and marketing potential of mango after harvest. This will later affect the quality of the produce and subsequent consumer acceptance. In this study, the appearance of CI symptoms in mango was evaluated non-destructively using multispectral imaging. The fruit were stored at 4 °C to induce CI and 12 °C to preserve the quality of the control samples for 4 days before they were taken out and stored at ambient temperature for 24 hr. Measurements using multispectral imaging and standard reference methods were conducted before and after storage. The performance of multispectral imaging was compared using standard reference properties including moisture content (MC), total soluble solids (TSS) content, firmness, pH, and color. Least square support vector machine (LS-SVM) combined with principal component analysis (PCA) were used to discriminate CI samples with those of control and before storage, respectively. The statistical results demonstrated significant changes in the reference quality properties of samples before and after storage. The results also revealed that multispectral parameters have a strong correlation with the reference parameters of L * , a * , TSS, and MC. The MC and L * were found to be the best reference parameters in identifying the severity of CI in mangoes. PCA and LS-SVM analysis indicated that the fruit were successfully classified into their categories, that is, before storage, control, and CI. This indicated that the multispectral imaging technique is feasible for detecting CI in mangoes during postharvest storage and processing. This paper demonstrates a fast, easy, and accurate method of identifying the effect of cold storage on mango, nondestructively. The method presented in this paper can be used industrially to efficiently differentiate different fruits from each other after low temperature storage. © 2018 Institute of Food Technologists®.
Land-cover classification in a moist tropical region of Brazil with Landsat TM imagery
Li, Guiying; Lu, Dengsheng; Moran, Emilio; Hetrick, Scott
2011-01-01
This research aims to improve land-cover classification accuracy in a moist tropical region in Brazil by examining the use of different remote sensing-derived variables and classification algorithms. Different scenarios based on Landsat Thematic Mapper (TM) spectral data and derived vegetation indices and textural images, and different classification algorithms, namely maximum likelihood classification (MLC), artificial neural network (ANN), classification tree analysis (CTA), and object-based classification (OBC), were explored. The results indicated that adding vegetation indices as extra bands to the Landsat TM multispectral bands did not improve overall classification performance, but adding textural images was valuable for improving vegetation classification accuracy. In particular, combining both vegetation indices and textural images with the TM multispectral bands improved overall classification accuracy by 5.6% and the kappa coefficient by 6.25%. Comparison of the different classification algorithms indicated that CTA and ANN performed poorly in this research, whereas OBC improved primary forest and pasture classification accuracies. This research indicates that the use of textural images or OBC is especially valuable for improving vegetation classes, such as upland and liana forest, that have complex stand structures and relatively large patch sizes. PMID:22368311
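The feature stacking described above adds vegetation indices and textural images to the TM bands before classification. The sketch below illustrates how such an augmented stack might be assembled, using NDVI from the red and NIR bands and a simple local-variance texture image; the band indices, the 5x5 window, and the synthetic data are assumptions, not the paper's exact configuration.

```python
# Sketch: augment a Landsat TM band stack with a vegetation index (NDVI)
# and a simple texture image (local variance) as extra classification bands.
# The red/NIR band positions and the texture window size are assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def ndvi(red, nir, eps=1e-6):
    return (nir - red) / (nir + red + eps)

def local_variance(band, size=5):
    mean = uniform_filter(band, size)
    mean_sq = uniform_filter(band * band, size)
    return mean_sq - mean * mean

# tm: (rows, cols, 6) reflectance stack; synthetic data stands in for real TM imagery.
tm = np.random.rand(200, 200, 6).astype(np.float32)
red, nir = tm[..., 2], tm[..., 3]

features = np.dstack([tm, ndvi(red, nir), local_variance(nir)])
print("feature stack shape:", features.shape)   # (200, 200, 8)
```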
NASA Astrophysics Data System (ADS)
Chernomyrdin, Nikita V.; Zaytsev, Kirill I.; Lesnichaya, Anastasiya D.; Kudrin, Konstantin G.; Cherkasova, Olga P.; Kurlov, Vladimir N.; Shikunova, Irina A.; Perchik, Alexei V.; Yurchenko, Stanislav O.; Reshetov, Igor V.
2016-09-01
In the present paper, the ability to differentiate basal cell carcinoma (BCC) from healthy skin by combining multi-spectral autofluorescence imaging, principal component analysis (PCA), and linear discriminant analysis (LDA) has been demonstrated. For this purpose, an experimental setup comprising excitation and detection branches has been assembled. The excitation branch utilizes a mercury arc lamp equipped with a 365-nm narrow-linewidth excitation filter, a beam homogenizer, and a mechanical chopper. The detection branch employs a set of bandpass filters with central wavelengths of λ = 400, 450, 500, and 550 nm, and a digital camera. The setup has been used to study three samples of freshly excised BCC. PCA and LDA have been implemented to analyze the multi-spectral fluorescence imaging data. The results of this pilot study highlight the advantages of the proposed imaging technique for skin cancer diagnosis.
Mapping lipid and collagen by multispectral photoacoustic imaging of chemical bond vibration
NASA Astrophysics Data System (ADS)
Wang, Pu; Wang, Ping; Wang, Han-Wei; Cheng, Ji-Xin
2012-09-01
Photoacoustic microscopy using vibrational overtone absorption as a contrast mechanism allows bond-selective imaging of deep tissues. Due to the spectral similarity of molecules in the overtone vibration region, it is difficult to interrogate chemical components using the photoacoustic signal at a single excitation wavelength. Here we demonstrate that lipids and collagen, two critical markers for many kinds of diseases, can be distinguished by multispectral photoacoustic imaging of the first overtone of the C-H bond. A phantom consisting of rat-tail tendon and fat was constructed to demonstrate this technique. Wavelengths between 1650 and 1850 nm were scanned to excite both the first overtone and combination bands of C-H bonds. B-scan multispectral photoacoustic images, in which each pixel contains a spectrum, were analyzed by a multivariate curve resolution-alternating least squares algorithm to recover the spatial distribution of collagen and lipids in the phantom.
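The unmixing step above relies on multivariate curve resolution-alternating least squares (MCR-ALS). A compact sketch of the alternating non-negative least-squares core of that method follows, built on scipy's nnls; the two-component synthetic mixture only illustrates the alternation and is not the authors' implementation.

```python
# Minimal MCR-ALS sketch: alternately solve D ≈ C @ S.T with non-negativity
# on concentrations C and spectra S, using scipy.optimize.nnls row by row.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_pix, n_wl, n_comp = 300, 40, 2
S_true = np.abs(rng.normal(size=(n_wl, n_comp)))      # stand-ins for "lipid" and "collagen" spectra
C_true = np.abs(rng.normal(size=(n_pix, n_comp)))
D = C_true @ S_true.T + 0.01 * rng.normal(size=(n_pix, n_wl))   # mixed per-pixel spectra

S = np.abs(rng.normal(size=(n_wl, n_comp)))            # random initial spectral estimates
for _ in range(50):
    C = np.array([nnls(S, d)[0] for d in D])           # concentrations given spectra
    S = np.array([nnls(C, d)[0] for d in D.T])         # spectra given concentrations

residual = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
print("relative residual:", round(residual, 4))
```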
NASA Technical Reports Server (NTRS)
Settle, M.; Adams, J.
1982-01-01
Improved orbital imaging capabilities were evaluated from the standpoint of different scientific disciplines, such as geology, botany, hydrology, and geography. A discussion of how geologists might exploit the anticipated measurement capabilities of future orbital imaging systems to discriminate and characterize different types of geologic materials exposed at the Earth's surface is presented. The principal objectives are to summarize past accomplishments in the use of multispectral imaging techniques for lithologic mapping; to identify critical gaps in earlier research efforts that currently limit the ability to extract useful information about the physical and chemical characteristics of geological materials from orbital multispectral surveys; and to define major thresholds of resolution and sensitivity within the visible and infrared portions of the electromagnetic spectrum which, if achieved, would result in significant improvement in our ability to discriminate and characterize different geological materials exposed at the Earth's surface.
Optical design of athermal, multispectral, radial GRIN lenses
NASA Astrophysics Data System (ADS)
Boyd, Andrew M.
2017-05-01
Military infrared systems generally must exhibit stable optical performance over a wide operating temperature range. We present a model for the first-order optical design of radial gradient-index systems, based on a form of the thermo-optic glass coefficient adapted to inhomogeneous material combinations. We find that GRIN components can significantly reduce the optical power balance of athermal, achromatic systems, which introduces the scope for a new class of broadband infrared imaging solutions. This novel first-order modelling technique is used to generate a starting point for optimisation of a SWIR/LWIR multispectral optical design.
High Throughput Multispectral Image Processing with Applications in Food Science.
Tsakanikas, Panagiotis; Pavlidis, Dimitris; Nychas, George-John
2015-01-01
Recently, machine vision has been gaining attention in food science as well as in the food industry for food quality assessment and monitoring. Within the framework of implementing Process Analytical Technology (PAT) in the food industry, image processing can be used not only for estimation and even prediction of food quality but also for detection of adulteration. Towards these applications in food science, we present here a novel methodology for automated image analysis of several kinds of food products, e.g. meat, vanilla crème, and table olives, so as to increase objectivity, data reproducibility, and low-cost information extraction, and to speed up quality assessment, without human intervention. The outcome of the image processing is propagated to the downstream analysis. The developed multispectral image processing method is based on an unsupervised machine learning approach (Gaussian mixture models) and a novel unsupervised scheme of spectral band selection for segmentation process optimization. Through the evaluation we prove its efficiency and robustness against the currently available semi-manual software, showing that the developed method is a high-throughput approach appropriate for massive data extraction from food samples.
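The segmentation described above fits a Gaussian mixture model to multispectral pixels. A minimal sketch of that unsupervised step with scikit-learn is shown below; the image geometry, band count, and number of mixture components are placeholders, and the band-selection scheme from the paper is not reproduced.

```python
# Sketch of unsupervised GMM segmentation of a multispectral image:
# pixels are clustered in spectral space and the labels reshaped to a map.
import numpy as np
from sklearn.mixture import GaussianMixture

rows, cols, bands = 120, 160, 9                 # placeholder image geometry
image = np.random.rand(rows, cols, bands)       # synthetic multispectral cube

pixels = image.reshape(-1, bands)
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(pixels)

segmentation = labels.reshape(rows, cols)       # per-pixel class map
print("segment sizes:", np.bincount(labels))
```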
Automated simultaneous multiple feature classification of MTI data
NASA Astrophysics Data System (ADS)
Harvey, Neal R.; Theiler, James P.; Balick, Lee K.; Pope, Paul A.; Szymanski, John J.; Perkins, Simon J.; Porter, Reid B.; Brumby, Steven P.; Bloch, Jeffrey J.; David, Nancy A.; Galassi, Mark C.
2002-08-01
Los Alamos National Laboratory has developed and demonstrated a highly capable system, GENIE, for the two-class problem of detecting a single feature against a background of non-feature. In addition to the two-class case, however, a commonly encountered remote sensing task is the segmentation of multispectral image data into a larger number of distinct feature classes or land cover types. To this end we have extended our existing system to allow the simultaneous classification of multiple features/classes from multispectral data. The technique builds on previous work and its core continues to utilize a hybrid evolutionary-algorithm-based system capable of searching for image processing pipelines optimized for specific image feature extraction tasks. We describe the improvements made to the GENIE software to allow multiple-feature classification and describe the application of this system to the automatic simultaneous classification of multiple features from MTI image data. We show the application of the multiple-feature classification technique to the problem of classifying lava flows on Mauna Loa volcano, Hawaii, using MTI image data and compare the classification results with standard supervised multiple-feature classification techniques.
SPEKTROP DPU: optoelectronic platform for fast multispectral imaging
NASA Astrophysics Data System (ADS)
Graczyk, Rafal; Sitek, Piotr; Stolarski, Marcin
2010-09-01
In recent years it has been easy to spot an increasing need for high-quality Earth imaging in airborne and space applications. This is due to the fact that government and local authorities urge for up-to-date topographic data for administrative purposes. On the other hand, interest in environmental sciences, the push for an ecological approach, and efficient agriculture and forest management are also heavily supported by Earth images at various resolutions and spectral ranges. The "SPEKTROP DPU: Opto-electronic platform for fast multi-spectral imaging" paper describes architectural details of the data processing unit, part of a universal and modular platform that provides high-quality imaging functionality in aerospace applications.
Applying reconfigurable hardware to the analysis of multispectral and hyperspectral imagery
NASA Astrophysics Data System (ADS)
Leeser, Miriam E.; Belanovic, Pavle; Estlick, Michael; Gokhale, Maya; Szymanski, John J.; Theiler, James P.
2002-01-01
Unsupervised clustering is a powerful technique for processing multispectral and hyperspectral images. Last year, we reported on an implementation of k-means clustering for multispectral images. Our implementation in reconfigurable hardware processed 10-channel multispectral images two orders of magnitude faster than a software implementation of the same algorithm. The advantage of using reconfigurable hardware to accelerate k-means clustering is clear; the disadvantage is that the hardware implementation worked for one specific dataset. It is a non-trivial task to change this implementation to handle a dataset with a different number of spectral channels, bits per spectral channel, or pixels, or to change the number of clusters. These changes required knowledge of the hardware design process and could take several days of a designer's time. Since multispectral data sets come in many shapes and sizes, being able to easily change the k-means implementation for these different data sets is important. For this reason, we have developed a parameterized implementation of the k-means algorithm. Our design is parameterized by the number of pixels in an image, the number of channels per pixel, and the number of bits per channel, as well as the number of clusters. These parameters can easily be changed in a few minutes by someone not familiar with the design process. The resulting implementation is very close in performance to the original hardware implementation. It has the added advantage that the parameterized design compiles approximately three times faster than the original.
NASA Astrophysics Data System (ADS)
Piermattei, Livia; Bozzi, Carlo Alberto; Mancini, Adriano; Tassetti, Anna Nora; Karel, Wilfried; Pfeifer, Norbert
2017-04-01
Unmanned aerial vehicles (UAVs) in combination with consumer grade cameras have become standard tools for photogrammetric applications and surveying. The recent generation of multispectral, cost-efficient and lightweight cameras has fostered a breakthrough in the practical application of UAVs for precision agriculture. For this application, multispectral cameras typically use Green, Red, Red-Edge (RE) and Near Infrared (NIR) wavebands to capture both visible and invisible images of crops and vegetation. These bands are very effective for deriving characteristics like soil productivity, plant health and overall growth. However, the quality of results is affected by the sensor architecture, the spatial and spectral resolutions, the pattern of image collection, and the processing of the multispectral images. In particular, collecting data with multiple sensors requires an accurate spatial co-registration of the various UAV image datasets. Multispectral processed data in precision agriculture are mainly presented as orthorectified mosaics used to export information maps and vegetation indices. This work aims to investigate the acquisition parameters and processing approaches of this new type of image data in order to generate orthoimages using different sensors and UAV platforms. Within our experimental area we placed a grid of artificial targets, whose position was determined with differential global positioning system (dGPS) measurements. Targets were used as ground control points to georeference the images and as checkpoints to verify the accuracy of the georeferenced mosaics. The primary aim is to present a method for the spatial co-registration of visible, Red-Edge, and NIR image sets. To demonstrate the applicability and accuracy of our methodology, multi-sensor datasets were collected over the same area and approximately at the same time using the fixed-wing UAV senseFly "eBee". The images were acquired with the camera Canon S110 RGB, the multispectral cameras Canon S110 NIR and S110 RE and with the multi-camera system Parrot Sequoia, which is composed of single-band cameras (Green, Red, Red Edge, NIR and RGB). Imagery from each sensor was georeferenced and mosaicked with the commercial software Agisoft PhotoScan Pro and different approaches for image orientation were compared. To assess the overall spatial accuracy of each dataset the root mean square error was computed between check point coordinates measured with dGPS and coordinates retrieved from georeferenced image mosaics. Additionally, image datasets from different UAV platforms (i.e. DJI Phantom 4Pro, DJI Phantom 3 professional, and DJI Inspire 1 Pro) were acquired over the same area and the spatial accuracy of the orthoimages was evaluated.
NASA Technical Reports Server (NTRS)
Jaggi, S.
1993-01-01
A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location from each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the Vector Quantization algorithm was further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS), with an RMS error of 15.8 pixels was 195:1 (0.41 bpp) and with an RMS error of 3.6 pixels was 18:1 (0.447 bpp). The algorithms were implemented in software and interfaced with the help of dedicated image processing boards to an 80386 PC compatible computer. Modules were developed for the task of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
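Vector quantization of this kind treats each pixel's vector of channel values as a codeword matched against a learned codebook. A sketch of that idea using k-means to build the codebook is given below; the codebook size, the synthetic 7-channel cube, and the use of scipy's kmeans2/vq routines are assumptions for illustration, and the Huffman post-compression step is omitted.

```python
# Vector quantization sketch: each pixel's 7-channel spectrum is replaced by
# the index of its nearest codebook vector (codebook learned with k-means).
import numpy as np
from scipy.cluster.vq import kmeans2, vq

rows, cols, channels = 128, 128, 7
cube = np.random.rand(rows, cols, channels).astype(np.float64)
vectors = cube.reshape(-1, channels)

codebook_size = 256                              # assumed codebook size (8-bit indices)
codebook, _ = kmeans2(vectors, codebook_size, minit="++")
indices, _ = vq(vectors, codebook)               # encode: one codebook index per pixel

reconstructed = codebook[indices].reshape(rows, cols, channels)
rms_error = np.sqrt(np.mean((cube - reconstructed) ** 2))
bits_per_sample = np.log2(codebook_size) / channels   # index bits spread over the channels
print(f"RMS error: {rms_error:.4f}, ~{bits_per_sample:.2f} bits per band sample")
```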
Fingerprint enhancement using a multispectral sensor
NASA Astrophysics Data System (ADS)
Rowe, Robert K.; Nixon, Kristin A.
2005-03-01
The level of performance of a biometric fingerprint sensor is critically dependent on the quality of the fingerprint images. One of the most common types of optical fingerprint sensors relies on the phenomenon of total internal reflectance (TIR) to generate an image. Under ideal conditions, a TIR fingerprint sensor can produce high-contrast fingerprint images with excellent feature definition. However, images produced by the same sensor under conditions that include dry skin, dirt on the skin, and marginal contact between the finger and the sensor, are likely to be severely degraded. This paper discusses the use of multispectral sensing as a means to collect additional images with new information about the fingerprint that can significantly augment the system performance under both normal and adverse sample conditions. In the context of this paper, "multispectral sensing" is used to broadly denote a collection of images taken under different illumination conditions: different polarizations, different illumination/detection configurations, as well as different wavelength illumination. Results from three small studies using an early-stage prototype of the multispectral-TIR (MTIR) sensor are presented along with results from the corresponding TIR data. The first experiment produced data from 9 people, 4 fingers from each person and 3 measurements per finger under "normal" conditions. The second experiment provided results from a study performed to test the relative performance of TIR and MTIR images when taken under extreme dry and dirty conditions. The third experiment examined the case where the area of contact between the finger and sensor is greatly reduced.
3D widefield light microscope image reconstruction without dyes
NASA Astrophysics Data System (ADS)
Larkin, S.; Larson, J.; Holmes, C.; Vaicik, M.; Turturro, M.; Jurkevich, A.; Sinha, S.; Ezashi, T.; Papavasiliou, G.; Brey, E.; Holmes, T.
2015-03-01
3D image reconstruction using light microscope modalities without exogenous contrast agents is proposed and investigated as an approach to produce 3D images of biological samples for live imaging applications. Multimodality and multispectral imaging, used in concert with this 3D optical sectioning approach is also proposed as a way to further produce contrast that could be specific to components in the sample. The methods avoid usage of contrast agents. Contrast agents, such as fluorescent or absorbing dyes, can be toxic to cells or alter cell behavior. Current modes of producing 3D image sets from a light microscope, such as 3D deconvolution algorithms and confocal microscopy generally require contrast agents. Zernike phase contrast (ZPC), transmitted light brightfield (TLB), darkfield microscopy and others can produce contrast without dyes. Some of these modalities have not previously benefitted from 3D image reconstruction algorithms, however. The 3D image reconstruction algorithm is based on an underlying physical model of scattering potential, expressed as the sample's 3D absorption and phase quantities. The algorithm is based upon optimizing an objective function - the I-divergence - while solving for the 3D absorption and phase quantities. Unlike typical deconvolution algorithms, each microscope modality, such as ZPC or TLB, produces two output image sets instead of one. Contrast in the displayed image and 3D renderings is further enabled by treating the multispectral/multimodal data as a feature set in a mathematical formulation that uses the principal component method of statistics.
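For reference, a common form of the (Csiszár) I-divergence between non-negative measured data g and a model estimate ĝ, which is presumably the type of objective the abstract refers to (the authors' exact formulation is not given here), is

I(g \,\|\, \hat{g}) = \sum_i \left( g_i \ln \frac{g_i}{\hat{g}_i} - g_i + \hat{g}_i \right),

which reduces to the Kullback-Leibler divergence when both g and ĝ are normalized to sum to one.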
Selection of optimal multispectral imaging system parameters for small joint arthritis detection
NASA Astrophysics Data System (ADS)
Dolenec, Rok; Laistler, Elmar; Stergar, Jost; Milanic, Matija
2018-02-01
Early detection and treatment of arthritis is essential for a successful outcome of the treatment, but it has proven to be very challenging with existing diagnostic methods. Novel methods based on the optical imaging of the affected joints are becoming an attractive alternative. A non-contact multispectral imaging (MSI) system for imaging of small joints of human hands and feet is being developed. In this work, a numerical simulation of the MSI system is presented. The purpose of the simulation is to determine the optimal design parameters. Inflamed and unaffected human joint models were constructed with a realistic geometry and tissue distributions, based on an MRI scan of a human finger with a spatial resolution of 0.2 mm. The light transport simulation is based on a weighted-photon 3D Monte Carlo method utilizing CUDA GPU acceleration. A uniform illumination of the finger within the 400-1100 nm spectral range was simulated and the photons exiting the joint were recorded using different acceptance angles. From the obtained reflectance and transmittance images, the spectral and spatial features most indicative of inflammation were identified. The optimal acceptance angle and spectral bands were determined. This study demonstrates that proper selection of MSI system parameters critically affects the ability of an MSI system to discriminate between unaffected and inflamed joints. The presented system design optimization approach could be applied to other pathologies.
Coastal modification of a scene employing multispectral images and vector operators.
Lira, Jorge
2017-05-01
Changes in sea level, wind patterns, sea current patterns, and tide patterns have produced morphologic transformations in the coastline area of Tamaulipas State in northeast Mexico. Such changes generated a modification of the coastline and variations in the texture-relief and texture of the continental area of Tamaulipas. Two high-resolution multispectral Satellite Pour l'Observation de la Terre (SPOT) images were employed to quantify the morphologic change of this continental area. The images cover a time span of close to 10 years. A variant of principal component analysis was used to delineate the modification of the land-water line. To quantify changes in texture-relief and texture, principal component analysis was applied to the multispectral images. The first principal components of each image were modeled as a discrete bidimensional vector field. The divergence and Laplacian vector operators were applied to the discrete vector field. The divergence provided the change in texture, while the Laplacian produced the change in texture-relief in the study area.
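The operators named above are standard differential operators applied to a discrete 2-D field. A small sketch of computing them with finite differences via numpy.gradient follows; the synthetic field components stand in for the first principal components of the two images, and the interpretation of divergence as texture change and Laplacian as texture-relief change simply mirrors the abstract's description.

```python
# Divergence and Laplacian of a discrete 2-D vector field (u, v), computed
# with central finite differences. u and v stand in for the first principal
# components of the two multispectral images.
import numpy as np

rows, cols = 100, 100
y, x = np.mgrid[0:rows, 0:cols]
u = np.sin(x / 15.0)                      # synthetic field component
v = np.cos(y / 20.0)                      # synthetic field component

du_dy, du_dx = np.gradient(u)             # np.gradient returns derivatives along axes 0 and 1
dv_dy, dv_dx = np.gradient(v)

divergence = du_dx + dv_dy                # "texture change" in the abstract's interpretation

def laplacian(f):
    fy, fx = np.gradient(f)
    fyy, _ = np.gradient(fy)
    _, fxx = np.gradient(fx)
    return fxx + fyy

laplacian_field = laplacian(u) + laplacian(v)   # "texture-relief change", component-wise sum
print(divergence.shape, laplacian_field.shape)
```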
Design and development of a simple UV fluorescence multi-spectral imaging system
NASA Astrophysics Data System (ADS)
Tovar, Carlos; Coker, Zachary; Yakovlev, Vladislav V.
2018-02-01
Healthcare access in low-resource settings is compromised by the limited availability of affordable and accurate diagnostic equipment. The four primary poverty-related diseases - AIDS, pneumonia, malaria, and tuberculosis - account for approximately 400 million annual deaths worldwide as of 2016 estimates. Current diagnostic procedures for these diseases are prolonged and can become unreliable under various conditions. We present the development of a simple low-cost UV fluorescence multi-spectral imaging system geared towards low-resource settings for a variety of biological and in-vitro applications. Fluorescence microscopy serves as a useful diagnostic indicator and imaging tool. The addition of a multi-spectral imaging modality allows for the detection of fluorophores within specific wavelength bands, as well as the distinction between fluorophores possessing overlapping spectra. The developed instrument has the potential for a very diverse range of diagnostic applications in basic biomedical science and biomedical diagnostics and imaging. The performance of the microscope will be validated with a variety of samples ranging from organic compounds to biological samples.
Hu, Zhenhua; Ma, Xiaowei; Qu, Xiaochao; Yang, Weidong; Liang, Jimin; Wang, Jing; Tian, Jie
2012-01-01
Cerenkov luminescence tomography (CLT) provides the three-dimensional (3D) radiopharmaceutical biodistribution in small living animals, which is vital to biomedical imaging. However, existing single-spectral and multispectral methods are not very efficient and effective at reconstructing the distribution of the radionuclide tracer. In this paper, we present a semi-quantitative Cerenkov radiation spectral characteristic-based source reconstruction method named the hybrid spectral CLT, to efficiently reconstruct the radionuclide tracer with both encouraging reconstruction results and less acquisition and image reconstruction time. We constructed the implantation mouse model implanted with a 400 µCi Na(131)I radioactive source and the physiological mouse model received an intravenous tail injection of 400 µCi radiopharmaceutical Iodine-131 (I-131) to validate the performance of the hybrid spectral CLT and compared the reconstruction results, acquisition, and image reconstruction time with that of single-spectral and multispectral CLT. Furthermore, we performed 3D noninvasive monitoring of I-131 uptake in the thyroid and quantified I-131 uptake in vivo using hybrid spectral CLT. Results showed that the reconstruction based on the hybrid spectral CLT was more accurate in localization and quantification than using single-spectral CLT, and was more efficient in the in vivo experiment compared with multispectral CLT. Additionally, 3D visualization of longitudinal observations suggested that the reconstructed energy of I-131 uptake in the thyroid increased with acquisition time and there was a robust correlation between the reconstructed energy versus the gamma ray counts of I-131 (r(2) = 0.8240). The ex vivo biodistribution experiment further confirmed the I-131 uptake in the thyroid for hybrid spectral CLT. Results indicated that hybrid spectral CLT could be potentially used for thyroid imaging to evaluate its function and monitor its treatment for thyroid cancer.
An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images
Sammouda, Rachid; Al-Salman, Abdul Malik; Alsanad, Ahmed
2018-01-01
Among several palmprint feature extraction methods, the HOG-based method is attractive and performs well against changes in illumination and shadowing of palmprint images. However, it still lacks the robustness to extract the palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF, which combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images by David Zhang's method to segment only the regions of interest. Next, we extracted palmprint features based on the hybrid HOG-SGF feature extraction method. Then, an optimized auto-encoder (AE) was utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) was applied for the classification task. In the evaluation phase of the proposed approach, a number of experiments were conducted on three publicly available palmprint databases, namely MS-PolyU of multispectral palmprint images and CASIA and Tongji of contactless palmprint images. Experimentally, the results reveal that the proposed approach outperforms the existing state-of-the-art approaches even when a small number of training samples are used. PMID:29762519
CART V: recent advancements in computer-aided camouflage assessment
NASA Astrophysics Data System (ADS)
Müller, Thomas; Müller, Markus
2011-05-01
In order to facilitate systematic, computer aided improvements of camouflage and concealment assessment methods, the software system CART (Camouflage Assessment in Real-Time) was built up for the camouflage assessment of objects in multispectral image sequences (see contributions to SPIE 2007-2010 [1], [2], [3], [4]). It comprises a semi-automatic marking of target objects (ground truth generation) including their propagation over the image sequence and the evaluation via user-defined feature extractors as well as methods to assess the object's movement conspicuity. In this fifth part in an annual series at the SPIE conference in Orlando, this paper presents the enhancements over the recent year and addresses the camouflage assessment of static and moving objects in multispectral image data that can show noise or image artefacts. The presented methods fathom the correlations between image processing and camouflage assessment. A novel algorithm is presented based on template matching to assess the structural inconspicuity of an object objectively and quantitatively. The results can easily be combined with an MTI (moving target indication) based movement conspicuity assessment function in order to explore the influence of object movement to a camouflage effect in different environments. As the results show, the presented methods contribute to a significant benefit in the field of camouflage assessment.
NASA Astrophysics Data System (ADS)
Onojeghuo, Alex Okiemute; Onojeghuo, Ajoke Ruth
2017-07-01
This study investigated the combined use of multispectral/hyperspectral imagery and LiDAR data for habitat mapping across parts of south Cumbria, North West England. The methodology adopted in this study integrated spectral information contained in pansharp QuickBird multispectral/AISA Eagle hyperspectral imagery and LiDAR-derived measures with object-based machine learning classifiers and ensemble analysis techniques. Using the LiDAR point cloud data, elevation models (such as the Digital Surface Model and Digital Terrain Model raster) and intensity features were extracted directly. The LiDAR-derived measures exploited in this study included Canopy Height Model, intensity and topographic information (i.e. mean, maximum and standard deviation). These three LiDAR measures were combined with spectral information contained in the pansharp QuickBird and Eagle MNF transformed imagery for image classification experiments. A fusion of pansharp QuickBird multispectral and Eagle MNF hyperspectral imagery with all LiDAR-derived measures generated the best classification accuracies, 89.8 and 92.6% respectively. These results were generated with the Support Vector Machine and Random Forest machine learning algorithms respectively. The ensemble analysis of all three learning machine classifiers for the pansharp QuickBird and Eagle MNF fused data outputs did not significantly increase the overall classification accuracy. Results of the study demonstrate the potential of combining either very high spatial resolution multispectral or hyperspectral imagery with LiDAR data for habitat mapping.
Multispectral Analysis of NMR Imagery
NASA Technical Reports Server (NTRS)
Butterfield, R. L.; Vannier, M. W. And Associates; Jordan, D.
1985-01-01
Conference paper discusses initial efforts to adapt multispectral satellite-image analysis to nuclear magnetic resonance (NMR) scans of human body. Flexibility of these techniques makes it possible to present NMR data in variety of formats, including pseudocolor composite images of pathological internal features. Techniques do not have to be greatly modified from form in which used to produce satellite maps of such Earth features as water, rock, or foliage.
NASA Astrophysics Data System (ADS)
Bell, James F.; Wellington, Danika; Hardgrove, Craig; Godber, Austin; Rice, Melissa S.; Johnson, Jeffrey R.; Fraeman, Abigail
2016-10-01
The Mars Science Laboratory (MSL) Curiosity rover Mastcam is a pair of multispectral CCD cameras that have been imaging the surface and atmosphere in three broadband visible RGB color channels as well as nine additional narrowband color channels between 400 and 1000 nm since the rover's landing in August 2012. As of Curiosity sol 1159 (the most recent PDS data release as of this writing), approximately 140 multispectral imaging targets have been imaged using all twelve unique bandpasses. Near-simultaneous imaging of an onboard calibration target allows rapid relative reflectance calibration of these data to radiance factor and estimated Lambert albedo, for direct comparison to lab reflectance spectra of rocks, minerals, and mixtures. Surface targets among this data set include a variety of outcrop and float rocks (some containing light-toned veins), unconsolidated pebbles and clasts, and loose sand and soil. Some of these targets have been brushed, scuffed, or otherwise disturbed by the rover in order to reveal the (less dusty) interiors of these materials, and those targets and each of Curiosity's drill holes and tailings piles have been specifically targeted for multispectral imaging. Analysis of the relative reflectance spectra of these materials, sometimes in concert with additional compositional and/or mineralogic information from Curiosity's ChemCam LIBS and passive-mode spectral data and CheMin XRD data, reveals the presence of relatively broad solid state crystal field and charge transfer absorption features characteristic of a variety of common iron-bearing phases, including hematite (both nanophase and crystalline), ferric sulfate, olivine, and pyroxene. In addition, Mastcam is sensitive to a weak hydration feature in the 900-1000 nm region that can provide insight on the hydration state of some of these phases, especially sulfates. Here we summarize the Mastcam multispectral data set and the major potential phase identifications made using that data set during the traverse so far in Gale crater, and describe the ways that Mastcam multispectral observations will continue to inform the ongoing ascent and exploration of Mt. Sharp, Gale crater's layered central mound of sedimentary rocks.
NASA Astrophysics Data System (ADS)
Ozolinsh, Maris; Fomins, Sergejs
2010-11-01
Multispectral color analysis was used for spectral scanning of Ishihara and Rabkin color deficiency test book images. This was done using tunable liquid-crystal (LC) filters built into the Nuance II analyzer. Multispectral analysis retains information on both the spatial content of the tests and their spectral content. Images were taken in the range of 420-720 nm with a 10 nm step. We calculated retinal neural activity charts taking into account cone sensitivity functions, and processed the charts in order to find the visibility of latent symbols in the color deficiency plates using a cross-correlation technique. In this way a quantitative measure is found for each diagnostic plate for three different color deficiency carrier types: protanopes, deuteranopes, and tritanopes. Multispectral color analysis also allows determination of the CIE xyz color coordinates of the pseudoisochromatic plate design elements and statistical analysis of these data to compare the color quality of available color deficiency test books.
NASA Astrophysics Data System (ADS)
Dong, Yang; He, Honghui; He, Chao; Ma, Hui
2016-10-01
Polarized light is sensitive to the microstructures of biological tissues and can be used to detect physiological changes. Meanwhile, spectral features of the scattered light can also provide abundant microstructural information of tissues. In this paper, we take the backscattering polarization Mueller matrix images of bovine skeletal muscle tissues during the 24-hour experimental time, and analyze their multispectral behavior using quantitative Mueller matrix parameters. In the processes of rigor mortis and proteolysis of muscle samples, multispectral frequency distribution histograms (FDHs) of the Mueller matrix elements can reveal rich qualitative structural information. In addition, we analyze the temporal variations of the sample using the multispectral Mueller matrix transformation (MMT) parameters. The experimental results indicate that the different stages of rigor mortis and proteolysis for bovine skeletal muscle samples can be judged by these MMT parameters. The results presented in this work show that combining with the multispectral technique, the FDHs and MMT parameters can characterize the microstructural variation features of skeletal muscle tissues. The techniques have the potential to be used as tools for quantitative assessment of meat qualities in food industry.
Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.
2002-01-01
Enhanced false color images from mid-IR, near-IR (NIR), and visible bands of the Landsat thematic mapper (TM) are commonly used for visually interpreting land cover type. Described here is a technique for sharpening or fusion of NIR with higher resolution panchromatic (Pan) that uses a shift-invariant implementation of the discrete wavelet transform (SIDWT) and a reported pixel-based selection rule to combine coefficients. There can be contrast reversals (e.g., at soil-vegetation boundaries between NIR and visible band images) and consequently degraded sharpening and edge artifacts. To improve performance for these conditions, I used a local area-based correlation technique originally reported for comparing image-pyramid-derived edges for the adaptive processing of wavelet-derived edge data. Also, using the redundant data of the SIDWT improves edge data generation. There is additional improvement because sharpened subband imagery is used with the edge-correlation process. A reported technique for sharpening three-band spectral imagery used forward and inverse intensity, hue, and saturation transforms and wavelet-based sharpening of intensity. This technique had limitations with opposite contrast data, and in this study sharpening was applied to single-band multispectral-Pan image pairs. Sharpening used simulated 30-m NIR imagery produced by degrading the spatial resolution of a higher resolution reference. Performance, evaluated by comparison between sharpened and reference image, was improved when sharpened subband data were used with the edge correlation.
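The fusion step above combines shift-invariant wavelet coefficients of the panchromatic and NIR images with a pixel-based selection rule. A sketch of that basic idea using PyWavelets' stationary (undecimated) wavelet transform and a maximum-absolute-detail selection rule is shown below; the edge-correlation refinement described in the abstract is omitted, the wavelet, level, and images are synthetic placeholders, and the coefficient handling assumes the list-of-levels structure returned by pywt.swt2.

```python
# Sketch of shift-invariant wavelet fusion of a panchromatic and an NIR image:
# take the stationary wavelet transform of both, keep the detail coefficient
# with the larger magnitude at each pixel, and invert. The adaptive
# edge-correlation step from the paper is not included.
import numpy as np
import pywt

size, level = 256, 2
pan = np.random.rand(size, size)          # synthetic high-resolution panchromatic image
nir = np.random.rand(size, size)          # synthetic (upsampled) NIR image

c_pan = pywt.swt2(pan, "db2", level=level)
c_nir = pywt.swt2(nir, "db2", level=level)

fused = []
for (a_p, details_p), (a_n, details_n) in zip(c_pan, c_nir):
    fused_details = tuple(np.where(np.abs(dp) >= np.abs(dn), dp, dn)
                          for dp, dn in zip(details_p, details_n))
    fused.append((a_n, fused_details))    # keep NIR approximation, take sharpened details

sharpened_nir = pywt.iswt2(fused, "db2")
print(sharpened_nir.shape)
```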
Portable multispectral imaging system for oral cancer diagnosis
NASA Astrophysics Data System (ADS)
Hsieh, Yao-Fang; Ou-Yang, Mang; Lee, Cheng-Chung
2013-09-01
This study presents a portable multispectral imaging system that can acquire images at specific spectral bands in vivo for oral cancer diagnosis. According to the research literature, the autofluorescence of cells and tissue has been widely applied to diagnose oral cancer. The spectral distribution of the excited fluorescence differs between lesioned epithelial cells and normal cells. We have developed hyperspectral and multispectral techniques for oral cancer diagnosis over three generations; this research represents the third generation. The excitation and emission spectra used for the diagnosis were obtained in the first-generation research. The portable system for detection of oral cancer is adapted from an existing handheld microscope. A UV LED is used to illuminate the surface of the oral cavity and excite the cells to produce fluorescence. The image passes through the central channel, unwanted spectral content is removed by the selected filter, and the light is focused by the focusing lens onto the image sensor. In this way, images at specific wavelengths of the fluorescence response can be acquired. The specificity and sensitivity of the system are 85% and 90%, respectively.
NASA Astrophysics Data System (ADS)
Yang, Chun-Chieh; Kim, Moon S.; Chuang, Yung-Kun; Lee, Hoyoung
2013-05-01
This paper reports the development of a multispectral algorithm, using the line-scan hyperspectral imaging system, to detect fecal contamination on leafy greens. Fresh bovine feces were applied to the surfaces of washed loose baby spinach leaves. A hyperspectral line-scan imaging system was used to acquire hyperspectral fluorescence images of the contaminated leaves. Hyperspectral image analysis resulted in the selection of the 666 nm and 688 nm wavebands for a multispectral algorithm to rapidly detect feces on leafy greens, by use of the ratio of fluorescence intensities measured at those two wavebands (666 nm over 688 nm). The algorithm successfully distinguished most of the lowly diluted fecal spots (0.05 g feces/ml water and 0.025 g feces/ml water) and some of the highly diluted spots (0.0125 g feces/ml water and 0.00625 g feces/ml water) from the clean spinach leaves. The results showed the potential of the multispectral algorithm with line-scan imaging system for application to automated food processing lines for food safety inspection of leafy green vegetables.
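The detection rule above reduces to a ratio of the fluorescence intensities at the two selected wavebands followed by thresholding. A minimal sketch of that decision rule is given below; the threshold value and the synthetic band images are illustrative placeholders, not the values used in the study.

```python
# Sketch of the two-waveband ratio test: flag pixels where the ratio of
# fluorescence at 666 nm to 688 nm exceeds a threshold.
import numpy as np

f666 = np.random.rand(480, 640).astype(np.float32)   # synthetic fluorescence image, 666 nm band
f688 = np.random.rand(480, 640).astype(np.float32)   # synthetic fluorescence image, 688 nm band

ratio = f666 / (f688 + 1e-6)          # small epsilon avoids division by zero
threshold = 1.05                      # placeholder decision threshold, not the study's value
contamination_mask = ratio > threshold

print("flagged pixels:", int(contamination_mask.sum()))
```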
Computer generated maps from digital satellite data - A case study in Florida
NASA Technical Reports Server (NTRS)
Arvanitis, L. G.; Reich, R. M.; Newburne, R.
1981-01-01
Ground cover maps are important tools to a wide array of users. Over the past three decades, much progress has been made in supplementing planimetric and topographic maps with ground cover details obtained from aerial photographs. The present investigation evaluates the feasibility of using computer maps of ground cover from satellite input tapes. Attention is given to the selection of test sites, a satellite data processing system, a multispectral image analyzer, general purpose computer-generated maps, the preliminary evaluation of computer maps, a test for areal correspondence, the preparation of overlays and acreage estimation of land cover types on the Landsat computer maps. There is every indication to suggest that digital multispectral image processing systems based on Landsat input data will play an increasingly important role in pattern recognition and mapping land cover in the years to come.
A scan-angle correction for thermal infrared multispectral data using side lapping images
Watson, K.
1996-01-01
Thermal infrared multispectral scanner (TIMS) images, acquired with side lapping flight lines, provide dual angle observations of the same area on the ground and can thus be used to estimate variations in the atmospheric transmission with scan angle. The method was tested using TIMS aircraft data for six flight lines with about 30% sidelap for an area within Joshua Tree National Park, California. Generally the results correspond to predictions for the transmission scan-angle coefficient based on a standard atmospheric model although some differences were observed at the longer wavelength channels. A change was detected for the last pair of lines that may indicate either spatial or temporal atmospheric variation. The results demonstrate that the method provides information for correcting regional survey data (requiring multiple adjacent flight lines) that can be important in detecting subtle changes in lithology.
The Multi-Spectral Imaging Diagnostic on Alcator C-MOD and TCV
NASA Astrophysics Data System (ADS)
Linehan, B. L.; Mumgaard, R. T.; Duval, B. P.; Theiler, C. G.; TCV Team
2017-10-01
The Multi-Spectral Imaging (MSI) diagnostic is a new instrument that captures simultaneous spectrally filtered images from a common sight view while maintaining a large étendue and high spatial resolution. The system uses a polychromator layout where each image is sequentially filtered. This procedure yields a high transmission for each spectral channel with minimal vignetting and aberrations. A four-wavelength system was installed on Alcator C-Mod and then moved to TCV. The system uses industrial cameras to simultaneously image the divertor region at 95 frames per second at f/# 2.8 via a coherent fiber bundle (C-Mod) or a lens-based relay optic (TCV). The images are absolutely calibrated and spatially registered enabling accurate measurement of atomic line ratios and absolute line intensities. The images will be used to study divertor detachment by imaging impurities and Balmer series emissions. Furthermore, the large field of view and an ability to support many types of detectors opens the door for other novel approaches to optically measuring plasma with high temporal, spatial, and spectral resolution. Such measurements will allow for the study of Stark broadening and divertor turbulence. Here, we present the first measurements taken with this cavity imaging system. USDoE awards DE-FC02-99ER54512 and award DE-AC05-06OR23100, ORISE, administered by ORAU.
Hyperspectral imaging for food processing automation
NASA Astrophysics Data System (ADS)
Park, Bosoon; Lawrence, Kurt C.; Windham, William R.; Smith, Doug P.; Feldner, Peggy W.
2002-11-01
This paper presents research results that demonstrate that hyperspectral imaging can be used effectively for detecting feces (from the duodenum, ceca, and colon) and ingesta on the surface of poultry carcasses, with potential application to real-time, on-line processing of poultry for automatic safety inspection. The hyperspectral imaging system included a line-scan camera with a prism-grating-prism spectrograph, fiber optic line lighting, motorized lens control, and hyperspectral image processing software. Hyperspectral image processing algorithms, specifically band ratioing of dual-wavelength (565/517) images and thresholding, were effective for the identification of fecal and ingesta contamination of poultry carcasses. A multispectral imaging system including a common aperture camera with three optical trim filters (515.4 nm with 8.6-nm FWHM, 566.4 nm with 8.8-nm FWHM, and 631 nm with 10.2-nm FWHM), which were selected and validated with the hyperspectral imaging system, was developed for a real-time, on-line application. The total image processing time required to process the multispectral images captured by the common aperture camera was approximately 251 msec, or 3.99 frames/sec. A preliminary test showed that the accuracy of the real-time multispectral imaging system in detecting feces and ingesta on corn/soybean-fed poultry carcasses was 96%. However, many false positive spots that cause system errors were also detected.
Multispectral and polarimetric photodetection using a plasmonic metasurface
NASA Astrophysics Data System (ADS)
Pelzman, Charles; Cho, Sang-Yeon
2018-01-01
We present a metasurface-integrated Si 2-D CMOS sensor array for multispectral and polarimetric photodetection applications. The demonstrated sensor is based on the polarization selective extraordinary optical transmission from periodic subwavelength nanostructures, acting as artificial atoms, known as meta-atoms. The meta-atoms were created by patterning periodic rectangular apertures that support optical resonance at the designed spectral bands. By spatially separating meta-atom clusters with different lattice constants and orientations, the demonstrated metasurface can convert the polarization and spectral information of an optical input into a 2-D intensity pattern. As a proof-of-concept experiment, we measured the linear components of the Stokes parameters directly from captured images using a CMOS camera at four spectral bands. Compared to existing multispectral polarimetric sensors, the demonstrated metasurface-integrated CMOS system is compact and does not require any moving components, offering great potential for advanced photodetection applications.
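The linear Stokes components mentioned above follow directly from the intensities transmitted by differently oriented analyzer clusters. A sketch assuming the usual four analyzer orientations (0°, 45°, 90°, 135°) per spectral band is given below; the mapping of meta-atom clusters to these angles is an assumption for illustration, not the device's documented layout.

```python
# Sketch: linear Stokes parameters from intensities behind four linear
# analyzers at 0, 45, 90 and 135 degrees (one set per spectral band).
import numpy as np

# Synthetic per-band intensity images (one 2-D frame per analyzer orientation).
i0, i45, i90, i135 = (np.random.rand(64, 64) for _ in range(4))

s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
s1 = i0 - i90                        # horizontal vs vertical linear polarization
s2 = i45 - i135                      # +45 vs -45 degree linear polarization

dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)   # degree of linear polarization
print("mean DoLP:", float(dolp.mean()))
```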
NASA Astrophysics Data System (ADS)
Kupinski, Meredith; Rehbinder, Jean; Haddad, Huda; Deby, Stanislas; Vizet, Jérémy; Teig, Benjamin; Nazac, André; Pierangelo, Angelo; Moreau, François; Novikova, Tatiana
2017-07-01
Significant contrast in visible wavelength Mueller matrix images for healthy and pre-cancerous regions of excised cervical tissue is shown. A novel classification algorithm is used to compute a test statistic from a small patient population.
Assessing carotid atherosclerosis by fiber-optic multispectral photoacoustic tomography
NASA Astrophysics Data System (ADS)
Hui, Jie; Li, Rui; Wang, Pu; Phillips, Evan; Bruning, Rebecca; Liao, Chien-Sheng; Sturek, Michael; Goergen, Craig J.; Cheng, Ji-Xin
2015-03-01
Atherosclerotic plaque at the carotid bifurcation is the underlying cause of the majority of ischemic strokes. Noninvasive imaging and quantification of the compositional changes preceding gross anatomic changes within the arterial wall is essential for diagnosis of the disease. Current imaging modalities such as duplex ultrasound, computed tomography, and positron emission tomography are limited by a lack of compositional contrast and by the detection only of flow-limiting lesions. Although high-resolution magnetic resonance imaging has been developed to characterize atherosclerotic plaque composition, its accessibility for wide clinical use is limited. Here, we demonstrate a fiber-based multispectral photoacoustic tomography system for excitation of lipids and external acoustic detection of the generated ultrasound. Using sequential ultrasound imaging of ex vivo preparations we achieved ~2 cm imaging depth and chemical selectivity for assessment of human arterial plaques. A multivariate curve resolution alternating least squares analysis method was applied to resolve the major chemical components, including intravascular lipid, intramuscular fat, and blood. These results show the promise of detecting carotid plaque in vivo through esophageal fiber-optic excitation of lipids and external acoustic detection of the generated ultrasound. This imaging system has great potential for serving as a point-of-care device for early diagnosis of carotid artery disease in the clinic.
Multispectral Imaging in Cultural Heritage Conservation
NASA Astrophysics Data System (ADS)
Del Pozo, S.; Rodríguez-Gonzálvez, P.; Sánchez-Aparicio, L. J.; Muñoz-Nieto, A.; Hernández-López, D.; Felipe-García, B.; González-Aguilera, D.
2017-08-01
This paper sums up the main contributions derived from the thesis entitled "Multispectral imaging for the analysis of materials and pathologies in civil engineering, constructions and natural spaces", awarded by CIPA-ICOMOS for its connection with the preservation of Cultural Heritage. The thesis is framed within close-range remote sensing approaches based on the fusion of sensors operating in the optical domain (visible to shortwave infrared spectrum). In the field of heritage preservation, multispectral imaging is a suitable technique due to its non-destructive nature and its versatility. It combines imaging and spectroscopy to analyse materials and land covers and enables the use of a variety of different geomatic sensors for this purpose. These sensors collect both spatial and spectral information for a given scenario and a specific spectral range, so that their smallest storage units record the spectral properties of the radiation reflected by the surface of interest. The main goal of this research work is to characterise different construction materials as well as the main pathologies of Cultural Heritage elements by combining active and passive sensors recording data in different ranges. Conclusions about the suitability of each type of sensor and spectral range are drawn for each particular case study and damage type. It should be emphasised that the results are not limited to images, since 3D intensity data from laser scanners can be integrated with 2D data from passive sensors, yielding high-quality products due to the added value that metric information brings to multispectral images.
Integration of aerial remote sensing imaging data in a 3D-GIS environment
NASA Astrophysics Data System (ADS)
Moeller, Matthias S.
2003-03-01
For some years, sensor systems have been available that provide digital images of a new quality. In particular, aerial stereo scanners acquire digital multispectral images with an extremely high ground resolution of about 0.10-0.15 m and additionally provide a Digital Surface Model (DSM). Both of these imaging products can be used for detailed monitoring at scales up to 1:500. The processed, georeferenced multispectral orthoimages can be readily integrated into GIS, making them useful for a number of applications. The DSM, derived from the forward- and backward-facing sensors of an aerial imaging system, provides a ground resolution of 0.5 m and can be used for 3D visualization purposes. In some cases it is essential to store the ground elevation as a Digital Terrain Model (DTM) and the height of 3-dimensional objects in a separate database. Existing automated algorithms do not work precisely for the extraction of DTMs from aerial scanner DSMs. This paper presents a new approach which combines the visible image data and the DSM data for the generation of DTMs with a reliable geometric accuracy. Already existing cadastral data can be used as a knowledge base for the extraction of building heights in cities. These elevation data are the essential source for a GIS-based urban information system with a 3D visualization component.
NASA Astrophysics Data System (ADS)
Yankelevich, Diego R.; Ma, Dinglong; Liu, Jing; Sun, Yang; Sun, Yinghua; Bec, Julien; Elson, Daniel S.; Marcu, Laura
2014-03-01
The application of time-resolved fluorescence spectroscopy (TRFS) to in vivo tissue diagnosis requires a method for fast acquisition of fluorescence decay profiles in multiple spectral bands. This study focusses on development of a clinically compatible fiber-optic based multispectral TRFS (ms-TRFS) system together with validation of its accuracy and precision for fluorescence lifetime measurements. It also presents the expansion of this technique into an imaging spectroscopy method. A tandem array of dichroic beamsplitters and filters was used to record TRFS decay profiles at four distinct spectral bands where biological tissue typically presents fluorescence emission maxima, namely, 390, 452, 542, and 629 nm. Each emission channel was temporally separated by using transmission delays through 200 μm diameter multimode optical fibers of 1, 10, 19, and 28 m lengths. A Laguerre-expansion deconvolution algorithm was used to compensate for modal dispersion inherent to large diameter optical fibers and the finite bandwidth of detectors and digitizers. The system was found to be highly efficient and fast requiring a few nano-Joule of laser pulse energy and <1 ms per point measurement, respectively, for the detection of tissue autofluorescent components. Organic and biological chromophores with lifetimes that spanned a 0.8-7 ns range were used for system validation, and the measured lifetimes from the organic fluorophores deviated by less than 10% from values reported in the literature. Multi-spectral lifetime images of organic dye solutions contained in glass capillary tubes were recorded by raster scanning the single fiber probe in a 2D plane to validate the system as an imaging tool. The lifetime measurement variability was measured indicating that the system provides reproducible results with a standard deviation smaller than 50 ps. The ms-TRFS is a compact apparatus that makes possible the fast, accurate, and precise multispectral time-resolved fluorescence lifetime measurements of low quantum efficiency sub-nanosecond fluorophores.
Image processing of underwater multispectral imagery
Zawada, D. G.
2003-01-01
Capturing in situ fluorescence images of marine organisms presents many technical challenges. The effects of the medium, as well as the particles and organisms within it, are intermixed with the desired signal. Methods for extracting and preparing the imagery for analysis are discussed in reference to a novel underwater imaging system called the low-light-level underwater multispectral imaging system (LUMIS). The instrument supports both uni- and multispectral collections, each of which is discussed in the context of an experimental application. In unispectral mode, LUMIS was used to investigate the spatial distribution of phytoplankton. A thin sheet of laser light (532 nm) induced chlorophyll fluorescence in the phytoplankton, which was recorded by LUMIS. Inhomogeneities in the light sheet led to the development of a beam-pattern-correction algorithm. Separating individual phytoplankton cells from a weak background fluorescence field required a two-step procedure consisting of edge detection followed by a series of binary morphological operations. In multispectral mode, LUMIS was used to investigate the bio-assay potential of fluorescent pigments in corals. Problems with the commercial optical-splitting device produced nonlinear distortions in the imagery. A tessellation algorithm, including an automated tie-point-selection procedure, was developed to correct the distortions. Only pixels corresponding to coral polyps were of interest for further analysis. Extraction of these pixels was performed by a dynamic global-thresholding algorithm.
NASA Astrophysics Data System (ADS)
Zhou, Y.; Zhao, H.; Hao, H.; Wang, C.
2018-05-01
Accurate remote sensing water extraction is one of the primary tasks of watershed ecological environment studies. The Yanhe water system has the typical characteristics of a small water volume and narrow river channels, which makes conventional water extraction methods such as the Normalized Difference Water Index (NDWI) difficult to apply. A new Multi-Spectral Threshold segmentation of the NDWI (MST-NDWI) water extraction method is proposed to achieve accurate water extraction in the Yanhe watershed. In the MST-NDWI method, the spectral characteristics of water bodies and typical backgrounds in Landsat/TM images of the Yanhe watershed were evaluated. Multi-spectral thresholds (TM1, TM4, TM5) based on maximum likelihood were applied before the NDWI water extraction to segment built-up land and small linear rivers. With the proposed method, a water map was extracted from Landsat/TM images of 2010 in China. An accuracy assessment was conducted to compare the proposed method with conventional water indexes such as the NDWI, Modified NDWI (MNDWI), Enhanced Water Index (EWI), and Automated Water Extraction Index (AWEI). The results show that the MST-NDWI method achieves better water extraction accuracy in the Yanhe watershed and can effectively suppress confusing background objects compared to the conventional water indexes. The MST-NDWI method integrates the NDWI with multi-spectral threshold segmentation, yielding richer information and markedly better accuracy for water extraction in the Yanhe watershed.
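As a rough illustration of the flow described above (multi-spectral threshold segmentation applied before an NDWI test), the following NumPy sketch computes the NDWI from green and NIR bands and masks out bright backgrounds with thresholds on TM bands 1, 4, and 5. The band assignments, threshold values, and the direction of the threshold tests are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index: (Green - NIR) / (Green + NIR)."""
    return (green - nir) / (green + nir + 1e-9)

def mst_ndwi_water_mask(tm1, tm2, tm4, tm5,
                        t1=0.12, t4=0.10, t5=0.08, ndwi_thresh=0.0):
    """Sketch of an MST-NDWI style extraction.

    tm1, tm2, tm4, tm5 : reflectance arrays for Landsat/TM bands 1 (blue),
    2 (green), 4 (NIR), and 5 (SWIR).  The thresholds t1, t4, t5 and
    ndwi_thresh are hypothetical placeholders, not values from the paper.
    """
    # Multi-spectral threshold segmentation: flag bright built-up/background
    # pixels before applying the water index.
    background = (tm1 > t1) & (tm4 > t4) & (tm5 > t5)

    # NDWI from the green and NIR bands.
    index = ndwi(tm2, tm4)

    # Water pixels: positive NDWI that survive the background segmentation.
    return (index > ndwi_thresh) & ~background

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shape = (100, 100)
    bands = {b: rng.uniform(0.0, 0.3, shape) for b in ("tm1", "tm2", "tm4", "tm5")}
    mask = mst_ndwi_water_mask(**bands)
    print("water fraction:", mask.mean())
```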
A New Object-Based Framework to Detect Shadows in High-Resolution Satellite Imagery Over Urban Areas
NASA Astrophysics Data System (ADS)
Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.
2015-12-01
In this paper, a new object-based framework to detect shadow areas in high-resolution satellite images is proposed. To produce a pixel-level shadow map, state-of-the-art supervised machine learning algorithms are employed. Automatic ground-truth generation, based on Otsu thresholding of shadow and non-shadow indices, is used to train the classifiers. The image scene is then segmented to create image objects. To detect shadow objects, a majority vote over the pixel-based shadow detection result is applied to each object. A GeoEye-1 multi-spectral image over an urban area in Qom city, Iran, is used in the experiments. The results show the superiority of the proposed method over traditional pixel-based approaches, both visually and quantitatively.
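The pixel-to-object workflow in the abstract (Otsu-thresholded pixel labels, segmentation, then a per-object majority vote) can be sketched as follows. The shadow index (plain image brightness), the SLIC segmentation, and the 0.5 voting threshold are stand-ins assumed for illustration; the paper uses its own shadow/non-shadow indices, segmentation, and trained classifiers.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.segmentation import slic

def pixel_shadow_map(intensity):
    """Pixel-level shadow candidates via Otsu thresholding.

    The paper thresholds dedicated shadow/non-shadow indices; simple image
    brightness is used here only as a stand-in.
    """
    t = threshold_otsu(intensity)
    return intensity < t  # darker-than-threshold pixels as shadow candidates

def object_shadow_map(image, pixel_shadows, n_segments=500):
    """Object-level shadow map: majority vote of pixel labels within each segment."""
    segments = slic(image, n_segments=n_segments, start_label=0)
    shadow_objects = np.zeros_like(pixel_shadows)
    for seg_id in np.unique(segments):
        member = segments == seg_id
        # A segment is labelled shadow if more than half of its pixels are.
        shadow_objects[member] = pixel_shadows[member].mean() > 0.5
    return shadow_objects

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    rgb = rng.uniform(0, 1, (128, 128, 3))      # stand-in multispectral image
    intensity = rgb.mean(axis=2)
    px = pixel_shadow_map(intensity)
    obj = object_shadow_map(rgb, px)
    print("pixel-level shadow fraction:", px.mean(), "object-level:", obj.mean())
```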
Integrative Multi-Spectral Sensor Device for Far-Infrared and Visible Light Fusion
NASA Astrophysics Data System (ADS)
Qiao, Tiezhu; Chen, Lulu; Pang, Yusong; Yan, Gaowei
2018-06-01
Infrared and visible light image fusion technology has been a hot spot in multi-sensor fusion research in recent years. Existing infrared and visible light fusion technologies require image registration before fusion because they use two separate cameras; however, the performance of such registration techniques still needs improvement. Hence, a novel integrative multi-spectral sensor device is proposed for infrared and visible light fusion: using a beam-splitter prism, the coaxial light entering through a single lens is projected onto an infrared charge-coupled device (CCD) and a visible light CCD, respectively. In this paper, the imaging mechanism of the proposed sensor device is studied together with the signal acquisition and fusion process. A simulation experiment, which covers the entire chain of the optical system, signal acquisition, and signal fusion, is constructed based on an imaging effect model. Additionally, a quality evaluation index is adopted to analyze the simulation results. The experimental results demonstrate that the proposed sensor device is effective and feasible.
Multispectral Mosaic of the Aristarchus Crater and Plateau
1998-06-03
The Aristarchus region is one of the most diverse and interesting areas on the Moon. About 500 images from NASA's Clementine spacecraft were processed and combined into a multispectral mosaic of this region. http://photojournal.jpl.nasa.gov/catalog/PIA00090
Pixel-based parametric source depth map for Cerenkov luminescence imaging
NASA Astrophysics Data System (ADS)
Altabella, L.; Boschi, F.; Spinelli, A. E.
2016-01-01
Optical tomography represents a challenging problem in optical imaging because of the intrinsically ill-posed inverse problem due to photon diffusion. Cerenkov luminescence tomography (CLT) of the optical photons produced in tissues by several radionuclides (e.g., 32P, 18F, 90Y) has been investigated using both 3D multispectral approaches and multiview methods. Difficulty in achieving convergence with 3D algorithms can discourage the use of this technique to recover source depth and intensity. For these reasons, we developed a faster, corrected 2D approach based on multispectral acquisitions that obtains the source depth and intensity through pixel-based fitting of the source intensity. Monte Carlo simulations and experimental data were used to develop and validate the method for obtaining a parametric map of source depth. With this approach we obtained parametric source depth maps with a precision between 3% and 7% for the MC simulations and 5-6% for the experimental data. Using this method we are able to obtain reliable information about the depth of the Cerenkov luminescence source with a simple and flexible procedure.
NASA Astrophysics Data System (ADS)
Costanzo, Antonio; Montuori, Antonio; Silva, Juan Pablo; Silvestri, Malvina; Musacchio, Massimo; Buongiorno, Maria Fabrizia; Stramondo, Salvatore
2016-08-01
In this work, a web-GIS procedure to map the risk of road blockage in urban environments through the combined use of space-borne and airborne remote sensing sensors is presented. The methodology consists of (1) the provision of a geo-database through the integration of space-borne multispectral images and airborne LiDAR data products; (2) the modeling of building vulnerability based on the corresponding 3D geometry and construction-time information; and (3) the GIS-based mapping of road closures due to seismically induced building collapses, based on the characteristic building height and the road width. Experimental results, gathered for the Cosenza urban area, demonstrate the benefits of both the proposed approach and the GIS-based integration of multi-platform remote sensing sensors and techniques for seismic road assessment purposes.
NASA Astrophysics Data System (ADS)
Chen, Chung-Hao; Yao, Yi; Chang, Hong; Koschan, Andreas; Abidi, Mongi
2013-06-01
Due to increasing security concerns, a complete security system should consist of two major components: a computer-based face-recognition system and a real-time automated video surveillance system. A computer-based face-recognition system can be used in gate access control for identity authentication. In recent studies, multispectral imaging and fusion of multispectral narrow-band images in the visible spectrum have been employed and proven to enhance recognition performance over conventional broad-band images, especially when the illumination changes. Thus, we present an automated method that specifies the optimal spectral ranges under the given illumination. Experimental results verify the consistent performance of our algorithm via the observation that an identical set of spectral band images is selected under all tested conditions. Our discovery can be practically used for a new customized sensor design associated with given illuminations for improved face recognition performance over conventional broad-band images. In addition, once a person is authorized to enter a restricted area, we still need to continuously monitor his/her activities for the sake of security. Because pan-tilt-zoom (PTZ) cameras are capable of covering a panoramic area and maintaining high-resolution imagery for real-time behavior understanding, research on automated surveillance systems with multiple PTZ cameras has become increasingly important. Most existing algorithms require prior knowledge of the intrinsic parameters of the PTZ camera to infer the relative positioning and orientation among multiple PTZ cameras. To overcome this limitation, we propose a novel mapping algorithm that derives the relative positioning and orientation between two PTZ cameras based on a unified polynomial model. This reduces the dependence on knowledge of the intrinsic parameters of the PTZ cameras and their relative positions. Experimental results demonstrate that our proposed algorithm offers substantially reduced computational complexity and improved flexibility at the cost of slightly decreased pixel accuracy as compared to Chen and Wang's method [18].
Bautista, Pinky A; Yagi, Yukako
2011-01-01
In this paper we introduce a digital staining method for histopathology images captured with an n-band multispectral camera. The method consists of two major processes: enhancement of the original spectral transmittance and transformation of the enhanced transmittance to its target spectral configuration. Enhancement is accomplished by shifting the original transmittance by the scaled difference between the original transmittance and the transmittance estimated with m dominant principal component (PC) vectors; the m PC vectors were determined from the transmittance samples of the background image. Transformation of the enhanced transmittance to the target spectral configuration was done using an n×n transformation matrix, which was derived by applying a least-squares method to the enhanced and target spectral training data samples of the different tissue components. Experimental results on the digital conversion of a hematoxylin and eosin (H&E) stained multispectral image to its Masson's trichrome (MT) stained equivalent show the viability of the method.
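A minimal sketch of the two processes described above, assuming the enhancement has the form T + k(T − T_est) with T_est reconstructed from m background principal components, and the spectral transformation is an n×n matrix fitted by least squares. The scaling factor, band count, and random stand-in spectra are illustrative only.

```python
import numpy as np

def enhance_transmittance(T, background_samples, m=3, k=1.0):
    """Shift each spectrum by the scaled difference between the original
    transmittance and its estimate from m dominant background PC vectors.

    T : (pixels, n_bands) transmittance spectra.
    background_samples : (samples, n_bands) spectra of the background region.
    k : scaling factor (illustrative value; the paper determines its own).
    """
    mean_bg = background_samples.mean(axis=0)
    # Principal components of the background transmittance via SVD.
    _, _, Vt = np.linalg.svd(background_samples - mean_bg, full_matrices=False)
    pcs = Vt[:m]                                   # (m, n_bands)
    coeffs = (T - mean_bg) @ pcs.T                 # projection on the m PCs
    T_est = mean_bg + coeffs @ pcs                 # reconstruction from m PCs
    return T + k * (T - T_est)

def staining_transform(T_enhanced_train, T_target_train):
    """n x n transformation matrix mapping enhanced spectra to target spectra,
    estimated by least squares over paired training samples."""
    M, *_ = np.linalg.lstsq(T_enhanced_train, T_target_train, rcond=None)
    return M                                       # apply with T_enhanced @ M

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n_bands = 16
    he = rng.uniform(0.2, 1.0, (500, n_bands))     # stand-in H&E spectra
    mt = rng.uniform(0.2, 1.0, (500, n_bands))     # stand-in target MT spectra
    bg = rng.uniform(0.8, 1.0, (200, n_bands))     # background (unstained) spectra
    he_enh = enhance_transmittance(he, bg)
    M = staining_transform(he_enh, mt)
    print("transform matrix shape:", M.shape)
```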
Harada, Ryuichi; Okamura, Nobuyuki; Furumoto, Shozo; Yoshikawa, Takeo; Arai, Hiroyuki; Yanai, Kazuhiko; Kudo, Yukitsuka
2014-02-01
Selective visualization of amyloid-β and tau protein deposits will help to understand the pathophysiology of Alzheimer's disease (AD). Here, we introduce a novel fluorescent probe that can distinguish between these two deposits by a multispectral fluorescence imaging technique. Fluorescence spectral analysis was performed using AD brain sections stained with novel fluorescence compounds. A competitive binding assay using [(3)H]-PiB was performed to evaluate the binding affinity of BF-188 for synthetic amyloid-β (Aβ) and tau fibrils. In AD brain sections, BF-188 clearly stained Aβ and tau protein deposits with different fluorescence spectra. In vitro binding assays indicated that BF-188 bound to both amyloid-β and tau fibrils with high affinity (Ki < 10 nM). In addition, BF-188 showed excellent blood-brain barrier permeability in mice. Multispectral imaging with BF-188 could potentially be used for selective in vivo imaging of tau deposits as well as amyloid-β in the brain.
VIS-NIR multispectral synchronous imaging pyrometer for high-temperature measurements.
Fu, Tairan; Liu, Jiangfan; Tian, Jibin
2017-06-01
A visible-infrared multispectral synchronous imaging pyrometer was developed for simultaneous, multispectral, two-dimensional high-temperature measurements. The multispectral imaging pyrometer uses a prism-separation design covering the spectral range of 650-950 nm and multi-sensor fusion of three CCD sensors for high-temperature measurements. The pyrometer has 650-750 nm, 750-850 nm, and 850-950 nm channels, all sharing the same optical path. The wavelength choice for each channel is flexible; three center wavelengths (700 nm, 810 nm, and 920 nm) with a full width at half maximum of 3 nm were used here. The three image sensors were precisely aligned to avoid spectrum artifacts by micro-mechanical adjustment of the sensors relative to each other to position them within a quarter pixel of one another. The pyrometer was calibrated against a standard blackbody source, and the temperature measurement uncertainty was within 0.21 °C-0.99 °C over the temperature range of 600 °C-1800 °C for the blackbody measurements. The pyrometer was then used to measure the leading-edge temperatures of a ceramic model exposed to a high-enthalpy plasma aerodynamic heating environment to verify the system's applicability. The measured temperature ranges are 701-991 °C, 701-1134 °C, and 701-834 °C at the heating transient, steady-state, and cooling transient times, respectively. A significant temperature gradient (170 °C/mm) was observed away from the leading edge facing the plasma jet during the steady-state heating time. Temperature non-uniformity on the surface persists during the entire aerodynamic heating process; however, the temperature distribution becomes more uniform after the heater is shut down and the experimental model cools naturally. These results show that the multispectral simultaneous imaging measurement mode provides a wide temperature range in a single imaging measurement of high spatial temperature gradients in transient applications.
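The instrument itself is calibrated against a blackbody, but the idea of deriving temperature from two of its channels can be illustrated with a Wien-approximation two-colour ratio calculation. The sketch below assumes a grey body and uses the paper's 700 nm and 810 nm channel centres; it is not the authors' calibration procedure.

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def ratio_temperature(L1, L2, lam1=700e-9, lam2=810e-9):
    """Two-colour ratio pyrometry under the Wien approximation.

    L1, L2 : spectral radiance values (or images) at wavelengths lam1 and lam2.
    A grey body (equal emissivity in both channels) is assumed, which the
    real, blackbody-calibrated instrument does not need to assume.
    """
    ratio = np.asarray(L1, dtype=float) / np.asarray(L2, dtype=float)
    # ln(L1/L2) = 5*ln(lam2/lam1) + (C2/T) * (1/lam2 - 1/lam1)
    return C2 * (1.0 / lam2 - 1.0 / lam1) / (np.log(ratio) - 5.0 * np.log(lam2 / lam1))

def wien_radiance(T, lam):
    """Wien-approximation radiance (arbitrary scale) used to test the inversion."""
    return lam ** -5 * np.exp(-C2 / (lam * T))

if __name__ == "__main__":
    T_true = 1273.15  # 1000 degC
    L1 = wien_radiance(T_true, 700e-9)
    L2 = wien_radiance(T_true, 810e-9)
    print("recovered temperature [K]:", ratio_temperature(L1, L2))
```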
The Athena Pancam and Color Microscopic Imager (CMI)
NASA Technical Reports Server (NTRS)
Bell, J. F., III; Herkenhoff, K. E.; Schwochert, M.; Morris, R. V.; Sullivan, R.
2000-01-01
The Athena Mars rover payload includes two primary science-grade imagers: Pancam, a multispectral, stereo, panoramic camera system, and the Color Microscopic Imager (CMI), a multispectral and variable depth-of-field microscope. Both of these instruments will help to achieve the primary Athena science goals by providing information on the geology, mineralogy, and climate history of the landing site. In addition, Pancam provides important support for rover navigation and target selection for Athena in situ investigations. Here we describe the science goals, instrument designs, and instrument performance of the Pancam and CMI investigations.
Component pattern analysis of chemicals using multispectral THz imaging system
NASA Astrophysics Data System (ADS)
Kawase, Kodo; Ogawa, Yuichi; Watanabe, Yuki
2004-04-01
We have developed a novel basic technology for terahertz (THz) imaging, which allows detection and identification of chemicals by introducing the component spatial pattern analysis. The spatial distributions of the chemicals were obtained from terahertz multispectral transillumination images, using absorption spectra previously measured with a widely tunable THz-wave parametric oscillator. Further we have applied this technique to the detection and identification of illicit drugs concealed in envelopes. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.
NASA Technical Reports Server (NTRS)
Manohar, Mareboyana; Tilton, James C.
1994-01-01
A progressive vector quantization (VQ) compression approach is discussed which decomposes image data into a number of levels using full search VQ. The final level is losslessly compressed, enabling lossless reconstruction. The computational difficulties are addressed by implementation on a massively parallel SIMD machine. We demonstrate progressive VQ on multispectral imagery obtained from the Advanced Very High Resolution Radiometer instrument and other Earth observation image data, and investigate the trade-offs in selecting the number of decomposition levels and codebook training method.
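A toy version of the progressive decomposition described above, with scikit-learn k-means codebooks standing in for full-search VQ and the final-level residual retained so that reconstruction is exact when that level is stored losslessly. Codebook sizes and the number of levels are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def progressive_vq_encode(vectors, levels=3, codebook_size=16, seed=0):
    """Toy progressive VQ: quantize, pass the residual to the next level,
    and keep the last residual exactly so the stack reconstructs losslessly.

    vectors : (n_samples, n_features) array, e.g. per-pixel spectral vectors.
    """
    residual = vectors.astype(float)
    codebooks, indices = [], []
    for _ in range(levels):
        km = KMeans(n_clusters=codebook_size, n_init=4, random_state=seed).fit(residual)
        codebooks.append(km.cluster_centers_)
        indices.append(km.labels_)
        residual = residual - km.cluster_centers_[km.labels_]
    return codebooks, indices, residual  # residual would be losslessly coded

def progressive_vq_decode(codebooks, indices, final_residual=None):
    """Reconstruct progressively; adding the stored residual gives a lossless result."""
    out = sum(cb[idx] for cb, idx in zip(codebooks, indices))
    if final_residual is not None:
        out = out + final_residual
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    pixels = rng.normal(size=(1000, 5))            # stand-in multispectral pixels
    cbs, idxs, res = progressive_vq_encode(pixels)
    lossy = progressive_vq_decode(cbs, idxs)
    lossless = progressive_vq_decode(cbs, idxs, res)
    print("lossy RMSE:", np.sqrt(((pixels - lossy) ** 2).mean()))
    print("lossless max error:", np.abs(pixels - lossless).max())
```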
Laser- and Multi-Spectral Monitoring of Natural Objects from UAVs
NASA Astrophysics Data System (ADS)
Reiterer, Alexander; Frey, Simon; Koch, Barbara; Stemmler, Simon; Weinacker, Holger; Hoffmann, Annemarie; Weiler, Markus; Hergarten, Stefan
2016-04-01
The paper describes the research, development, and evaluation of a lightweight sensor system for UAVs. The system is composed of three main components: (1) a laser scanning module, (2) a multi-spectral camera system, and (3) a processing/storage unit. All three components are newly developed. Besides measurement precision and frequency, the low weight has been one of the most challenging tasks. The current system has a total weight of about 2.5 kg and is designed as a self-contained unit (incl. storage and battery units). The main features of the system are: laser-based multi-echo 3D measurement at a wavelength of 905 nm (completely eye-safe), a measurement range of up to 200 m, a measurement frequency of 40 kHz, a scanning frequency of 16 Hz, and a relative distance accuracy of 10 mm. The system is equipped with both GNSS and an IMU. Alternatively, a multi-visual-odometry system has been integrated to estimate the trajectory of the UAV from image features (based on this system, a calculation of 3D coordinates without GNSS is possible). The integrated multi-spectral camera system is based on conventional CMOS image chips equipped with special sets of band-pass interference filters with a full width at half maximum (FWHM) of 50 nm. Good results for calculating the normalized difference vegetation index (NDVI) and the wide dynamic range vegetation index (WDRVI) have been achieved using the band-pass interference filter set with a FWHM of 50 nm and exposure times between 5,000 μs and 7,000 μs. The system is currently used for monitoring natural objects and surfaces, such as forests, as well as for geo-risk analysis (landslides). By measuring 3D geometric and multi-spectral information, reliable monitoring and interpretation of the data set is possible. The paper gives an overview of the development steps, the system, the evaluation, and first results.
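The two vegetation indices mentioned above have standard definitions that can be computed directly from the red and NIR band images; the weighting coefficient used for the WDRVI below is a typical literature value, not one quoted in the paper.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-9)

def wdrvi(nir, red, alpha=0.2):
    """Wide Dynamic Range Vegetation Index; alpha of roughly 0.1-0.2 is a
    commonly used weighting coefficient, not a value quoted in the paper."""
    return (alpha * nir - red) / (alpha * nir + red + 1e-9)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    red = rng.uniform(0.02, 0.15, (64, 64))   # stand-in band-pass reflectance
    nir = rng.uniform(0.20, 0.60, (64, 64))
    print("mean NDVI:", ndvi(nir, red).mean(), "mean WDRVI:", wdrvi(nir, red).mean())
```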
Enhancing Spatial Resolution of Remotely Sensed Imagery Using Deep Learning
NASA Astrophysics Data System (ADS)
Beck, J. M.; Bridges, S.; Collins, C.; Rushing, J.; Graves, S. J.
2017-12-01
Researchers at the Information Technology and Systems Center at the University of Alabama in Huntsville are using Deep Learning with Convolutional Neural Networks (CNNs) to develop a method for enhancing the spatial resolutions of moderate resolution (10-60m) multispectral satellite imagery. This enhancement will effectively match the resolutions of imagery from multiple sensors to provide increased global temporal-spatial coverage for a variety of Earth science products. Our research is centered on using Deep Learning for automatically generating transformations for increasing the spatial resolution of remotely sensed images with different spatial, spectral, and temporal resolutions. One of the most important steps in using images from multiple sensors is to transform the different image layers into the same spatial resolution, preferably the highest spatial resolution, without compromising the spectral information. Recent advances in Deep Learning have shown that CNNs can be used to effectively and efficiently upscale or enhance the spatial resolution of multispectral images with the use of an auxiliary data source such as a high spatial resolution panchromatic image. In contrast, we are using both the spatial and spectral details inherent in low spatial resolution multispectral images for image enhancement without the use of a panchromatic image. This presentation will discuss how this technology will benefit many Earth Science applications that use remotely sensed images with moderate spatial resolutions.
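As a hedged illustration of the kind of CNN-based spatial-resolution enhancement described above, the sketch below trains a small SRCNN-style network in PyTorch to map an upsampled low-resolution multispectral patch to its high-resolution counterpart without a panchromatic band. The architecture, band count, and training data are illustrative assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

class MultispectralSRNet(nn.Module):
    """Minimal SRCNN-style network: maps an upsampled low-resolution
    multispectral patch to a higher-resolution patch with the same bands.
    Architecture and hyperparameters are illustrative, not the authors'."""

    def __init__(self, n_bands=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_bands, 64, kernel_size=9, padding=4),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, n_bands, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    model = MultispectralSRNet(n_bands=4)
    optim = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    # Synthetic training pair: bicubic-upsampled low-res input vs. high-res target.
    hi = torch.rand(8, 4, 64, 64)
    lo = nn.functional.interpolate(
        nn.functional.interpolate(hi, scale_factor=0.25, mode="bicubic"),
        size=hi.shape[-2:], mode="bicubic")

    for step in range(5):                       # a few illustrative steps
        optim.zero_grad()
        loss = loss_fn(model(lo), hi)
        loss.backward()
        optim.step()
        print(f"step {step}: loss {loss.item():.4f}")
```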
NASA Astrophysics Data System (ADS)
Maimaitijiang, Maitiniyazi; Ghulam, Abduwasit; Sidike, Paheding; Hartling, Sean; Maimaitiyiming, Matthew; Peterson, Kyle; Shavers, Ethan; Fishman, Jack; Peterson, Jim; Kadam, Suhas; Burken, Joel; Fritschi, Felix
2017-12-01
Estimating crop biophysical and biochemical parameters with high accuracy at low cost is imperative for high-throughput phenotyping in precision agriculture. Although fusion of data from multiple sensors is a common application in remote sensing, less is known about the contribution of low-cost RGB, multispectral, and thermal sensors to rapid crop phenotyping. This is due to the fact that (1) simultaneous collection of multi-sensor data from satellites is rare and (2) multi-sensor data collected during a single flight have not been accessible until recent developments in Unmanned Aerial Systems (UASs) and UAS-friendly sensors that allow efficient information fusion. The objective of this study was to evaluate the power of high-spatial-resolution RGB, multispectral, and thermal data fusion to estimate soybean (Glycine max) biochemical parameters, including chlorophyll content and nitrogen concentration, and biophysical parameters, including Leaf Area Index (LAI) and above-ground fresh and dry biomass. Multiple low-cost sensors integrated on UASs were used to collect RGB, multispectral, and thermal images throughout the growing season at a site established near Columbia, Missouri, USA. From these images, vegetation indices were extracted, a Crop Surface Model (CSM) was generated, and a model to extract the vegetation fraction was developed. Spectral indices/features were then combined to model and predict crop biophysical and biochemical parameters using Partial Least Squares Regression (PLSR), Support Vector Regression (SVR), and Extreme Learning Machine based Regression (ELR) techniques. The results showed that: (1) for biochemical variable estimation, multispectral and thermal data fusion provided the best estimates of nitrogen concentration and chlorophyll (Chl) a content (RMSE of 9.9% and 17.1%, respectively), while fusion of RGB color-based indices and multispectral data exhibited the largest RMSE (22.6%); the highest accuracy for Chl a + b content estimation was obtained by fusing information from all three sensors, with an RMSE of 11.6%. (2) Among the plant biophysical variables, LAI was best predicted by RGB and thermal data fusion, while multispectral and thermal data fusion was found to be best for biomass estimation. (3) For estimation of the above-mentioned soybean traits from multi-sensor data fusion, ELR yielded promising results compared to PLSR and SVR in this study. This research indicates that fusion of low-cost multi-sensor data within a machine learning framework can provide relatively accurate estimates of plant traits and valuable insight for high-spatial-precision agriculture and plant stress assessment.
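One of the regression techniques named above, PLSR, can be sketched on a stand-in fused feature table as follows; the feature construction, component count, and synthetic trait values are assumptions for illustration and do not reproduce the study's models.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Stand-in fused feature table: vegetation indices from RGB, multispectral and
# thermal imagery (columns) for a set of plots (rows), plus a synthetic trait.
rng = np.random.default_rng(5)
n_plots, n_features = 120, 12
X = rng.normal(size=(n_plots, n_features))          # fused spectral/thermal features
y = 3.0 + X[:, :3] @ np.array([0.8, -0.4, 0.3]) \
    + 0.1 * rng.normal(size=n_plots)                # LAI-like synthetic trait

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=4)                 # component count is illustrative
pls.fit(X_tr, y_tr)
pred = pls.predict(X_te).ravel()

rmse = np.sqrt(mean_squared_error(y_te, pred))
print(f"PLSR RMSE on held-out plots: {rmse:.3f}")
print(f"relative RMSE: {100 * rmse / y_te.mean():.1f}%")
```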
Spectral imaging perspective on cytomics.
Levenson, Richard M
2006-07-01
Cytomics involves the analysis of cellular morphology and molecular phenotypes, with reference to tissue architecture and to additional metadata. To this end, a variety of imaging and nonimaging technologies need to be integrated. Spectral imaging is proposed as a tool that can simplify and enrich the extraction of morphological and molecular information. Simple-to-use instrumentation is available that mounts on standard microscopes and can generate spectral image datasets with excellent spatial and spectral resolution; these can be exploited by sophisticated analysis tools. This report focuses on brightfield microscopy-based approaches. Cytological and histological samples were stained using nonspecific standard stains (Giemsa; hematoxylin and eosin (H&E)) or immunohistochemical (IHC) techniques employing three chromogens plus a hematoxylin counterstain. The samples were imaged using the Nuance system, a commercially available, liquid-crystal tunable-filter-based multispectral imaging platform. The resulting data sets were analyzed using spectral unmixing algorithms and/or learn-by-example classification tools. Spectral unmixing of Giemsa-stained guinea-pig blood films readily classified the major blood elements. Machine-learning classifiers were also successful at the same task, as well as in distinguishing normal from malignant regions in a colon-cancer example and in delineating regions of inflammation in an H&E-stained kidney sample. In an example of a multiplexed IHC sample, brown, red, and blue chromogens were isolated into separate images without crosstalk or interference from the (also blue) hematoxylin counterstain. Cytomics requires both accurate architectural segmentation and multiplexed molecular imaging to associate molecular phenotypes with relevant cellular and tissue compartments. Multispectral imaging can assist in both these tasks and conveys new utility to brightfield-based microscopy approaches. Copyright 2006 International Society for Analytical Cytology.
Multispectral laser-induced fluorescence imaging system for large biological samples
NASA Astrophysics Data System (ADS)
Kim, Moon S.; Lefcourt, Alan M.; Chen, Yud-Ren
2003-07-01
A laser-induced fluorescence imaging system developed to capture multispectral fluorescence emission images simultaneously from a relatively large target object is described. With an expanded, 355-nm Nd:YAG laser as the excitation source, the system captures fluorescence emission images in the blue, green, red, and far-red regions of the spectrum centered at 450, 550, 678, and 730 nm, respectively, from a 30-cm-diameter target area in ambient light. Images of apples and of pork meat artificially contaminated with diluted animal feces have demonstrated the versatility of fluorescence imaging techniques for potential applications in food safety inspection. Regions of contamination, including sites that were not readily visible to the human eye, could easily be identified from the images.
NASA Astrophysics Data System (ADS)
Sahoo, Sujit Kumar; Tang, Dongliang; Dang, Cuong
2018-02-01
Large field-of-view multispectral imaging through a scattering medium is a fundamental quest in the optics community and has gained special attention from researchers in recent years for its wide range of potential applications. However, the main bottlenecks of current imaging systems are the requirements for specific illumination, poor image quality, and limited field of view. In this work, we demonstrate single-shot, high-resolution colour imaging through scattering media using a monochromatic camera. This novel imaging technique is enabled by the spatial and spectral decorrelation properties and the optical memory effect of the scattering media. Moreover, the use of deconvolution image processing further removes the above-mentioned drawbacks that arise from iterative refocusing, scanning, or phase retrieval procedures.
NASA Astrophysics Data System (ADS)
Navratil, Peter; Wilps, Hans
2013-01-01
Three different object-based image classification techniques are applied to high-resolution satellite data for the mapping of the habitats of Asian migratory locust (Locusta migratoria migratoria) in the southern Aral Sea basin, Uzbekistan. A set of panchromatic and multispectral Système Pour l'Observation de la Terre-5 satellite images was spectrally enhanced by normalized difference vegetation index and tasseled cap transformation and segmented into image objects, which were then classified by three different classification approaches: a rule-based hierarchical fuzzy threshold (HFT) classification method was compared to a supervised nearest neighbor classifier and classification tree analysis by the quick, unbiased, efficient statistical trees algorithm. Special emphasis was laid on the discrimination of locust feeding and breeding habitats due to the significance of this discrimination for practical locust control. Field data on vegetation and land cover, collected at the time of satellite image acquisition, was used to evaluate classification accuracy. The results show that a robust HFT classifier outperformed the two automated procedures by 13% overall accuracy. The classification method allowed a reliable discrimination of locust feeding and breeding habitats, which is of significant importance for the application of the resulting data for an economically and environmentally sound control of locust pests because exact spatial knowledge on the habitat types allows a more effective surveying and use of pesticides.
Simple models for complex natural surfaces - A strategy for the hyperspectral era of remote sensing
NASA Technical Reports Server (NTRS)
Adams, John B.; Smith, Milton O.; Gillespie, Alan R.
1989-01-01
A two-step strategy for analyzing multispectral images is described. In the first step, the analyst decomposes the signal from each pixel (as expressed by the radiance or reflectance values in each channel) into components that are contributed by spectrally distinct materials on the ground, and those that are due to atmospheric effects, instrumental effects, and other factors such as illumination. In the second step, the isolated signals from the materials on the ground are selectively edited and recombined to form various unit maps that are interpretable within the framework of field units. The approach has been tested on multispectral images of a variety of natural land surfaces ranging from hyperarid deserts to tropical rain forests. Data were analyzed from the Landsat MSS (Multispectral Scanner) and TM (Thematic Mapper), the airborne NS001 TM simulator, the Viking Lander and Orbiter, AIS, and AVIRIS (Airborne Visible and Infrared Imaging Spectrometer).
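The first step described above, decomposing each pixel into contributions from spectrally distinct materials, is commonly implemented as a linear spectral mixture model solved per pixel; a minimal sketch with synthetic endmembers and non-negative least squares follows. The endmember spectra and noise level are illustrative, and the paper's editing/recombination step is not reproduced.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_image(cube, endmembers):
    """Linear spectral mixture analysis.

    cube       : (rows, cols, n_bands) radiance/reflectance image.
    endmembers : (n_bands, n_endmembers) spectra of the candidate materials.
    Returns fraction maps of shape (rows, cols, n_endmembers), solved per pixel
    with non-negative least squares.
    """
    rows, cols, n_bands = cube.shape
    pixels = cube.reshape(-1, n_bands)
    fractions = np.array([nnls(endmembers, p)[0] for p in pixels])
    return fractions.reshape(rows, cols, -1)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    n_bands, shape = 6, (32, 32)
    E = rng.uniform(0.1, 0.9, (n_bands, 3))          # synthetic endmember spectra
    true_f = rng.dirichlet(np.ones(3), size=shape)    # true fractions per pixel
    cube = true_f @ E.T + 0.01 * rng.normal(size=shape + (n_bands,))
    est = unmix_image(cube, E)
    print("mean absolute fraction error:", np.abs(est - true_f).mean())
```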
On-line object feature extraction for multispectral scene representation
NASA Technical Reports Server (NTRS)
Ghassemian, Hassan; Landgrebe, David
1988-01-01
A new on-line unsupervised object-feature extraction method is presented that reduces the complexity and costs associated with the analysis of multispectral image data and with data transmission, storage, archival, and distribution. The ambiguity in the object detection process can be reduced if the spatial dependencies that exist among adjacent pixels are intelligently incorporated into the decision-making process. A unity relation that must exist among the pixels of an object is defined. The Automatic Multispectral Image Compaction Algorithm (AMICA) uses the within-object pixel-feature gradient vector as valuable contextual information to construct the object's features, which preserve the class separability information within the data. For on-line object extraction, the path hypothesis and the basic mathematical tools for its realization are introduced in terms of a specific similarity measure and adjacency relation. AMICA is applied to several sets of real image data, and the performance and reliability of the features are evaluated.
NASA Technical Reports Server (NTRS)
1982-01-01
The Model II Multispectral Camera is an advanced aerial camera that provides optimum enhancement of a scene by recording the spectral signatures of ground objects only in narrow, preselected bands of the electromagnetic spectrum. Its photos have applications in such areas as agriculture, forestry, water pollution investigations, soil analysis, geologic exploration, water depth studies, and camouflage detection. The target scene is simultaneously photographed in four separate spectral bands. Using a multispectral viewer such as its Model 75, Spectral Data creates a color image from the black-and-white positives taken by the camera. With this optical image analysis unit, all four bands are superimposed in accurate registration and illuminated with combinations of blue, green, red, and white light. The best color combination for displaying the target object is selected and printed. Spectral Data Corporation produces several types of remote sensing equipment and also provides aerial survey, image processing and analysis, and a number of other remote sensing services.
Multichannel imager for littoral zone characterization
NASA Astrophysics Data System (ADS)
Podobna, Yuliya; Schoonmaker, Jon; Dirbas, Joe; Sofianos, James; Boucher, Cynthia; Gilbert, Gary
2010-04-01
This paper describes an approach to utilize a multi-channel, multi-spectral electro-optic (EO) system for littoral zone characterization. Advanced Coherent Technologies, LLC (ACT) presents their EO sensor systems for the surf zone environmental assessment and potential surf zone target detection. Specifically, an approach is presented to determine a Surf Zone Index (SZI) from the multi-spectral EO sensor system. SZI provides a single quantitative value of the surf zone conditions delivering an immediate understanding of the area and an assessment as to how well an airborne optical system might perform in a mine countermeasures (MCM) operation. Utilizing consecutive frames of SZI images, ACT is able to measure variability over time. A surf zone nomograph, which incorporates targets, sensor, and environmental data, including the SZI to determine the environmental impact on system performance, is reviewed in this work. ACT's electro-optical multi-channel, multi-spectral imaging system and test results are presented and discussed.
NASA Technical Reports Server (NTRS)
Pagnutti, Mary
2006-01-01
This viewgraph presentation reviews the creation of a prototype algorithm for atmospheric correction using high spatial resolution earth observing imaging systems. The objective of the work was to evaluate accuracy of a prototype algorithm that uses satellite-derived atmospheric products to generate scene reflectance maps for high spatial resolution (HSR) systems. This presentation focused on preliminary results of only the satellite-based atmospheric correction algorithm.
A PC-based multispectral scanner data evaluation workstation: Application to Daedalus scanners
NASA Technical Reports Server (NTRS)
Jedlovec, Gary J.; James, Mark W.; Smith, Matthew R.; Atkinson, Robert J.
1991-01-01
In late 1989, a personal computer (PC)-based data evaluation workstation was developed to support post flight processing of Multispectral Atmospheric Mapping Sensor (MAMS) data. The MAMS Quick View System (QVS) is an image analysis and display system designed to provide the capability to evaluate Daedalus scanner data immediately after an aircraft flight. Even in its original form, the QVS offered the portability of a personal computer with the advanced analysis and display features of a mainframe image analysis system. It was recognized, however, that the original QVS had its limitations, both in speed and processing of MAMS data. Recent efforts are presented that focus on overcoming earlier limitations and adapting the system to a new data tape structure. In doing so, the enhanced Quick View System (QVS2) will accommodate data from any of the four spectrometers used with the Daedalus scanner on the NASA ER2 platform. The QVS2 is designed around the AST 486/33 MHz CPU personal computer and comes with 10 EISA expansion slots, keyboard, and 4.0 mbytes of memory. Specialized PC-McIDAS software provides the main image analysis and display capability for the system. Image analysis and display of the digital scanner data is accomplished with PC-McIDAS software.
The precision-processing subsystem for the Earth Resources Technology Satellite.
NASA Technical Reports Server (NTRS)
Chapelle, W. E.; Bybee, J. E.; Bedross, G. M.
1972-01-01
Description of the precision processor, a subsystem in the image-processing system for the Earth Resources Technology Satellite (ERTS). This processor is a special-purpose image-measurement and printing system, designed to process user-selected bulk images to produce 1:1,000,000-scale film outputs and digital image data, presented in a Universal-Transverse-Mercator (UTM) projection. The system will remove geometric and radiometric errors introduced by the ERTS multispectral sensors and by the bulk-processor electron-beam recorder. The geometric transformations required for each input scene are determined by resection computations based on reseau measurements and image comparisons with a special ground-control base contained within the system; the images are then printed and digitized by electronic image-transfer techniques.
Techniques for automatic large scale change analysis of temporal multispectral imagery
NASA Astrophysics Data System (ADS)
Mercovich, Ryan A.
Change detection in remotely sensed imagery is a multi-faceted problem with a wide variety of desired solutions. Automatic change detection and analysis to assist in the coverage of large areas at high resolution is a popular area of research in the remote sensing community. Beyond basic change detection, the analysis of change is essential to provide results that positively impact an image analyst's job when examining potentially changed areas. Present change detection algorithms are geared toward low resolution imagery, and require analyst input to provide anything more than a simple pixel level map of the magnitude of change that has occurred. One major problem with this approach is that change occurs in such large volume at small spatial scales that a simple change map is no longer useful. This research strives to create an algorithm based on a set of metrics that performs a large area search for change in high resolution multispectral image sequences and utilizes a variety of methods to identify different types of change. Rather than simply mapping the magnitude of any change in the scene, the goal of this research is to create a useful display of the different types of change in the image. The techniques presented in this dissertation are used to interpret large area images and provide useful information to an analyst about small regions that have undergone specific types of change while retaining image context to make further manual interpretation easier. This analyst cueing to reduce information overload in a large area search environment will have an impact in the areas of disaster recovery, search and rescue situations, and land use surveys among others. By utilizing a feature based approach founded on applying existing statistical methods and new and existing topological methods to high resolution temporal multispectral imagery, a novel change detection methodology is produced that can automatically provide useful information about the change occurring in large area and high resolution image sequences. The change detection and analysis algorithm developed could be adapted to many potential image change scenarios to perform automatic large scale analysis of change.
Kahle, A.B.; Rowan, L.C.
1980-01-01
Six channels of multispectral middle-infrared (8 to 14 micrometre) aircraft scanner data were acquired over the East Tintic mining district, Utah. The digital image data were computer processed to create a color-composite image based on principal component transformations. When combined with a visible and near-infrared color-composite image from a previous flight, and with limited field checking, it is possible to discriminate quartzite, carbonate rocks, quartz latitic and quartz monzonitic rocks, latitic and monzonitic rocks, silicified altered rocks, argillized altered rocks, and vegetation. -from Authors
Snow Cover Mapping and Ice Avalanche Monitoring from the Satellite Data of the Sentinels
NASA Astrophysics Data System (ADS)
Wang, S.; Yang, B.; Zhou, Y.; Wang, F.; Zhang, R.; Zhao, Q.
2018-04-01
In order to monitor ice avalanches efficiently under disaster emergency conditions, a snow cover mapping method based on the satellite data of the Sentinels is proposed, in which the coherence and backscattering coefficient image of Synthetic Aperture Radar (SAR) data (Sentinel-1) is combined with the atmospheric correction result of multispectral data (Sentinel-2). The coherence image of the Sentinel-1 data could be segmented by a certain threshold to map snow cover, with the water bodies extracted from the backscattering coefficient image and removed from the coherence segment result. A snow confidence map from Sentinel-2 was used to map the snow cover, in which the confidence values of the snow cover were relatively high. The method can make full use of the acquired SAR image and multispectral image under emergency conditions, and the application potential of Sentinel data in the field of snow cover mapping is exploited. The monitoring frequency can be ensured because the areas obscured by thick clouds are remedied in the monitoring results. The Kappa coefficient of the monitoring results is 0.946, and the data processing time is less than 2 h, which meet the requirements of disaster emergency monitoring.
Preliminary PCA/TT Results on MRO CRISM Multispectral Images
NASA Astrophysics Data System (ADS)
Klassen, David R.; Smith, M. D.
2010-10-01
Mars Reconnaissance Orbiter arrived at Mars in March 2006 and by September had achieved its science-phase orbit, with the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) beginning its visible to near-infrared (VIS/NIR) spectral imaging shortly thereafter. One goal of CRISM is to fill in the spatial gaps between the various targeted observations, eventually mapping the entire surface. Due to the large volume of data this would create, the instrument works in a reduced spectral sampling mode, creating "multispectral" images. From these data we can create image cubes using 64 wavelengths from 0.410 to 3.923 µm. We present here our analysis of these multispectral-mode data products using Principal Components Analysis (PCA) and Target Transformation (TT) [1]. Previous work with ground-based images [2-5] has shown that over an entire visible hemisphere there are only three to four meaningful components using 32-105 wavelengths over 1.5-4.1 µm; the first two are consistent over all temporal scales. The TT-retrieved spectral endmembers show nearly the same level of consistency [5]. Preliminary work on the CRISM image cubes implies similar results: three to four significant principal components that are fairly consistent over time. These components are then used in TT to find spectral endmembers, which can be used to characterize the surface reflectance for future use in radiative-transfer cloud optical depth retrievals. We present here the PCA/TT results comparing the principal components and recovered endmembers from six reconstructed CRISM multi-spectral image cubes. References: [1] Bandfield, J. L., et al. (2000) JGR, 105, 9573. [2] Klassen, D. R. and Bell III, J. F. (2001) BAAS 33, 1069. [3] Klassen, D. R. and Bell III, J. F. (2003) BAAS, 35, 936. [4] Klassen, D. R., Wark, T. J., Cugliotta, C. G. (2005) BAAS, 37, 693. [5] Klassen, D. R. (2009) Icarus, 204, 32.
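A minimal sketch of the PCA and target-transformation steps, assuming PCA is applied to the reshaped image cube and a candidate endmember spectrum is tested by fitting it with the retained principal components and inspecting the residual. The cube, band count, and candidate spectrum here are synthetic stand-ins.

```python
import numpy as np

def pca_components(cube, n_components=4):
    """PCA of a spectral image cube of shape (rows, cols, n_bands)."""
    pixels = cube.reshape(-1, cube.shape[-1]).astype(float)
    mean = pixels.mean(axis=0)
    _, s, Vt = np.linalg.svd(pixels - mean, full_matrices=False)
    return mean, Vt[:n_components], s[:n_components]

def target_transform(candidate, mean, components):
    """Fit a candidate endmember spectrum with the retained PCs and report
    the fit and its RMS residual (a small residual suggests a plausible endmember)."""
    coeffs, *_ = np.linalg.lstsq(components.T, candidate - mean, rcond=None)
    fit = mean + coeffs @ components
    rms = np.sqrt(np.mean((candidate - fit) ** 2))
    return fit, rms

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    n_bands = 64                                     # e.g. 64 multispectral wavelengths
    cube = rng.uniform(0.1, 0.4, (50, 50, n_bands))  # stand-in reflectance cube
    mean, pcs, sv = pca_components(cube, n_components=4)
    candidate = rng.uniform(0.1, 0.4, n_bands)       # stand-in laboratory spectrum
    _, rms = target_transform(candidate, mean, pcs)
    print("leading singular values:", sv, "candidate RMS residual:", rms)
```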
Hyperspectral discrimination of camouflaged target
NASA Astrophysics Data System (ADS)
Bárta, Vojtěch; Racek, František
2017-10-01
The article deals with the detection of camouflaged objects during the winter season. Winter camouflage is a marginal concern in most countries due to the short duration of snow cover; in the geographical conditions of Central Europe, the winter period with snow occurs for less than 1/12 of the year. The LWIR and SWIR spectral regions are commonly used for the detection of camouflaged objects, since differences in chemical composition and temperature are expressed as spectral features in those regions. However, LWIR and SWIR devices are demanding to deploy because of their large dimensions and expense. Therefore, the article estimates the utility of the VIS region for detecting camouflaged objects against a snow background. The multispectral image output for various spectral filters is simulated, and hyperspectral indices are determined to detect camouflaged objects in winter. The multispectral image simulation is based on a hyperspectral datacube obtained under real conditions.
Miniature spectrometer and multispectral imager as a potential diagnostic aid in dermatology
NASA Astrophysics Data System (ADS)
Zeng, Haishan; MacAulay, Calum E.; McLean, David I.; Lui, Harvey; Palcic, Branko
1995-04-01
A miniature spectrometer system has been constructed for both reflectance and autofluorescence spectral measurements of skin. The system is based on a PC plug-in spectrometer and is therefore compact and easy to operate. The spectrometer has been used clinically to collect spectral data from various skin lesions, including skin cancer. To date, 48 patients with a total of 71 diseased skin sites have been measured. Analysis of these preliminary data suggests that unique spectral characteristics exist for certain types of skin lesions, such as seborrheic keratosis and psoriasis. These spectral characteristics will aid differential diagnosis in dermatology practice. In conjunction with the spectral point measurements, we are building and testing a multispectral imaging system to measure the spatial distribution of skin reflectance and autofluorescence. Preliminary results indicate that a cutaneous squamous cell carcinoma has a weak autofluorescence signal at the edge of the lesion, but a higher autofluorescence signal in the central area.
Multispectral image sharpening using wavelet transform techniques and spatial correlation of edges
Lemeshewsky, George P.; Schowengerdt, Robert A.
2000-01-01
Several reported image fusion or sharpening techniques are based on the discrete wavelet transform (DWT). The technique described here uses a pixel-based maximum selection rule to combine respective transform coefficients of lower spatial resolution near-infrared (NIR) and higher spatial resolution panchromatic (pan) imagery to produce a sharpened NIR image. Sharpening assumes a radiometric correlation between the spectral band images. However, there can be poor correlation, including edge contrast reversals (e.g., at soil-vegetation boundaries), between the fused images and, consequently, degraded performance. To improve sharpening, a local area-based correlation technique originally reported for edge comparison with image pyramid fusion is modified for application with the DWT process. Further improvements are obtained by using redundant, shift-invariant implementation of the DWT. Example images demonstrate the improvements in NIR image sharpening with higher resolution pan imagery.
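The baseline scheme described above, combining DWT detail coefficients with a pixel-based maximum-selection rule while keeping the NIR approximation, can be sketched with PyWavelets as follows; the local-correlation refinement and the shift-invariant transform mentioned in the abstract are not reproduced here.

```python
import numpy as np
import pywt

def dwt_sharpen(nir, pan, wavelet="db2", level=2):
    """Sharpen a (pre-resampled) NIR band with a panchromatic image using a
    pixel-based maximum-selection rule on DWT detail coefficients.

    Both inputs must share the same shape.  The approximation band is taken
    from the NIR image to preserve its radiometry; detail coefficients come
    from whichever image has the larger magnitude at each position.
    """
    c_nir = pywt.wavedec2(nir, wavelet, level=level)
    c_pan = pywt.wavedec2(pan, wavelet, level=level)

    fused = [c_nir[0]]                               # keep the NIR approximation
    for (nh, nv, nd), (ph, pv, pd) in zip(c_nir[1:], c_pan[1:]):
        fused.append(tuple(
            np.where(np.abs(n) >= np.abs(p), n, p)
            for n, p in ((nh, ph), (nv, pv), (nd, pd))))
    return pywt.waverec2(fused, wavelet)

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    nir = rng.uniform(0, 1, (128, 128))              # stand-in resampled NIR band
    pan = rng.uniform(0, 1, (128, 128))              # stand-in panchromatic image
    sharpened = dwt_sharpen(nir, pan)
    print("output shape:", sharpened.shape)
```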
Radiometric characterization of hyperspectral imagers using multispectral sensors
NASA Astrophysics Data System (ADS)
McCorkel, Joel; Thome, Kurt; Leisso, Nathan; Anderson, Nikolaus; Czapla-Myers, Jeff
2009-08-01
The Remote Sensing Group (RSG) at the University of Arizona has a long history of using ground-based test sites for the calibration of airborne and satellite based sensors. Often, ground-truth measurements at these tests sites are not always successful due to weather and funding availability. Therefore, RSG has also employed automated ground instrument approaches and cross-calibration methods to verify the radiometric calibration of a sensor. The goal in the cross-calibration method is to transfer the calibration of a well-known sensor to that of a different sensor. This work studies the feasibility of determining the radiometric calibration of a hyperspectral imager using multispectral imagery. The work relies on the Moderate Resolution Imaging Spectroradiometer (MODIS) as a reference for the hyperspectral sensor Hyperion. Test sites used for comparisons are Railroad Valley in Nevada and a portion of the Libyan Desert in North Africa. Hyperion bands are compared to MODIS by band averaging Hyperion's high spectral resolution data with the relative spectral response of MODIS. The results compare cross-calibration scenarios that differ in image acquisition coincidence, test site used for the calibration, and reference sensor. Cross-calibration results are presented that show agreement between the use of coincident and non-coincident image pairs within 2% in most bands as well as similar agreement between results that employ the different MODIS sensors as a reference.
Radiometric Characterization of Hyperspectral Imagers using Multispectral Sensors
NASA Technical Reports Server (NTRS)
McCorkel, Joel; Kurt, Thome; Leisso, Nathan; Anderson, Nikolaus; Czapla-Myers, Jeff
2009-01-01
The Remote Sensing Group (RSG) at the University of Arizona has a long history of using ground-based test sites for the calibration of airborne and satellite based sensors. Often, ground-truth measurements at these test sites are not always successful due to weather and funding availability. Therefore, RSG has also employed automated ground instrument approaches and cross-calibration methods to verify the radiometric calibration of a sensor. The goal in the cross-calibration method is to transfer the calibration of a well-known sensor to that of a different sensor. This work studies the feasibility of determining the radiometric calibration of a hyperspectral imager using multispectral imagery. The work relies on the Moderate Resolution Imaging Spectroradiometer (MODIS) as a reference for the hyperspectral sensor Hyperion. Test sites used for comparisons are Railroad Valley in Nevada and a portion of the Libyan Desert in North Africa. Hyperion bands are compared to MODIS by band averaging Hyperion's high spectral resolution data with the relative spectral response of MODIS. The results compare cross-calibration scenarios that differ in image acquisition coincidence, test site used for the calibration, and reference sensor. Cross-calibration results are presented that show agreement between the use of coincident and non-coincident image pairs within 2% in most bands as well as similar agreement between results that employ the different MODIS sensors as a reference.
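The band-averaging step common to both records above, convolving Hyperion's high-spectral-resolution data with a MODIS band's relative spectral response, amounts to an RSR-weighted integral; a sketch with synthetic spectra follows, where the wavelength grids and the Gaussian RSR are illustrative assumptions.

```python
import numpy as np

def band_average(hyp_wavelengths, hyp_radiance, rsr_wavelengths, rsr):
    """Band-average a hyperspectral spectrum with a multispectral band's
    relative spectral response (RSR):
        L_band = integral(L(lambda) * RSR(lambda)) / integral(RSR(lambda))

    The RSR is first interpolated onto the hyperspectral wavelength grid.
    """
    rsr_on_hyp = np.interp(hyp_wavelengths, rsr_wavelengths, rsr, left=0.0, right=0.0)
    return (np.trapz(hyp_radiance * rsr_on_hyp, hyp_wavelengths)
            / np.trapz(rsr_on_hyp, hyp_wavelengths))

if __name__ == "__main__":
    # Stand-in Hyperion-like spectrum (10 nm sampling) and a Gaussian RSR
    # roughly centred on a red multispectral band; values are illustrative only.
    wl = np.arange(400.0, 1000.0, 10.0)
    radiance = 80.0 + 20.0 * np.sin(wl / 100.0)
    rsr_wl = np.arange(600.0, 700.0, 1.0)
    rsr = np.exp(-0.5 * ((rsr_wl - 645.0) / 15.0) ** 2)
    print("band-averaged radiance:", band_average(wl, radiance, rsr_wl, rsr))
```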
Snapshot spectral and polarimetric imaging; target identification with multispectral video
NASA Astrophysics Data System (ADS)
Bartlett, Brent D.; Rodriguez, Mikel D.
2013-05-01
As the number of pixels continue to grow in consumer and scientific imaging devices, it has become feasible to collect the incident light field. In this paper, an imaging device developed around light field imaging is used to collect multispectral and polarimetric imagery in a snapshot fashion. The sensor is described and a video data set is shown highlighting the advantage of snapshot spectral imaging. Several novel computer vision approaches are applied to the video cubes to perform scene characterization and target identification. It is shown how the addition of spectral and polarimetric data to the video stream allows for multi-target identification and tracking not possible with traditional RGB video collection.
NASA Astrophysics Data System (ADS)
Thompson, Nicholas Allan
2013-06-01
With recent developments in multispectral detector technology, the interest in common aperture, common focal plane multispectral imaging systems is increasing. Such systems are particularly desirable for military applications, where increased levels of target discrimination and identification are required in cost-effective, rugged, lightweight systems. During the optical design of dual waveband or multispectral systems, the options for material selection are limited. This selection becomes even more restrictive for military applications, where material resilience, thermal properties, and color correction must be considered. We discuss the design challenges that lightweight multispectral common aperture systems present, along with some potential design solutions. Consideration is given to material selection for optimum color correction, as well as material resilience and thermal correction. This discussion is supported using design examples currently in development at Qioptiq.
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Fuell, Kevin K.; Knaff, John; Lee, Thomas
2012-01-01
Current and future satellite sensors provide remotely sensed quantities from a variety of wavelengths ranging from the visible to the passive microwave, from both geostationary and low-Earth orbits. The NASA Short-term Prediction Research and Transition (SPoRT) Center has a long history of providing multispectral imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard NASA's Terra and Aqua satellites in support of NWS forecast office activities. Products from MODIS have recently been extended to include a broader suite of multispectral imagery similar to those developed by EUMETSAT, based upon the spectral channels available from the Spinning Enhanced Visible and InfraRed Imager (SEVIRI) aboard METEOSAT-9. This broader suite includes products that discriminate between air mass types associated with synoptic-scale features, assists in the identification of dust, and improves upon paired channel difference detection of fog and low cloud events. Similarly, researchers at NOAA/NESDIS and CIRA have developed air mass discrimination capabilities using channels available from the current GOES Sounders. Other applications of multispectral composites include combinations of high- and low-frequency, horizontally and vertically polarized passive microwave brightness temperatures to discriminate tropical cyclone structures and other synoptic-scale features. Many of these capabilities have been transitioned for evaluation and operational use at NWS Weather Forecast Offices and National Centers through collaborations with SPoRT and CIRA. Future instruments will continue the availability of these products and also expand upon current capabilities. The Advanced Baseline Imager (ABI) on GOES-R will improve the spectral, spatial, and temporal resolution of our current geostationary capabilities, and the recently launched Suomi National Polar-orbiting Partnership (S-NPP) carries instruments such as the Visible Infrared Imager Radiometer Suite (VIIRS), the Cross-track Infrared Sounder (CrIS), and the Advanced Technology Microwave Sounder (ATMS), which have unrivaled spectral and spatial resolution, as precursors to the JPSS era (i.e., the next generation of polar orbiting satellites). At the same time, new image manipulation and display capabilities are available within AWIPS II, the next generation of the NWS forecaster decision support system. This presentation will review SPoRT, CIRA, and NRL collaborations regarding multispectral satellite imagery and articulate an integrated and collaborative path forward with Raytheon AWIPS II development staff for integrating current and future capabilities that support new satellite instrumentation and the AWIPS II decision support system.
The Beagle 2 Stereo Camera System: Scientific Objectives and Design Characteristics
NASA Astrophysics Data System (ADS)
Griffiths, A.; Coates, A.; Josset, J.; Paar, G.; Sims, M.
2003-04-01
The Stereo Camera System (SCS) will provide wide-angle (48 degree) multi-spectral stereo imaging of the Beagle 2 landing site in Isidis Planitia with an angular resolution of 0.75 milliradians. Based on the SpaceX Modular Micro-Imager, the SCS is composed of twin cameras (each with a 1024 by 1024 pixel frame-transfer CCD) and twin filter wheel units (with a combined total of 24 filters). The primary mission objective is to construct a digital elevation model of the area within reach of the lander's robot arm. The SCS specifications and the following baseline studies are described: panoramic RGB colour imaging of the landing site and panoramic multi-spectral imaging at 12 distinct wavelengths to study the mineralogy of the landing site; solar observations to measure water vapour absorption and the atmospheric dust optical density. Also envisaged are multi-spectral observations of Phobos & Deimos (observations of the moons relative to background stars will be used to determine the lander's location and orientation relative to the Martian surface), monitoring of the landing site to detect temporal changes, observation of the actions and effects of the other PAW experiments (including rock texture studies with a close-up lens), and collaborative observations with the Mars Express orbiter instrument teams. Due to be launched in May of this year, the total system mass is 360 g, the required volume envelope is 747 cm^3, and the average power consumption is 1.8 W. A 10 Mbit/s RS422 bus connects each camera to the lander common electronics.