Spectral compression algorithms for the analysis of very large multivariate images
Keenan, Michael R.
2007-10-16
A method for spectrally compressing data sets enables the efficient analysis of very large multivariate images. The spectral compression algorithm uses a factored representation of the data that can be obtained from Principal Components Analysis or another factorization technique. Furthermore, a block algorithm can be used to perform common operations more efficiently. An image analysis can then be performed on the factored representation of the data, using only the most significant factors. The spectral compression algorithm can be combined with a spatial compression algorithm for further computational efficiency.
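A minimal sketch of the factored-representation idea described above, with a truncated SVD standing in for the PCA step; the array shapes and function names are illustrative, not the patent's implementation.

```python
# Sketch: spectral compression of a multivariate image via a truncated
# factorization (SVD as a stand-in for PCA). Shapes/names are hypothetical.
import numpy as np

def compress_spectral(cube, n_factors):
    """cube: (rows, cols, channels) image; returns a factored representation."""
    rows, cols, chans = cube.shape
    X = cube.reshape(rows * cols, chans)            # pixels x channels
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    scores = U[:, :n_factors] * s[:n_factors]       # spatial factors
    loadings = Vt[:n_factors]                       # spectral factors
    return scores, loadings, mean

def reconstruct(scores, loadings, mean, shape):
    """Approximate the original cube from the significant factors only."""
    return (scores @ loadings + mean).reshape(shape)
```

Subsequent analyses then operate on the compact `scores` and `loadings` arrays rather than the full cube, which is where the memory and compute savings come from.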
Spectral mapping tools from the earth sciences applied to spectral microscopy data.
Harris, A Thomas
2006-08-01
Spectral imaging, originating from the field of earth remote sensing, is a powerful tool that is being increasingly used in a wide variety of applications for material identification. Several workers have used techniques like linear spectral unmixing (LSU) to discriminate materials in images derived from spectral microscopy. However, many spectral analysis algorithms rely on assumptions that are often violated in microscopy applications. This study explores algorithms originally developed as improvements on early earth imaging techniques that can be easily translated for use with spectral microscopy. To best demonstrate the application of earth remote sensing spectral analysis tools to spectral microscopy data, earth imaging software was used to analyze data acquired with a Leica confocal microscope with mechanical spectral scanning. For this study, spectral training signatures (often referred to as endmembers) were selected with the ENVI (ITT Visual Information Solutions, Boulder, CO) "spectral hourglass" processing flow, a series of tools that use the spectrally over-determined nature of hyperspectral data to find the most spectrally pure (or spectrally unique) pixels within the data set. This set of endmember signatures was then used in the full range of mapping algorithms available in ENVI to determine locations, and in some cases subpixel abundances, of endmembers. Mapping and abundance images showed broad agreement between the spectral analysis algorithms, supported through visual assessment of output classification images and through statistical analysis of the distribution of pixels within each endmember class. The powerful spectral analysis algorithms available in COTS software, the result of decades of research in earth imaging, are easily translated to new sources of spectral data. Although the scale of earth imagery and spectral microscopy is radically different, the problem is the same: mapping material locations and abundances based on unique spectral signatures.
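As a point of reference for the unmixing step mentioned above, a minimal linear-spectral-unmixing sketch with a nonnegativity constraint; the endmember matrix is assumed to come from an hourglass-style selection, and this is not the ENVI implementation.

```python
# Sketch of linear spectral unmixing (LSU): nonnegative least squares fit of
# known endmember signatures to each pixel. Illustrative only.
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(pixel, endmembers):
    """pixel: (bands,); endmembers: (bands, n_endmembers) signature matrix."""
    abundances, residual = nnls(endmembers, pixel)
    return abundances, residual
```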
An Analysis of Periodic Components in BL Lac Object S5 0716 +714 with MUSIC Method
NASA Astrophysics Data System (ADS)
Tang, J.
2012-01-01
Multiple signal classification (MUSIC) algorithms are introduced for estimating the variation period of BL Lac objects. The principle of the MUSIC spectral analysis method and a theoretical analysis of its frequency resolution using analog signals are included. From an extensive literature search, we collected effective observation data of the BL Lac object S5 0716+714 in the V, R, and I bands from 1994 to 2008. The light variation periods of S5 0716+714 are obtained by means of the MUSIC spectral analysis method and the periodogram spectral analysis method. Two major periods exist for all bands: (3.33±0.08) years and (1.24±0.01) years. The period estimate based on the MUSIC spectral analysis method is compared with that based on the periodogram spectral analysis method. MUSIC is a super-resolution algorithm that requires only a short data length, and it can be used to detect the variation periods of weak signals.
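A compact sketch of the MUSIC pseudospectrum computation may clarify the method; the uniform resampling of the light curve, the signal-subspace dimension p, and the frequency grid are all assumptions, not the paper's settings.

```python
# Sketch: MUSIC pseudospectrum for a uniformly resampled light curve.
import numpy as np
from scipy.linalg import eigh, hankel

def music_spectrum(x, p, freqs, fs):
    """x: samples; p: signal-subspace dimension; fs: sampling rate."""
    m = len(x) // 2
    X = hankel(x[:m], x[m - 1:])              # m x (n-m+1) data matrix
    R = (X @ X.T) / X.shape[1]                # autocorrelation estimate
    _, V = eigh(R)                            # eigenvalues in ascending order
    En = V[:, : m - p]                        # noise subspace
    pseudo = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        a = np.exp(2j * np.pi * f / fs * np.arange(m))   # steering vector
        pseudo[i] = 1.0 / np.linalg.norm(En.T @ a) ** 2  # peaks at signals
    return pseudo
```

Peaks of the pseudospectrum mark candidate periods; because the estimate relies on the noise-subspace geometry rather than raw bin spacing, resolution degrades more slowly with short records than a periodogram's.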
Software algorithm and hardware design for real-time implementation of new spectral estimator
2014-01-01
Background Real-time spectral analyzers can be difficult to implement for PC-based systems because of the potential for high computational cost and algorithm complexity. In this work a new spectral estimator (NSE) is developed for real-time analysis and compared with the discrete Fourier transform (DFT). Method Clinical data in the form of 216 fractionated atrial electrogram sequences were used as inputs. The sample rate for acquisition was 977 Hz, or approximately 1 millisecond between digital samples. Real-time NSE power spectra were generated for 16,384 consecutive data points. The same data sequences were used for spectral calculation with a radix-2 implementation of the DFT. The NSE algorithm was also developed for implementation as a real-time spectral analyzer electronic circuit board. Results The average interval for a single real-time spectral calculation in software was 3.29 μs for the NSE versus 504.5 μs for the DFT; thus, for real-time spectral analysis, the NSE algorithm is approximately 150× faster than the DFT. Over a 1 millisecond sampling period, the NSE algorithm had the capability to spectrally analyze a maximum of 303 data channels, while the DFT algorithm could analyze only a single channel. Moreover, for the 8 second sequences, the NSE spectral resolution in the 3-12 Hz range was 0.037 Hz, while the DFT spectral resolution was only 0.122 Hz. The NSE was also found to be implementable as a standalone spectral analyzer board using approximately 26 integrated circuits at a cost of approximately $500. The software files used for analysis are included as a supplement; see Additional files 1 and 2. Conclusions The NSE real-time algorithm has low computational cost and complexity, and it is implementable in both software and hardware for 1 millisecond updates of multichannel spectra. The algorithm may be helpful for guiding radiofrequency catheter ablation in real time. PMID:24886214
NASA Technical Reports Server (NTRS)
Smith, Michael D.; Bandfield, Joshua L.; Christensen, Philip R.
2000-01-01
We present two algorithms for the separation of spectral features caused by atmospheric and surface components in Thermal Emission Spectrometer (TES) data. One algorithm uses radiative transfer and successive least squares fitting to find spectral shapes first for atmospheric dust, then for water-ice aerosols, and finally for surface emissivity. A second, independent algorithm uses a combination of factor analysis, target transformation, and deconvolution to simultaneously find dust, water-ice, and surface emissivity spectral shapes. Both algorithms have been applied to TES spectra, and both find very similar atmospheric and surface spectral shapes. For TES spectra taken in nadir geometry during the aerobraking and science phasing periods, these two algorithms give meaningful and usable surface emissivity spectra that can be used for mineralogical identification.
Efficient geometric rectification techniques for spectral analysis algorithm
NASA Technical Reports Server (NTRS)
Chang, C. Y.; Pang, S. S.; Curlander, J. C.
1992-01-01
The spectral analysis algorithm is a viable technique for processing synthetic aperture radar (SAR) data at near-real-time throughput rates by trading off image resolution. One major challenge of the spectral analysis algorithm is that the output image, often referred to as the range-Doppler image, is represented on iso-range and iso-Doppler lines, a curved grid format. This phenomenon is known as the fan-shape effect. Therefore, resampling is required to convert the range-Doppler image into a rectangular grid format before the individual images can be overlaid to form seamless multi-look strip imagery. An efficient algorithm for geometric rectification of the range-Doppler image is presented. The proposed algorithm, realized in two one-dimensional resampling steps, takes into consideration the fan-shape phenomenon of the range-Doppler image as well as the high squint angle and updates of the cross-track and along-track Doppler parameters. No ground reference points are required.
Algorithms for Spectral Decomposition with Applications to Optical Plume Anomaly Detection
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Matthews, Bryan; Das, Santanu
2008-01-01
The analysis of spectral signals for features that represent physical phenomena is ubiquitous in the science and engineering communities. There are two main approaches that can be taken to extract relevant features from these high-dimensional data streams. The first set of approaches relies on extracting features using a physics-based paradigm, where the underlying physical mechanism that generates the spectra is used to infer the most important features in the data stream. We focus on a complementary methodology that uses a data-driven technique that is informed by the underlying physics but also has the ability to adapt to unmodeled system attributes and dynamics. We discuss the following four algorithms: the Spectral Decomposition Algorithm (SDA), Non-Negative Matrix Factorization (NMF), Independent Component Analysis (ICA), and Principal Components Analysis (PCA), and compare their performance on a spectral emulator that we use to generate artificial data with known statistical properties. This spectral emulator mimics the real-world phenomena arising from the plume of the space shuttle main engine, can be used to validate the results of various spectral decomposition algorithms, and is very useful in situations where real-world systems have very low probabilities of fault or failure. Our results indicate that methods like SDA and NMF provide a straightforward way of incorporating prior physical knowledge, while NMF with a tuning mechanism can give superior performance on some tests. We demonstrate the use of these algorithms for detecting potential system-health issues on data from a spectral emulator with tunable health parameters.
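For readers unfamiliar with the data-driven decompositions compared above, a short sketch using scikit-learn's NMF and PCA on synthetic nonnegative spectra; the data and parameters are stand-ins, and SDA itself is not shown.

```python
# Sketch: PCA and NMF applied to synthetic nonnegative "spectra", two of the
# four decompositions compared above; data and settings are stand-ins.
import numpy as np
from sklearn.decomposition import NMF, PCA

rng = np.random.default_rng(0)
spectra = rng.random((200, 64))          # 200 simulated emission spectra

nmf = NMF(n_components=4, init="nndsvd", max_iter=500)
W = nmf.fit_transform(spectra)           # per-spectrum activations
H = nmf.components_                      # nonnegative spectral basis

pca = PCA(n_components=4)
scores = pca.fit_transform(spectra)      # orthogonal factor scores
print(nmf.reconstruction_err_, pca.explained_variance_ratio_.sum())
```

The nonnegativity of the NMF basis is what makes prior physical knowledge (spectra and abundances cannot be negative) easy to incorporate, as the abstract notes.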
An Extended Spectral-Spatial Classification Approach for Hyperspectral Data
NASA Astrophysics Data System (ADS)
Akbari, D.
2017-11-01
In this paper, an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different methods of dimension reduction are first used to obtain the subspace of the hyperspectral data: (1) unsupervised feature extraction methods, including principal component analysis (PCA), independent component analysis (ICA), and minimum noise fraction (MNF); (2) supervised feature extraction, including decision boundary feature extraction (DBFE), discriminant analysis feature extraction (DAFE), and nonparametric weighted feature extraction (NWFE); and (3) a genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm. In the enhanced MSF algorithm, the markers are extracted from the classification maps obtained by both the SVM and watershed segmentation algorithms. The proposed approach is evaluated on the Pavia University hyperspectral data. Experimental results show that the proposed approach using GA achieves approximately 8% higher overall accuracy than the original MSF-based algorithm.
SMV⊥: Simplex of maximal volume based upon the Gram-Schmidt process
NASA Astrophysics Data System (ADS)
Salazar-Vazquez, Jairo; Mendez-Vazquez, Andres
2015-10-01
In recent years, different algorithms for Hyperspectral Image (HI) analysis have been introduced. The high spectral resolution of these images makes it possible to develop algorithms for target detection, material mapping, and material identification in applications such as agriculture, security and defense, and industry. Therefore, from the computer science point of view, there is a fertile field of research for improving and developing algorithms in HI analysis. In some applications, the spectral pixels of a HI can be classified using laboratory spectral signatures. Nevertheless, for many others, there is not enough prior information or available spectral signatures, making any analysis a difficult task. One of the most popular algorithms for HI analysis is the N-FINDR, because it is easy to understand and provides a way to unmix the original HI into the respective material compositions. However, the N-FINDR is computationally expensive, and its performance depends on a random initialization process. This paper proposes a novel idea to reduce the complexity of the N-FINDR by implementing a bottom-up approach based on an observation from linear algebra and the use of the Gram-Schmidt process. The Simplex of Maximal Volume Perpendicular (SMV⊥) algorithm is therefore proposed for fast endmember extraction in hyperspectral imagery. This novel algorithm has complexity O(n) with respect to the number of pixels. In addition, the evidence shows that SMV⊥ finds a larger simplex volume and has lower computational time complexity than other popular algorithms in synthetic and real scenarios.
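A rough sketch of the bottom-up, Gram-Schmidt-style selection the paper builds on: repeatedly pick the pixel with the largest component perpendicular to the span of the endmembers chosen so far. This illustrates the linear-algebra observation, not the authors' SMV⊥ code.

```python
# Sketch of a Gram-Schmidt-style endmember search: grow a simplex by taking
# the pixel with the largest residual perpendicular to the current span.
import numpy as np

def gs_endmembers(pixels, k):
    """pixels: (n_pixels, bands) array; returns indices of k candidates."""
    pixels = pixels.astype(float)
    idx = [int(np.argmax(np.linalg.norm(pixels, axis=1)))]
    basis = []
    for _ in range(k - 1):
        v = pixels[idx[-1]].copy()
        for b in basis:                        # orthogonalize against basis
            v -= (v @ b) * b
        basis.append(v / np.linalg.norm(v))
        resid = pixels.copy()
        for b in basis:                        # project out current span
            resid -= np.outer(resid @ b, b)
        idx.append(int(np.argmax(np.linalg.norm(resid, axis=1))))
    return idx
```

Each pass touches every pixel a constant number of times per basis vector, which is consistent with the O(n)-in-pixels complexity claimed above.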
Statistical analysis and machine learning algorithms for optical biopsy
NASA Astrophysics Data System (ADS)
Wu, Binlin; Liu, Cheng-hui; Boydston-White, Susie; Beckman, Hugh; Sriramoju, Vidyasagar; Sordillo, Laura; Zhang, Chunyuan; Zhang, Lin; Shi, Lingyan; Smith, Jason; Bailin, Jacob; Alfano, Robert R.
2018-02-01
Analyzing spectral or imaging data collected with various optical biopsy methods is often difficult due to the complexity of the underlying biology. Robust methods that can use the spectral or imaging data to detect the characteristic spectral or spatial signatures of different tissue types are challenging to develop but highly desired. In this study, we used various machine learning algorithms to analyze a spectral dataset acquired from normal and cancerous human skin tissue samples using resonance Raman spectroscopy with 532 nm excitation. Algorithms including principal component analysis, nonnegative matrix factorization, and an autoencoder artificial neural network are used to reduce the dimension of the dataset and detect features. A support vector machine with a linear kernel is used to classify the normal tissue and cancerous tissue samples. The efficacies of the methods are compared.
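A minimal sketch of the reduce-then-classify workflow described above, using scikit-learn; the stand-in data and parameter choices are assumptions, not the study's configuration.

```python
# Sketch: dimension reduction followed by a linear-kernel SVM, mirroring the
# workflow above on stand-in data (80 spectra x 1024 wavenumber bins).
import numpy as np
from sklearn.decomposition import NMF, PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = np.abs(rng.normal(size=(80, 1024)))   # nonnegative stand-in spectra
y = rng.integers(0, 2, size=80)           # 0 = normal, 1 = cancerous

for reducer in (PCA(n_components=10), NMF(n_components=10, max_iter=500)):
    clf = make_pipeline(reducer, SVC(kernel="linear"))
    scores = cross_val_score(clf, X, y, cv=5)
    print(type(reducer).__name__, scores.mean())
```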
NASA Technical Reports Server (NTRS)
Hewes, C. R.; Brodersen, R. W.; De Wit, M.; Buss, D. D.
1976-01-01
Charge-coupled devices (CCDs) are ideally suited for performing sampled-data transversal filtering operations in the analog domain. Two algorithms have been identified for performing spectral analysis in which the bulk of the computation can be performed in a CCD transversal filter; the chirp z-transform and the prime transform. CCD implementation of both these transform algorithms is presented together with performance data and applications.
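The same chirp z-transform the CCD filters implement in the analog domain can be sketched digitally; the SciPy call below requires SciPy >= 1.8, and the signal and band edges are illustrative.

```python
# Sketch: zooming into a narrow band with the chirp z-transform (CZT).
import numpy as np
from scipy.signal import czt

fs = 1000.0
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 123.4 * t)           # test tone

f1, f2, m = 100.0, 150.0, 256               # analysis band and bin count
w = np.exp(-2j * np.pi * (f2 - f1) / (m * fs))  # ratio between z-plane points
a = np.exp(2j * np.pi * f1 / fs)                # starting point on unit circle
X = czt(x, m=m, w=w, a=a)                       # spectrum on [f1, f2)
freqs = f1 + np.arange(m) * (f2 - f1) / m
print(freqs[np.abs(X).argmax()])                # ~123.4 Hz
```

The appeal for hardware, as the abstract notes, is that the bulk of this computation reduces to a transversal filter, which a CCD evaluates directly in the analog domain.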
Yu, Shuang; Liu, Guo-hai; Xia, Rong-sheng; Jiang, Hui
2016-01-01
In order to achieve rapid monitoring of the process state of solid state fermentation (SSF), this study attempted qualitative identification of the process state of SSF of feed protein using the Fourier transform near infrared (FT-NIR) spectroscopy analysis technique. More specifically, FT-NIR spectroscopy combined with an Adaboost-SRDA-NN integrated learning algorithm was used to accurately and rapidly monitor chemical and physical changes in SSF of feed protein without the need for chemical analysis. First, the raw spectra of all 140 fermentation samples were collected with a Fourier transform near infrared spectrometer (Antaris II) and preprocessed with the standard normal variate transformation (SNV) spectral preprocessing algorithm. Thereafter, the characteristic information of the preprocessed spectra was extracted by spectral regression discriminant analysis (SRDA). Finally, the nearest neighbors (NN) algorithm was selected as the basic classifier, and a state recognition model was built to identify the different fermentation samples in the validation set. Experimental results showed that the SRDA-NN model outperformed two other NN models, developed with feature information from principal component analysis (PCA) and linear discriminant analysis (LDA), and achieved a correct recognition rate of 94.28% in the validation set. To further improve the recognition accuracy of the final model, an Adaboost-SRDA-NN ensemble learning algorithm was proposed by integrating the Adaboost and SRDA-NN methods, and the presented algorithm was used to construct an online monitoring model of the process state of SSF of feed protein. Experimental results showed that the prediction performance of the SRDA-NN model was further enhanced by the Adaboost lifting algorithm: the Adaboost-SRDA-NN model achieved a correct recognition rate of 100% in the validation set. The overall results demonstrate that the SRDA algorithm can effectively extract spectral feature information for dimension reduction in the calibration of qualitative NIR spectroscopy models, and that the Adaboost lifting algorithm can improve the classification accuracy of the final model. The results obtained in this work provide a research foundation for developing online monitoring instruments for the SSF process.
Fast Constrained Spectral Clustering and Cluster Ensemble with Random Projection
Liu, Wenfen
2017-01-01
Constrained spectral clustering (CSC) can greatly improve clustering accuracy by incorporating constraint information into spectral clustering, and it has therefore received wide academic attention. In this paper, we propose a fast CSC algorithm that encodes landmark-based graph construction into a new CSC model and applies random sampling to decrease the data size after spectral embedding. Compared with the original model, the new algorithm yields similar results as its model size increases asymptotically; compared with the most efficient CSC algorithm known, the new algorithm runs faster and suits a wider range of data sets. Meanwhile, a scalable semisupervised cluster ensemble algorithm is also proposed by combining our fast CSC algorithm with random-projection dimensionality reduction in the spectral ensemble clustering process. We demonstrate through theoretical analysis and empirical results that the new cluster ensemble algorithm has advantages in terms of efficiency and effectiveness. Furthermore, the approximate preservation of clustering accuracy under random projection, proved for the consensus clustering stage, also holds for weighted k-means clustering and thus gives a theoretical guarantee for this special kind of k-means clustering in which each point has a corresponding weight. PMID:29312447
NASA Technical Reports Server (NTRS)
Clark, Roger N.; Swayze, Gregg A.
1995-01-01
One of the challenges of Imaging Spectroscopy is the identification, mapping and abundance determination of materials, whether mineral, vegetable, or liquid, given enough spectral range, spectral resolution, signal to noise, and spatial resolution. Many materials show diagnostic absorption features in the visual and near infrared region (0.4 to 2.5 micrometers) of the spectrum. This region is covered by the modern imaging spectrometers such as AVIRIS. The challenge is to identify the materials from absorption bands in their spectra, and determine what specific analyses must be done to derive particular parameters of interest, ranging from simply identifying its presence to deriving its abundance, or determining specific chemistry of the material. Recently, a new analysis algorithm was developed that uses a digital spectral library of known materials and a fast, modified-least-squares method of determining if a single spectral feature for a given material is present. Clark et al. made another advance in the mapping algorithm: simultaneously mapping multiple minerals using multiple spectral features. This was done by a modified-least-squares fit of spectral features, from data in a digital spectral library, to corresponding spectral features in the image data. This version has now been superseded by a more comprehensive spectral analysis system called Tricorder.
Zhang, Hong-guang; Lu, Jian-gang
2016-02-01
To overcome the problems of significant differences among samples and of nonlinearity between sample properties and spectra in spectral quantitative analysis, a local regression algorithm is proposed in this paper. In this algorithm, the net analyte signal (NAS) method is first used to obtain the net analyte signal of the calibration samples and the unknown samples; the Euclidean distance between the net analyte signal of each unknown sample and those of the calibration samples is then calculated and used as a similarity index. According to this similarity index, a local calibration set is selected individually for each unknown sample. Finally, a local PLS regression model is built on each local calibration set for each unknown sample. The proposed method was applied to a set of near infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to those of the global PLS regression method and of a conventional local regression algorithm based on spectral Euclidean distance.
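A sketch of the local-regression loop described above; plain spectral Euclidean distance stands in for the paper's NAS-based distance, and k and the component count are assumptions.

```python
# Sketch of local PLS regression: select the k nearest calibration spectra
# for each unknown sample and fit a per-sample PLS model.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def local_pls_predict(x_new, X_cal, y_cal, k=30, n_comp=5):
    d = np.linalg.norm(X_cal - x_new, axis=1)    # similarity index
    local = np.argsort(d)[:k]                    # local calibration set
    pls = PLSRegression(n_components=n_comp)
    pls.fit(X_cal[local], y_cal[local])          # per-sample local model
    return float(pls.predict(x_new[None, :]).ravel()[0])
```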
NASA Astrophysics Data System (ADS)
Dafu, Shen; Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang
2017-07-01
The purpose of this study is to improve reconstruction precision and to better reproduce the surface colors of spectral images. A new spectral reflectance reconstruction algorithm based on an iterative threshold combined with a weighted principal component space is presented in this paper; the principal components weighted by visual features serve as the sparse basis. Different numbers of color cards are selected as the training samples, a multispectral image is the testing sample, and the color differences of the reconstructions are compared. The channel response values are obtained with a Mega Vision high-accuracy, multi-channel imaging system. The results show that spectral reconstruction based on the weighted principal component space outperforms that based on the traditional principal component space. The color difference obtained with the compressive-sensing algorithm using weighted principal component analysis is therefore less than that obtained with the algorithm using traditional principal component analysis, and better reconstructed color consistency with human vision is achieved.
Exploratory Item Classification Via Spectral Graph Clustering
Chen, Yunxiao; Li, Xiaoou; Liu, Jingchen; Xu, Gongjun; Ying, Zhiliang
2017-01-01
Large-scale assessments are supported by a large item pool. An important task in test development is to assign items into scales that measure different characteristics of individuals, and a popular approach is cluster analysis of items. Classical methods in cluster analysis, such as the hierarchical clustering, K-means method, and latent-class analysis, often induce a high computational overhead and have difficulty handling missing data, especially in the presence of high-dimensional responses. In this article, the authors propose a spectral clustering algorithm for exploratory item cluster analysis. The method is computationally efficient, effective for data with missing or incomplete responses, easy to implement, and often outperforms traditional clustering algorithms in the context of high dimensionality. The spectral clustering algorithm is based on graph theory, a branch of mathematics that studies the properties of graphs. The algorithm first constructs a graph of items, characterizing the similarity structure among items. It then extracts item clusters based on the graphical structure, grouping similar items together. The proposed method is evaluated through simulations and an application to the revised Eysenck Personality Questionnaire. PMID:29033476
Fast Fourier Transform Spectral Analysis Program
NASA Technical Reports Server (NTRS)
Daniel, J. A., Jr.; Graves, M. L.; Hovey, N. M.
1969-01-01
Fast Fourier Transform Spectral Analysis Program is used in frequency spectrum analysis of postflight, space vehicle telemetered trajectory data. This computer program with a digital algorithm can calculate power spectrum rms amplitudes and cross spectrum of sampled parameters at even time increments.
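A modern equivalent of the program's core computation, sketched with NumPy; variable names are illustrative.

```python
# Sketch: FFT-based power spectrum and cross spectrum of two uniformly
# sampled telemetry channels.
import numpy as np

def power_and_cross_spectrum(x, y, dt):
    """x, y: equal-length samples at spacing dt (seconds)."""
    n = len(x)
    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    freqs = np.fft.rfftfreq(n, dt)
    psd = (np.abs(X) ** 2) * dt / n          # power spectral density of x
    cross = X * np.conj(Y) * dt / n          # cross spectrum of x and y
    return freqs, psd, cross

# The rms amplitude in a band follows from integrating the PSD over it.
```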
Implementation of spectral clustering on microarray data of carcinoma using k-means algorithm
NASA Astrophysics Data System (ADS)
Frisca, Bustamam, Alhadi; Siswantining, Titin
2017-03-01
Clustering is a data analysis method that aims to group data with similar characteristics. Spectral clustering is one of the most popular modern clustering algorithms. As an effective clustering technique, spectral clustering emerged from concepts in spectral graph theory. The spectral clustering method needs a partitioning algorithm; options include PAM, SOM, fuzzy c-means, and k-means. Based on research done by Capital and Choudhury in 2013, the k-means algorithm provides better accuracy than the PAM algorithm when the Euclidean distance is used, so in this paper we use k-means as our partitioning algorithm. The major advantage of spectral clustering is in reducing data dimension, which in this case means reducing the dimension of a large microarray dataset. A microarray is a small chip made of a glass plate containing thousands or even tens of thousands of kinds of genes in DNA fragments derived from cDNA amplification. Microarray data are widely used to detect cancers, for example carcinoma, in which cancer cells express abnormalities in their genes. The purpose of this research is to group data with high similarity in the same cluster and data with low similarity in different clusters. The carcinoma microarray data used in this research comprise 7457 genes. The result of partitioning with the k-means algorithm is two clusters.
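A minimal sketch of spectral clustering with a k-means partitioning step, as used above; the Gaussian affinity and its width are assumptions, and a dense eigensolver is used for clarity.

```python
# Sketch: spectral embedding of the affinity graph, then k-means on the
# leading Laplacian eigenvectors.
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_kmeans(X, n_clusters, sigma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma**2))             # Gaussian affinity matrix
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W               # unnormalized graph Laplacian
    _, V = eigh(L, subset_by_index=[0, n_clusters - 1])
    return KMeans(n_clusters, n_init=10).fit_predict(V)

# e.g. spectral_kmeans(expression_matrix, 2) for a two-cluster partition.
```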
NASA Astrophysics Data System (ADS)
Wu, Zhejun; Kudenov, Michael W.
2017-05-01
This paper presents a reconstruction algorithm for the Spatial-Spectral Multiplexing (SSM) optical system. The goal of this algorithm is to recover the three-dimensional spatial and spectral information of a scene, given that a one-dimensional spectrometer array is used to sample the pupil of the spatial-spectral modulator. The challenge of the reconstruction is that the non-parametric representation of the three-dimensional spatial and spectral object requires a large number of variables, leading to an underdetermined linear system that is hard to recover uniquely. We propose to reparameterize the spectrum using B-spline functions to reduce the number of unknown variables. Our reconstruction algorithm then solves the improved linear system via a least-squares optimization of the B-spline coefficients with additional spatial smoothness regularization. The ground truth object and the optical model for the measurement matrix are simulated with both spatial and spectral assumptions according to a realistic field of view. In order to test the robustness of the algorithm, we add Poisson noise to the measurement and test on both two-dimensional and three-dimensional spatial and spectral scenes. Our analysis shows that the root mean square error of the recovered results can be kept within 5.15%.
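The reparameterization idea can be sketched as a regularized least-squares solve; the basis matrix B (wavelengths x spline coefficients), the penalty operator, and the weight lam are assumptions, not the paper's exact formulation.

```python
# Sketch: solve min ||A (B c) - b||^2 + lam ||D c||^2 for B-spline
# coefficients c, where A is a (simulated) measurement matrix, B maps spline
# coefficients to spectra, and D is a second-difference smoothness penalty.
import numpy as np

def solve_bspline_ls(A, B, b, lam=1e-2):
    AB = A @ B                                    # system in spline coeffs
    D = np.diff(np.eye(B.shape[1]), n=2, axis=0)  # smoothness regularizer
    lhs = np.vstack([AB, np.sqrt(lam) * D])       # stacked normal system
    rhs = np.concatenate([b, np.zeros(D.shape[0])])
    c, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)
    return B @ c                                  # reconstructed spectrum
```

Because B has far fewer columns than there are spectral samples, the stacked system is much better conditioned than the raw underdetermined one, which is the point of the reparameterization.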
A wavelet and least square filter based spatial-spectral denoising approach of hyperspectral imagery
NASA Astrophysics Data System (ADS)
Li, Ting; Chen, Xiao-Mei; Chen, Gang; Xue, Bo; Ni, Guo-Qiang
2009-11-01
Noise reduction is a crucial step in hyperspectral imagery pre-processing. Owing to sensor characteristics, the noise of hyperspectral imagery appears in both the spatial and spectral domains. However, most prevailing denoising techniques process the imagery in only one specific domain and thus do not exploit the multi-domain nature of hyperspectral imagery. In this paper, a new spatial-spectral noise reduction algorithm is proposed, based on wavelet analysis and least squares filtering techniques. First, in the spatial domain, a new stationary wavelet shrinking algorithm with an improved threshold function is used to adjust the noise level band by band. This algorithm uses BayesShrink for threshold estimation and amends the traditional soft-threshold function by adding shape tuning parameters. Compared with the soft or hard threshold function, the improved one, which is first-order differentiable and has a smooth transitional region between noise and signal, preserves more image-edge detail and weakens pseudo-Gibbs oscillations. Then, in the spectral domain, a cubic Savitzky-Golay filter based on the least squares method is used to remove spectral noise and any artificial noise that may have been introduced during the spatial denoising. With the filter window width selected appropriately according to prior knowledge, this algorithm smooths the spectral curve effectively. The performance of the new algorithm is evaluated on a set of Hyperion images acquired in 2007. The results show that the new spatial-spectral denoising algorithm provides greater signal-to-noise-ratio improvement than traditional spatial-only or spectral-only methods, while better preserving local spectral absorption features.
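A condensed sketch of the two-domain chain using PyWavelets and SciPy; a decimated transform with a plain universal soft threshold stands in for the paper's stationary transform and improved threshold function, and the window width is an assumption.

```python
# Sketch: per-band 2-D wavelet shrinkage, then a cubic Savitzky-Golay
# filter along the spectral axis of the cube.
import numpy as np
import pywt
from scipy.signal import savgol_filter

def denoise_cube(cube, wavelet="db4", window=9):
    """cube: (rows, cols, bands) float array."""
    out = np.empty_like(cube, dtype=float)
    for b in range(cube.shape[2]):              # spatial wavelet shrinkage
        coeffs = pywt.wavedec2(cube[..., b], wavelet, level=2)
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # noise estimate
        thr = sigma * np.sqrt(2 * np.log(cube[..., b].size)) # universal thr.
        den = [coeffs[0]] + [
            tuple(pywt.threshold(c, thr, "soft") for c in lvl)
            for lvl in coeffs[1:]
        ]
        rec = pywt.waverec2(den, wavelet)
        out[..., b] = rec[: cube.shape[0], : cube.shape[1]]
    # spectral smoothing with a cubic Savitzky-Golay filter
    return savgol_filter(out, window, polyorder=3, axis=2)
```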
Clark, Roger N.; Swayze, Gregg A.; Livo, K. Eric; Kokaly, Raymond F.; Sutley, Steve J.; Dalton, J. Brad; McDougal, Robert R.; Gent, Carol A.
2003-01-01
Imaging spectroscopy is a tool that can be used to spectrally identify and spatially map materials based on their specific chemical bonds. Spectroscopic analysis requires significantly more sophistication than has been employed in conventional broadband remote sensing analysis. We describe a new system that is effective at material identification and mapping: a set of algorithms within an expert system decision‐making framework that we call Tetracorder. The expertise in the system has been derived from scientific knowledge of spectral identification. The expert system rules are implemented in a decision tree where multiple algorithms are applied to spectral analysis, additional expert rules and algorithms can be applied based on initial results, and more decisions are made until spectral analysis is complete. Because certain spectral features are indicative of specific chemical bonds in materials, the system can accurately identify and map those materials. In this paper we describe the framework of the decision making process used for spectral identification, describe specific spectral feature analysis algorithms, and give examples of what analyses and types of maps are possible with imaging spectroscopy data. We also present the expert system rules that describe which diagnostic spectral features are used in the decision making process for a set of spectra of minerals and other common materials. We demonstrate the applications of Tetracorder to identify and map surface minerals, to detect sources of acid rock drainage, and to map vegetation species, ice, melting snow, water, and water pollution, all with one set of expert system rules. Mineral mapping can aid in geologic mapping and fault detection and can provide a better understanding of weathering, mineralization, hydrothermal alteration, and other geologic processes. Environmental site assessment, such as mapping source areas of acid mine drainage, has resulted in the acceleration of site cleanup, saving millions of dollars and years in cleanup time. Imaging spectroscopy data and Tetracorder analysis can be used to study both terrestrial and planetary science problems. Imaging spectroscopy can be used to probe planetary systems, including their atmospheres, oceans, and land surfaces.
Spectral analysis using the CCD Chirp Z-transform
NASA Technical Reports Server (NTRS)
Eversole, W. L.; Mayer, D. J.; Bosshart, P. W.; Dewit, M.; Howes, C. R.; Buss, D. D.
1978-01-01
Charge-coupled device (CCD) chirp Z-transform (CZT) spectral analysis techniques are reviewed, and results on state-of-the-art CCD CZT technology are presented. The CZT algorithm is examined, and the advantages of CCD implementation are discussed. The sliding CZT, which is useful in many spectral analysis applications, is described, and the performance limitations of the CZT are studied.
Fusion of spectral models for dynamic modeling of sEMG and skeletal muscle force.
Potluri, Chandrasekhar; Anugolu, Madhavi; Chiu, Steve; Urfer, Alex; Schoen, Marco P; Naidu, D Subbaram
2012-01-01
In this paper, we present a method of combining spectral models using a Kullback Information Criterion (KIC) data fusion algorithm. Surface electromyographic (sEMG) signals and their corresponding skeletal muscle force signals are acquired from three sensors and pre-processed with a half-Gaussian filter and a Chebyshev Type-II filter, respectively. Spectral models - spectral analysis (SPA), empirical transfer function estimate (ETFE), and spectral analysis with frequency-dependent resolution (SPFRD) - are extracted with the sEMG signals as input and the skeletal muscle force as output. These signals are then employed in a system identification (SI) routine to establish the dynamic models relating input and output. After the individual models are extracted, they are fused by a probability-based KIC fusion algorithm. The results show that the SPFRD spectral models outperform the SPA and ETFE models in modeling the frequency content of the sEMG/skeletal muscle force data.
[A spatial adaptive algorithm for endmember extraction on multispectral remote sensing image].
Zhu, Chang-Ming; Luo, Jian-Cheng; Shen, Zhan-Feng; Li, Jun-Li; Hu, Xiao-Dong
2011-10-01
Because the convex cone analysis (CCA) method can extract only a limited number of endmembers from multispectral imagery, this paper proposes a new endmember extraction method based on spatially adaptive spectral feature analysis of multispectral remote sensing images, using spatial clustering and image slicing. First, in order to remove spatial and spectral redundancies, the principal component analysis (PCA) algorithm is used to lower the dimensionality of the multispectral data. Second, the iterative self-organizing data analysis technique algorithm (ISODATA) is used for image clustering based on pixel spectral similarity. Then, through cluster post-processing and the merging of small clusters, the whole image is divided into several blocks (tiles). Finally, according to the landscape complexity of the image blocks and analysis of the scatter diagrams, the number of endmembers can be determined, and the hourglass algorithm is used to extract the endmembers. An endmember extraction experiment on TM multispectral imagery showed that the method can effectively extract endmember spectra from multispectral imagery. Moreover, the method resolves the limitation on the number of endmembers and improves the accuracy of endmember extraction, providing a new way to extract endmembers from multispectral images.
Overlapping communities detection based on spectral analysis of line graphs
NASA Astrophysics Data System (ADS)
Gui, Chun; Zhang, Ruisheng; Hu, Rongjing; Huang, Guoming; Wei, Jiaxuan
2018-05-01
Communities in networks often overlap, with one vertex belonging to several clusters. Meanwhile, many networks show hierarchical structure, such that communities are recursively grouped into a hierarchical organization. In order to obtain overlapping communities from a global hierarchy of vertices, a new algorithm (named SAoLG) is proposed to build the hierarchical organization while detecting the overlap of community structure. SAoLG applies spectral analysis to line graphs to unify the overlap and hierarchical structure of the communities. To avoid the limitations of absolute distances such as the Euclidean distance, SAoLG employs the angular distance to compute the similarity between vertices. Furthermore, we make a small improvement to the partition density for evaluating the quality of community structure and use it to obtain more reasonable and sensible community numbers. The proposed SAoLG algorithm achieves a balance between overlap and hierarchy by applying spectral analysis to edge community detection. Experimental results on one standard network and six real-world networks show that SAoLG achieves higher modularity and more reasonable community numbers than Ahn's algorithm and the classical CPM and GN algorithms.
Impact of JPEG2000 compression on spatial-spectral endmember extraction from hyperspectral data
NASA Astrophysics Data System (ADS)
Martín, Gabriel; Ruiz, V. G.; Plaza, Antonio; Ortiz, Juan P.; García, Inmaculada
2009-08-01
Hyperspectral image compression has received considerable interest in recent years. However, an important issue that has not been investigated in the past is the impact of lossy compression on spectral mixture analysis applications, which characterize mixed pixels in terms of a suitable combination of spectrally pure substances (called endmembers) weighted by their estimated fractional abundances. In this paper, we specifically investigate the impact of JPEG2000 compression of hyperspectral images on the quality of the endmembers extracted by algorithms that incorporate both spectral and spatial information (useful for incorporating contextual information in the spectral endmember search). The two considered algorithms are the automatic morphological endmember extraction (AMEE) and the spatial spectral endmember extraction (SSEE) techniques. Experimental results are conducted using a well-known data set collected by AVIRIS over the Cuprite mining district in Nevada, with detailed ground-truth information available from the U.S. Geological Survey. Our experiments reveal some interesting findings that may be useful to specialists applying spatial-spectral endmember extraction algorithms to compressed hyperspectral imagery.
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions
NASA Astrophysics Data System (ADS)
Novosad, Philip; Reader, Andrew J.
2016-06-01
Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [11C]SCH23390 data, showing promising results.
Comparison of three methods for materials identification and mapping with imaging spectroscopy
NASA Technical Reports Server (NTRS)
Clark, Roger N.; Swayze, Gregg; Boardman, Joe; Kruse, Fred
1993-01-01
We are comparing three mapping analysis tools for imaging spectroscopy data. The purpose of this comparison is to understand the advantages and disadvantages of each algorithm so that others will be better able to choose the best algorithm, or combination of algorithms, for a particular problem. The three algorithms are: (1) the spectral-feature modified-least-squares mapping algorithm of Clark et al. (1990, 1991): programs mbandmap and tricorder; (2) the Spectral Angle Mapper algorithm (Boardman, 1993) found in the CU CSES SIPS package; and (3) the expert system of Kruse et al. (1993). The comparison uses a ground-calibrated 1990 AVIRIS scene of 400 by 410 pixels over Cuprite, Nevada, along with a spectral library of 38 minerals. Each algorithm is tested with the same AVIRIS data set and spectral library. Field work has confirmed the presence of many of these minerals in the AVIRIS scene (Swayze et al. 1992).
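Of the three, the Spectral Angle Mapper is the simplest to state; a minimal version follows (an illustration, not the SIPS implementation).

```python
# Sketch of the Spectral Angle Mapper (SAM): classify each pixel by the
# smallest angle between its spectrum and a library spectrum.
import numpy as np

def sam_classify(pixels, library):
    """pixels: (n, bands); library: (k, bands); returns best match + angle."""
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    lib = library / np.linalg.norm(library, axis=1, keepdims=True)
    angles = np.arccos(np.clip(p @ lib.T, -1.0, 1.0))   # (n, k) angles
    return angles.argmin(axis=1), angles.min(axis=1)
```

Because the angle ignores vector length, SAM is insensitive to overall illumination scaling, a property that distinguishes it from the least-squares feature fitting of the first algorithm.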
A wavelet transform algorithm for peak detection and application to powder x-ray diffraction data.
Gregoire, John M; Dale, Darren; van Dover, R Bruce
2011-01-01
Peak detection is ubiquitous in the analysis of spectral data. While many noise-filtering algorithms and peak identification algorithms have been developed, recent work [P. Du, W. Kibbe, and S. Lin, Bioinformatics 22, 2059 (2006); A. Wee, D. Grayden, Y. Zhu, K. Petkovic-Duran, and D. Smith, Electrophoresis 29, 4215 (2008)] has demonstrated that both of these tasks are efficiently performed through analysis of the wavelet transform of the data. In this paper, we present a wavelet-based peak detection algorithm with user-defined parameters that can be readily applied to any spectral data. Particular attention is given to the algorithm's resolution of overlapping peaks. The algorithm is implemented for the analysis of powder diffraction data, and successful detection of Bragg peaks is demonstrated for both low signal-to-noise data from theta-theta diffraction of nanoparticles and combinatorial x-ray diffraction data from a composition-spread thin film. These datasets have different types of background signals, which are effectively removed in the wavelet-based method, and the results demonstrate that the algorithm provides a robust method for automated peak detection.
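SciPy ships a continuous-wavelet peak finder in the same spirit; the synthetic pattern, width range, and noise below are illustrative assumptions, not the paper's algorithm or data.

```python
# Sketch: CWT-based peak detection on a synthetic diffraction-like pattern
# with two overlapping peaks and a sloping background.
import numpy as np
from scipy.signal import find_peaks_cwt

x = np.linspace(0, 90, 2000)                       # two-theta axis (deg)
rng = np.random.default_rng(2)
pattern = (np.exp(-(x - 30.0) ** 2 / 0.05)
           + 0.6 * np.exp(-(x - 30.6) ** 2 / 0.05)  # overlapping neighbor
           + 0.02 * x                               # slowly varying background
           + rng.normal(0, 0.01, x.size))

peaks = find_peaks_cwt(pattern, widths=np.arange(2, 20))
print(x[peaks])                                     # candidate peak positions
```

The `widths` array plays the role of the user-defined scale parameters described above; the wavelet transform's insensitivity to slowly varying components is what removes the background without an explicit fitting step.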
An Analysis of Light Periods of BL Lac Object S5 0716+714 with the MUSIC Algorithm
NASA Astrophysics Data System (ADS)
Tang, Jie
2012-07-01
The multiple signal classification (MUSIC) algorithm is introduced for the estimation of the light periods of BL Lac objects. The principle of the MUSIC algorithm is given, together with a test of its spectral resolution using a simulated signal. From an extensive literature search, we have collected a large amount of effective observational data on the BL Lac object S5 0716+714 in the three optical wavebands V, R, and I from 1994 to 2008. The light periods of S5 0716+714 are obtained by means of the MUSIC algorithm and the average periodogram algorithm, respectively. Two major periodic components are found: one with a period of (3.33±0.08) yr and another with a period of (1.24±0.01) yr. A comparison of the periodicity-analysis performance of the two algorithms indicates that the MUSIC algorithm requires a smaller sample length and has good spectral resolution and anti-noise ability, improving the accuracy of periodicity analysis when the sample length is short.
NASA Astrophysics Data System (ADS)
Polak, Mark L.; Hall, Jeffrey L.; Herr, Kenneth C.
1995-08-01
We present a ratioing algorithm for quantitative analysis of the passive Fourier-transform infrared spectrum of a chemical plume. We show that the transmission of a near-field plume is given by tau_plume = (L_obsd - L_bb-plume)/(L_bkgd - L_bb-plume), where tau_plume is the frequency-dependent transmission of the plume, L_obsd is the spectral radiance of the scene that contains the plume, L_bkgd is the spectral radiance of the same scene without the plume, and L_bb-plume is the spectral radiance of a blackbody at the plume temperature. The algorithm simultaneously achieves background removal, elimination of the spectrometer internal signature, and quantification of the plume spectral transmission. It has applications to both real-time processing for plume visualization and quantitative measurements of plume column densities. The plume temperature (which sets L_bb-plume), and which is not always precisely known, can have a profound effect on the quantitative interpretation of the algorithm and is discussed in detail. Finally, we provide an illustrative example of the use of the algorithm on a trichloroethylene and acetone plume.
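The ratioing step itself is a one-liner; the sketch below also notes how a column density would follow, with the absorption cross-section assumed known from a reference library.

```python
# Sketch of the ratioing algorithm for plume transmission; inputs are
# spectral radiance arrays on a common frequency grid.
import numpy as np

def plume_transmission(L_obsd, L_bkgd, L_bb_plume):
    """tau_plume = (L_obsd - L_bb_plume) / (L_bkgd - L_bb_plume)."""
    return (L_obsd - L_bb_plume) / (L_bkgd - L_bb_plume)

# With an absorption cross-section sigma(nu) assumed known, Beer's law
# gives the column density:  tau = exp(-sigma * N)  =>  N = -ln(tau)/sigma.
```

Note the sensitivity to L_bb_plume: an error in the assumed plume temperature shifts both the numerator and denominator, which is why the abstract flags temperature as critical to quantitative use.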
[Estimation of Hunan forest carbon density based on spectral mixture analysis of MODIS data].
Yan, En-ping; Lin, Hui; Wang, Guang-xing; Chen, Zhen-xiong
2015-11-01
With the fast development of remote sensing technology, combining forest inventory sample plot data with remotely sensed images has become a widely used method to map forest carbon density. However, the existence of mixed pixels often impedes improvement in forest carbon density mapping, especially when low-spatial-resolution images such as MODIS are used. In this study, MODIS images and national forest inventory sample plot data were used to estimate forest carbon density. Linear spectral mixture analysis with and without constraints, and nonlinear spectral mixture analysis, were compared for deriving the fractions of different land use and land cover (LULC) types. A sequential Gaussian co-simulation algorithm, with and without the fraction images from the spectral mixture analyses, was then employed to estimate the forest carbon density of Hunan Province. Results showed that 1) constrained linear spectral mixture analysis, with a mean RMSE of 0.002, estimated the fractions of LULC types more accurately than unconstrained linear and nonlinear spectral mixture analyses; 2) integrating the spectral mixture analysis model with the sequential Gaussian co-simulation algorithm increased the estimation accuracy of forest carbon density from 74.1% to 81.5% and decreased the RMSE from 7.26 to 5.18; and 3) the mean forest carbon density for the province was 30.06 t · hm(-2), ranging from 0.00 to 67.35 t · hm(-2). This implies that spectral mixture analysis provides great potential for increasing the estimation accuracy of forest carbon density at regional and global levels.
NASA Astrophysics Data System (ADS)
Ren, Ruizhi; Gu, Lingjia; Fu, Haoyang; Sun, Chenglin
2017-04-01
An effective super-resolution (SR) algorithm is proposed for actual spectral remote sensing images based on sparse representation and wavelet preprocessing. The proposed SR algorithm mainly consists of dictionary training and image reconstruction. Wavelet preprocessing is used to establish four subbands, i.e., low frequency, horizontal, vertical, and diagonal high frequency, for an input image. As compared to the traditional approaches involving the direct training of image patches, the proposed approach focuses on the training of features derived from these four subbands. The proposed algorithm is verified using different spectral remote sensing images, e.g., moderate-resolution imaging spectroradiometer (MODIS) images with different bands, and the latest Chinese Jilin-1 satellite images with high spatial resolution. According to the visual experimental results obtained from the MODIS remote sensing data, the SR images using the proposed SR algorithm are superior to those using a conventional bicubic interpolation algorithm or traditional SR algorithms without preprocessing. Fusion algorithms, e.g., standard intensity-hue-saturation, principal component analysis, wavelet transform, and the proposed SR algorithms are utilized to merge the multispectral and panchromatic images acquired by the Jilin-1 satellite. The effectiveness of the proposed SR algorithm is assessed by parameters such as peak signal-to-noise ratio, structural similarity index, correlation coefficient, root-mean-square error, relative dimensionless global error in synthesis, relative average spectral error, spectral angle mapper, and the quality index Q4, and its performance is better than that of the standard image fusion algorithms.
Robust Multipoint Water-Fat Separation Using Fat Likelihood Analysis
Yu, Huanzhou; Reeder, Scott B.; Shimakawa, Ann; McKenzie, Charles A.; Brittain, Jean H.
2016-01-01
Fat suppression is an essential part of routine MRI scanning. Multiecho chemical-shift based water-fat separation methods estimate and correct for B0 field inhomogeneity. However, they must contend with the intrinsic challenge of water-fat ambiguity that can result in water-fat swapping. This problem arises because the signals from two chemical species, when both are modeled as a single discrete spectral peak, may appear indistinguishable in the presence of B0 off-resonance. In conventional methods, the water-fat ambiguity is typically removed by enforcing field map smoothness using region growing based algorithms. In reality, the fat spectrum has multiple spectral peaks. Using this spectral complexity, we introduce a novel concept that identifies water and fat for multiecho acquisitions by exploiting the spectral differences between water and fat. A fat likelihood map is produced to indicate if a pixel is likely to be water-dominant or fat-dominant by comparing the fitting residuals of two different signal models. The fat likelihood analysis and field map smoothness provide complementary information, and we designed an algorithm (Fat Likelihood Analysis for Multiecho Signals) to exploit both mechanisms. It is demonstrated in a wide variety of data that the Fat Likelihood Analysis for Multiecho Signals algorithm offers highly robust water-fat separation for 6-echo acquisitions, particularly in some previously challenging applications. PMID:21842498
NASA Technical Reports Server (NTRS)
Swayze, Gregg A.; Clark, Roger N.
1995-01-01
The rapid development of sophisticated imaging spectrometers and resulting flood of imaging spectrometry data has prompted a rapid parallel development of spectral-information extraction technology. Even though these extraction techniques have evolved along different lines (band-shape fitting, endmember unmixing, near-infrared analysis, neural-network fitting, and expert systems to name a few), all are limited by the spectrometer's signal to noise (S/N) and spectral resolution in producing useful information. This study grew from a need to quantitatively determine what effects these parameters have on our ability to differentiate between mineral absorption features using a band-shape fitting algorithm. We chose to evaluate the AVIRIS, HYDICE, MIVIS, GERIS, VIMS, NIMS, and ASTER instruments because they collect data over wide S/N and spectral-resolution ranges. The study evaluates the performance of the Tricorder algorithm, in differentiating between mineral spectra in the 0.4-2.5 micrometer spectral region. The strength of the Tricorder algorithm is in its ability to produce an easily understood comparison of band shape that can concentrate on small relevant portions of the spectra, giving it an advantage over most unmixing schemes, and in that it need not spend large amounts of time reoptimizing each time a new mineral component is added to its reference library, as is the case with neural-network schemes. We believe the flexibility of the Tricorder algorithm is unparalleled among spectral-extraction techniques and that the results from this study, although dealing with minerals, will have direct applications to spectral identification in other disciplines.
An analysis of spectral envelope-reduction via quadratic assignment problems
NASA Technical Reports Server (NTRS)
George, Alan; Pothen, Alex
1994-01-01
A new spectral algorithm for reordering a sparse symmetric matrix to reduce its envelope size is described. The ordering is computed by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. In this paper, we provide an analysis of the spectral envelope reduction algorithm. We describe the related 1- and 2-sum problems; the former is related to the envelope size, while the latter is related to an upper bound on the work involved in an envelope Cholesky factorization scheme. We formulate these two problems as quadratic assignment problems and then study the 2-sum problem in more detail. We obtain lower bounds on the 2-sum by considering a projected quadratic assignment problem, and then show that finding a permutation matrix closest to an orthogonal matrix attaining one of the lower bounds justifies the spectral envelope reduction algorithm. The lower bound on the 2-sum is seen to be tight for reasonably 'uniform' finite element meshes. We also obtain asymptotically tight lower bounds on the envelope size for certain classes of meshes.
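A dense-matrix sketch of the ordering being analyzed: sort vertices by the components of the Laplacian eigenvector for the smallest nonzero eigenvalue (the Fiedler vector). It assumes a connected graph; sparse eigensolvers would be used in practice.

```python
# Sketch of the spectral envelope-reducing ordering: permute rows/columns
# by sorting the Fiedler vector of the graph Laplacian.
import numpy as np
from scipy.linalg import eigh

def spectral_ordering(A):
    """A: symmetric 0/1 adjacency matrix of the sparse matrix's graph."""
    L = np.diag(A.sum(axis=1)) - A                # graph Laplacian
    _, V = eigh(L, subset_by_index=[1, 1])        # second-smallest eigenpair
    return np.argsort(V[:, 0])                    # envelope-reducing order
```

Sorting by the Fiedler vector places strongly connected vertices near each other in the ordering, which is exactly the property the paper's 2-sum bounds formalize.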
Power Spectral Density Error Analysis of Spectral Subtraction Type of Speech Enhancement Methods
NASA Astrophysics Data System (ADS)
Händel, Peter
2006-12-01
A theoretical framework for analysis of speech enhancement algorithms is introduced for performance assessment of spectral subtraction type of methods. The quality of the enhanced speech is related to physical quantities of the speech and noise (such as stationarity time and spectral flatness), as well as to design variables of the noise suppressor. The derived theoretical results are compared with the outcome of subjective listening tests as well as successful design strategies, performed by independent research groups.
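For concreteness, a bare-bones magnitude spectral-subtraction suppressor of the type the framework analyzes; the oversubtraction factor and spectral floor are the kind of design variables the analysis relates to speech quality, and their values here are arbitrary.

```python
# Sketch: single-frame magnitude spectral subtraction with oversubtraction
# (alpha) and a spectral floor (beta); overlap-add across frames is omitted.
import numpy as np

def spectral_subtract(frame, noise_mag, alpha=2.0, beta=0.01):
    """frame: time-domain samples; noise_mag: estimated |noise spectrum|."""
    spec = np.fft.rfft(frame * np.hanning(len(frame)))
    mag = np.abs(spec)
    clean_mag = np.maximum(mag - alpha * noise_mag, beta * mag)  # floor
    return np.fft.irfft(clean_mag * np.exp(1j * np.angle(spec)),
                        n=len(frame))
```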
DOE Office of Scientific and Technical Information (OSTI.GOV)
I. W. Ginsberg
Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (a) the computational expense of the new method is compared with the computational cost of the current method, and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.
NASA Technical Reports Server (NTRS)
Lustman, L.
1984-01-01
An outline of spectral methods for partial differential equations is presented. The basic spectral algorithm is defined, collocation is emphasized, and the main advantage of the method, the infinite order of accuracy in problems with smooth solutions, is discussed. Examples of theoretical numerical analysis of spectral calculations are presented, as is an application of spectral methods to transonic flow. The full potential transonic equation is among the best understood of nonlinear equations.
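A minimal Chebyshev collocation sketch (following the standard Trefethen construction, not the report's own code) showing the "infinite order of accuracy" on a smooth two-point boundary value problem:

```python
import numpy as np

def cheb(N):
    """Chebyshev collocation points and differentiation matrix (Trefethen)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    return D - np.diag(D.sum(axis=1)), x

# Solve u'' = exp(x) on [-1, 1] with u(+-1) = 0 by collocation.
N = 16
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]                  # second derivative, Dirichlet BCs
u = np.linalg.solve(D2, np.exp(x[1:-1]))
u_exact = np.exp(x[1:-1]) - x[1:-1] * np.sinh(1.0) - np.cosh(1.0)
print(np.max(np.abs(u - u_exact)))        # near machine precision at N = 16
```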
Leavesley, Silas J; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter; Rich, Thomas C
2018-01-01
Spectral imaging technologies have been used for many years by the remote sensing community. More recently, these approaches have been applied to biomedical problems, where they have shown great promise. However, biomedical spectral imaging has been complicated by the high variance of biological data and the reduced ability to construct test scenarios with fixed ground truths. Hence, it has been difficult to objectively assess and compare biomedical spectral imaging assays and technologies. Here, we present a standardized methodology that allows assessment of the performance of biomedical spectral imaging equipment, assays, and analysis algorithms. This methodology incorporates real experimental data and a theoretical sensitivity analysis, preserving the variability present in biomedical image data. We demonstrate that this approach can be applied in several ways: to compare the effectiveness of spectral analysis algorithms, to compare the response of different imaging platforms, and to assess the level of target signature required to achieve a desired performance. Results indicate that it is possible to compare even very different hardware platforms using this methodology. Future applications could include a range of optimization tasks, such as maximizing detection sensitivity or acquisition speed, providing high utility for investigators ranging from design engineers to biomedical scientists. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Chang, Bingguo; Chen, Xiaofei
2018-05-01
Ultrasonography is an important examination for the diagnosis of chronic liver disease. The doctor reports liver indicators and assesses the patient's condition according to the description in the ultrasound report. With the rapid increase in the volume of ultrasound reports, the workload for a professional physician to manually review ultrasound results increases significantly. In this paper, we use the spectral clustering method to cluster the descriptions in ultrasound reports and automatically generate the ultrasonic diagnosis by machine learning. 110 ultrasound examination reports of chronic liver disease were selected as test samples in this experiment, and the results obtained with spectral clustering were validated and compared with the k-means clustering algorithm. The results show that the accuracy of spectral clustering is 92.73%, higher than that of the k-means clustering algorithm, providing powerful ultrasound-assisted diagnosis for patients with chronic liver disease.
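A minimal sketch of the comparison described above, assuming scikit-learn; the TF-IDF featurization, the toy report descriptions, and the cluster count are hypothetical, since the abstract does not specify the text representation:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans, SpectralClustering

reports = [  # hypothetical report descriptions
    "liver surface smooth, homogeneous echotexture",
    "liver surface smooth, normal echogenicity",
    "liver surface nodular, coarse echotexture",
    "liver enlarged, increased echogenicity, nodular surface",
]
X = TfidfVectorizer().fit_transform(reports).toarray()

spectral = SpectralClustering(n_clusters=2, affinity="rbf", random_state=0)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
print(spectral.fit_predict(X))  # labels to compare against confirmed diagnoses
print(kmeans.fit_predict(X))
```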
A framework for evaluating mixture analysis algorithms
NASA Astrophysics Data System (ADS)
Dasaratha, Sridhar; Vignesh, T. S.; Shanmukh, Sarat; Yarra, Malathi; Botonjic-Sehic, Edita; Grassi, James; Boudries, Hacene; Freeman, Ivan; Lee, Young K.; Sutherland, Scott
2010-04-01
In recent years, several sensing devices capable of identifying unknown chemical and biological substances have been commercialized. The success of these devices in analyzing real-world samples depends on the ability of the on-board identification algorithm to de-convolve spectra of substances that are mixtures. To develop effective de-convolution algorithms, it is critical to characterize the relationship between the spectral features of a substance and its probability of detection within a mixture, as these features may be similar to or overlap with those of other substances in the mixture and in the library. While it has been recognized that these aspects pose challenges to mixture analysis, a systematic effort to quantify spectral characteristics and their impact is generally lacking. In this paper, we propose metrics that can be used to quantify these spectral features. Some of these metrics, such as a modification of the variance inflation factor, are derived from classical statistical measures used in regression diagnostics. We demonstrate that these metrics can be correlated with the accuracy of the substance's identification in a mixture. We also develop a framework for characterizing mixture analysis algorithms using these metrics. Experimental results are then provided to show the application of this framework to the evaluation of various algorithms, including one that has been developed for a commercial device. The illustration is based on synthetic mixtures created from pure-component Raman spectra measured on a portable device.
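One of the proposed metrics modifies the variance inflation factor (VIF). A sketch of the classical VIF computed for one library spectrum against the rest, assuming pure-component spectra stacked as columns of a matrix; the authors' modification is not reproduced here. Intuitively, a high-VIF component is nearly collinear with the rest of the library and should be harder to identify reliably in a mixture, which is the correlation the paper demonstrates.

```python
import numpy as np

def vif(library, j):
    """Classical VIF of library spectrum j: 1 / (1 - R^2) from regressing
    spectrum j on all other library spectra (columns of `library`)."""
    y = library[:, j]
    X = np.delete(library, j, axis=1)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    r2 = 1.0 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return 1.0 / (1.0 - r2)   # large VIF -> nearly collinear with the library
```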
Spectral element multigrid. Part 2: Theoretical justification
NASA Technical Reports Server (NTRS)
Maday, Yvon; Munoz, Rafael
1988-01-01
A multigrid algorithm is analyzed which is used for iteratively solving the algebraic system resulting from the approximation of a second-order problem by spectral or spectral element methods. The analysis, performed here in the one-dimensional case, justifies the good smoothing properties of the Jacobi preconditioner that was presented in Part 1 of this paper.
Kumar, Keshav
2017-11-01
Multivariate curve resolution-alternating least squares (MCR-ALS) analysis is the most commonly used curve resolution technique. The MCR-ALS model is fitted using the alternating least squares (ALS) algorithm, which needs initialisation of either the contribution profiles or the spectral profiles of each factor. The contribution profiles can be initialised using evolving factor analysis; however, in principle, this approach requires that the data arise from a sequential process. The initialisation of the spectral profiles is usually carried out using a pure-variable approach such as the SIMPLISMA algorithm, which demands that each factor have pure variables in the data set. Despite these limitations, the existing approaches have been quite successful for initiating MCR-ALS analysis. The present work proposes an alternative approach for the initialisation of the spectral profiles: generating random variables within the limits spanned by the maxima and minima of each spectral variable of the data set. The proposed approach does not require pure variables for each component of the multicomponent system, nor that the concentration direction follow a sequential process. The approach is successfully validated using excitation-emission matrix fluorescence data sets acquired for certain fluorophores with significant spectral overlap. The calculated contribution and spectral profiles of these fluorophores are found to correlate well with the experimental results. In summary, the present work proposes an alternative way to initiate MCR-ALS analysis.
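A minimal sketch of the proposed initialisation, assuming a data matrix D of shape (samples, wavelengths); the number of factors and the seed are analyst choices:

```python
import numpy as np

def random_spectral_init(D, n_factors, seed=0):
    """Draw initial spectral profiles uniformly between the per-wavelength
    minima and maxima of the data matrix D (samples x wavelengths)."""
    rng = np.random.default_rng(seed)
    lo, hi = D.min(axis=0), D.max(axis=0)
    return rng.uniform(lo, hi, size=(n_factors, D.shape[1]))

# S0 = random_spectral_init(D, n_factors=3) would then seed the usual ALS
# loop, which alternates solving for contributions C given S and for S given C.
```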
Radiation anomaly detection algorithms for field-acquired gamma energy spectra
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Sanjoy; Maurer, Richard; Wolff, Ron; Guss, Paul; Mitchell, Stephen
2015-08-01
The Remote Sensing Laboratory (RSL) is developing a tactical, networked radiation detection system that will be agile, reconfigurable, and capable of rapid threat assessment with a high degree of fidelity and certainty. Our design is driven by the needs of users such as law enforcement personnel who must make decisions by evaluating threat signatures in urban settings. The most efficient tool available to identify the nature of the threat object is real-time gamma spectroscopic analysis, as it is fast and has a very low probability of producing false positive alarm conditions. Urban radiological searches are inherently challenged by the rapid and large spatial variation of background gamma radiation, the presence of benign naturally occurring radioactive materials (NORM), and shielded and/or masked threat sources. Multiple spectral anomaly detection algorithms have been developed by national laboratories and commercial vendors. For example, the Gamma Detector Response and Analysis Software (GADRAS), a one-dimensional deterministic radiation transport code capable of calculating gamma-ray spectra using physics-based detector response functions, was developed at Sandia National Laboratories. The nuisance-rejection spectral comparison ratio anomaly detection algorithm (NSCRAD), developed at Pacific Northwest National Laboratory, uses spectral comparison ratios to detect deviations from benign medical and NORM radiation sources and can work despite the strong presence of NORM and/or medical sources. RSL has developed its own wavelet-based gamma energy spectral anomaly detection algorithm, called WAVRAD. Test results and relative merits of these different algorithms will be discussed and demonstrated.
NASA Astrophysics Data System (ADS)
Senthil Kumar, A.; Keerthi, V.; Manjunath, A. S.; Werff, Harald van der; Meer, Freek van der
2010-08-01
Classification of hyperspectral images has been receiving considerable attention, with many new applications reported from the commercial and military sectors. Hyperspectral images are composed of a large number of spectral channels and have the potential to deliver a great deal of information about a remotely sensed scene. However, in addition to high dimensionality, hyperspectral image classification is compounded by the coarse ground pixel size of the sensor, required to achieve an adequate signal-to-noise ratio within a fine spectral passband. As a result, multiple ground features jointly occupy a single pixel. Spectral mixture analysis typically begins with pixel classification using spectral matching techniques, followed by the use of spectral unmixing algorithms to estimate endmember abundance values in the pixel. Spectral matching techniques are analogous to supervised pattern recognition approaches and try to estimate the similarity between the spectral signatures of the pixel and a reference target. In this paper, we propose a spectral matching approach that combines two schemes: the variable interval spectral average (VISA) method and the spectral curve matching (SCM) method. The VISA method helps to detect transient spectral features at different scales of spectral windows, while the SCM method finds a match between these features of the pixel and one of the library spectra by least-squares fitting. We also compare the performance of the combined algorithm with other spectral matching techniques using simulated and AVIRIS hyperspectral data sets. Our results indicate that the proposed combination technique exhibits stronger performance than the other methods in the classification of both pure and mixed class pixels simultaneously.
Spatial compression algorithm for the analysis of very large multivariate images
Keenan, Michael R. [Albuquerque, NM]
2008-07-15
A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
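A minimal sketch of the idea of retaining only significant wavelet coefficients, assuming PyWavelets; the wavelet family, decomposition level, and keep fraction are illustrative choices, not the patented block algorithm. In the patented workflow, analysis would operate on the sparse coefficient matrix itself rather than on a reconstruction:

```python
import numpy as np
import pywt

def compress_image(img, wavelet="db2", level=2, keep=0.05):
    """Zero all but the largest `keep` fraction of wavelet coefficients."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    arr[np.abs(arr) < np.quantile(np.abs(arr), 1.0 - keep)] = 0.0
    kept = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(kept, wavelet)
```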
Cutrale, Francesco; Salih, Anya; Gratton, Enrico
2013-01-01
The phasor global analysis algorithm is common for fluorescence lifetime applications, but has only been recently proposed for spectral analysis. Here the phasor representation and fingerprinting is exploited in its second harmonic to determine the number and spectra of photo-activated states as well as their conversion dynamics. We follow the sequence of photo-activation of proteins over time by rapidly collecting multiple spectral images. The phasor representation of the cumulative images provides easy identification of the spectral signatures of each photo-activatable protein. PMID:24040513
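A minimal sketch of the spectral phasor transform, assuming a spectrum sampled on N channels; n = 2 gives the second harmonic exploited above:

```python
import numpy as np

def spectral_phasor(I, n=2):
    """(G, S) phasor coordinates of an intensity spectrum I at harmonic n."""
    k = np.arange(len(I))
    w = 2.0 * np.pi * n * k / len(I)
    total = I.sum()
    return (I * np.cos(w)).sum() / total, (I * np.sin(w)).sum() / total
```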
Color analysis and image rendering of woodblock prints with oil-based ink
NASA Astrophysics Data System (ADS)
Horiuchi, Takahiko; Tanimoto, Tetsushi; Tominaga, Shoji
2012-01-01
This paper proposes a method for analyzing the color characteristics of woodblock prints made with oil-based ink and for rendering realistic images based on camera data. The analysis of woodblock prints reveals characteristic features in comparison with oil paintings: 1) a woodblock print can be divided into several cluster areas, each with similar surface spectral reflectance; and 2) strong specular reflection, caused by overlapping paints, arises only in specific cluster areas. By considering these properties, we develop an effective rendering algorithm by modifying our previous algorithm for oil paintings. A set of surface spectral reflectances of a woodblock print is represented using only a small number of average surface spectral reflectances and the registered scaling coefficients, whereas the previous algorithm for oil paintings required high-dimensional surface spectral reflectances at all pixels. In the rendering process, in order to reproduce the strong specular reflection in specific cluster areas, we use two sets of parameters in the Torrance-Sparrow model for cluster areas with and without strong specular reflection. An experiment on a woodblock print with oil-based ink demonstrates the feasibility of the proposed method.
Water quality parameter measurement using spectral signatures
NASA Technical Reports Server (NTRS)
White, P. E.
1973-01-01
Regression analysis is applied to the problem of measuring water quality parameters from remote sensing spectral signature data. The equations necessary to perform regression analysis are presented and methods of testing the strength and reliability of a regression are described. An efficient algorithm for selecting an optimal subset of the independent variables available for a regression is also presented.
Skylab S-191 spectrometer single spectral scan analysis program. [user manual
NASA Technical Reports Server (NTRS)
Downes, E. L.
1974-01-01
Documentation and user information for the S-191 single spectral scan analysis program are reported. A breakdown of the computational algorithms is supplied, followed by the program listing and examples of sample output. A copy of the flow chart which describes the driver routine in the body of the main program segment is included.
Novel Spectral Representations and Sparsity-Driven Algorithms for Shape Modeling and Analysis
NASA Astrophysics Data System (ADS)
Zhong, Ming
In this dissertation, we focus on extending classical spectral shape analysis by incorporating spectral graph wavelets and sparsity-seeking algorithms. Defined with the graph Laplacian eigenbasis, spectral graph wavelets (SGWs) are localized both in the vertex domain and in the graph spectral domain, and thus are very effective in describing local geometry. With a rich dictionary of elementary vectors and suitable sparsity constraints, a real-life signal can often be well approximated by a very sparse coefficient representation. The many successful applications of sparse signal representation in computer vision and image processing inspire us to explore the idea of employing sparse modeling techniques with dictionaries of spectral bases to solve various shape modeling problems. Conventional spectral mesh compression uses the eigenfunctions of the mesh Laplacian as shape bases, which are highly inefficient in representing local geometry. To ameliorate this, we advocate an innovative approach to 3D mesh compression using spectral graph wavelets as the dictionary to encode mesh geometry. The spectral graph wavelets are locally defined at individual vertices and can capture local shape information better than the Laplacian eigenbasis. The multi-scale SGWs form a redundant dictionary as shape basis, so we formulate the compression of 3D shape as a sparse approximation problem that can be readily handled by greedy pursuit algorithms. Surface inpainting refers to the completion or recovery of missing shape geometry based on the shape information that is currently available. We devise a new surface inpainting algorithm founded upon the theory and techniques of sparse signal recovery. Instead of estimating the missing geometry directly, our method finds a low-dimensional representation that describes the entire original shape. More specifically, we find that, for many shapes, the vertex coordinate function can be well approximated by a very sparse coefficient representation with respect to the dictionary comprising its Laplacian eigenbasis, and it is then possible to recover this sparse representation from partial measurements of the original shape. Taking advantage of the sparsity cue, we advocate a novel variational approach for surface inpainting, integrating data fidelity constraints on the shape domain with coefficient sparsity constraints on the transformed domain. Because of the powerful properties of the Laplacian eigenbasis, the inpainting results of our method tend to be globally coherent with the remaining shape. Informative and discriminative feature descriptors are vital in qualitative and quantitative shape analysis for a large variety of graphics applications. We advocate novel strategies to define generalized, user-specified features on shapes. Our new region descriptors are primarily built upon the coefficients of spectral graph wavelets, which are both multi-scale and multi-level in nature, containing both local and global information. Based on our novel spectral feature descriptor, we develop a user-specified feature detection framework and a tensor-based shape matching algorithm. Through various experiments, we demonstrate the competitive performance of our proposed methods and the great potential of spectral bases and sparsity-driven methods for shape modeling.
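The compression step above is formulated as a sparse approximation problem solved by greedy pursuit. A minimal orthogonal matching pursuit sketch over a generic redundant dictionary (columns as atoms); the SGW dictionary construction itself is omitted:

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Greedy (orthogonal matching pursuit) sparse approximation of y
    by columns of the redundant dictionary D."""
    residual, support = y.astype(float).copy(), []
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Re-fit all selected atoms jointly, then update the residual.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef
```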
NASA Astrophysics Data System (ADS)
Lazcano, R.; Madroñal, D.; Fabelo, H.; Ortega, S.; Salvador, R.; Callicó, G. M.; Juárez, E.; Sanz, C.
2017-10-01
Hyperspectral Imaging (HI) assembles high-resolution spectral information from hundreds of narrow bands across the electromagnetic spectrum, generating 3D data cubes in which each spatial pixel gathers the full spectral reflectance information of the scene point it covers. As a result, each image comprises a large volume of data, which turns its processing into a challenge as performance requirements are continuously tightened. For instance, new HI applications demand real-time responses; hence, parallel processing becomes a necessity, and the intrinsic parallelism of the algorithms must be exploited. In this paper, a spatial-spectral classification approach has been implemented using a dataflow language known as RVC-CAL. This language represents a system as a set of functional units, and its main advantage is that it simplifies the parallelization process by mapping the different blocks over different processing units. The spatial-spectral classification approach aims at refining the classification results previously obtained by using a K-Nearest Neighbors (KNN) filtering process, in which both the pixel spectral value and the spatial coordinates are considered. To do so, KNN needs two inputs: a one-band representation of the hyperspectral image and the classification results provided by a pixel-wise classifier. Thus, the spatial-spectral classification algorithm is divided into three stages: a Principal Component Analysis (PCA) algorithm for computing the one-band representation of the image, a Support Vector Machine (SVM) classifier, and the KNN-based filtering algorithm. The parallelization of these algorithms shows promising results in terms of computational time, as mapping them over different cores presents a speedup of 2.69x when using 3 cores. Consequently, experimental results demonstrate that real-time processing of hyperspectral images is achievable.
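A minimal sketch of the first two stages of the chain described above (PCA one-band representation, pixel-wise SVM), assuming scikit-learn; the cube layout, mask, and labels are hypothetical, and the KNN filtering stage is omitted:

```python
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def classify_cube(cube, train_mask, train_labels):
    """cube: (rows, cols, bands); train_mask: boolean (rows, cols);
    train_labels: labels for the True pixels of train_mask, in raster order."""
    r, c, b = cube.shape
    X = cube.reshape(-1, b)
    one_band = PCA(n_components=1).fit_transform(X).reshape(r, c)
    labels = SVC().fit(X[train_mask.ravel()], train_labels).predict(X)
    return one_band, labels.reshape(r, c)   # the two inputs of the KNN stage
```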
Li, Zenghui; Xu, Bin; Yang, Jian; Song, Jianshe
2015-01-01
This paper focuses on suppressing spectral overlap for sub-band spectral estimation, with which we can greatly decrease the computational complexity of existing spectral estimation algorithms, such as nonlinear least squares spectral analysis and non-quadratic regularized sparse representation. First, our study shows that the nominal ability of a high-order analysis filter to suppress spectral overlap is greatly weakened when filtering a finite-length sequence, because many meaningless zeros are used as samples in the convolution operations. Next, an extrapolation-based filtering strategy is proposed to produce a series of estimates as substitutes for the zeros and to recover the suppression ability. Meanwhile, a steady-state Kalman predictor is applied to perform a linearly optimal extrapolation. Finally, several typical methods for spectral analysis are applied to demonstrate the effectiveness of the proposed strategy. PMID:25609038
Alternative techniques for high-resolution spectral estimation of spectrally encoded endoscopy
NASA Astrophysics Data System (ADS)
Mousavi, Mahta; Duan, Lian; Javidi, Tara; Ellerbee, Audrey K.
2015-09-01
Spectrally encoded endoscopy (SEE) is a minimally invasive optical imaging modality capable of fast confocal imaging of internal tissue structures. Modern SEE systems use coherent sources to image deep within the tissue, and data are processed similarly to optical coherence tomography (OCT); however, standard processing of SEE data via the Fast Fourier Transform (FFT) leads to degradation of the axial resolution as the bandwidth of the source shrinks, resulting in a well-known trade-off between speed and axial resolution. Recognizing the limitation of the FFT as a general spectral estimation algorithm that takes into account only the samples collected by the detector, in this work we investigate alternative high-resolution spectral estimation algorithms that exploit information such as sparsity and the general region position of the bulk sample to improve the axial resolution of processed SEE data. We validate the performance of these algorithms using both MATLAB simulations and analysis of experimental results generated from a home-built OCT system to simulate an SEE system with variable scan rates. Our results open a new door towards using non-FFT algorithms to generate higher quality (i.e., higher resolution) SEE images at correspondingly fast scan rates, resulting in systems that are more accurate and more comfortable for patients due to the reduced imaging time.
Single-hole spectral function and spin-charge separation in the t-J model
NASA Astrophysics Data System (ADS)
Mishchenko, A. S.; Prokof'ev, N. V.; Svistunov, B. V.
2001-07-01
Worm algorithm Monte Carlo simulations of the hole Green function, with subsequent spectral analysis, were performed for 0.1 <= J/t <= 0.4 on lattices with up to L×L = 32×32 sites at temperatures as low as T = J/40, and apparently yield the hole spectral function in the thermodynamic limit. Spectral analysis reveals a δ-function-sharp quasiparticle peak at the lower edge of the spectrum that is incompatible with a power-law singularity and thus rules out the possibility of spin-charge separation in this parameter range. The spectral continuum features two peaks separated by a gap of ~4-5 t.
NASA Astrophysics Data System (ADS)
Witharana, Chandi; LaRue, Michelle A.; Lynch, Heather J.
2016-03-01
Remote sensing is a rapidly developing tool for mapping the abundance and distribution of Antarctic wildlife. While both panchromatic and multispectral imagery have been used in this context, image fusion techniques have received little attention. We tasked seven widely used fusion algorithms (Ehlers fusion, hyperspherical color space fusion, high-pass fusion, principal component analysis (PCA) fusion, Gram-Schmidt fusion, University of New Brunswick fusion, and wavelet-PCA fusion) with resolution-enhancing a series of single-date QuickBird-2 and Worldview-2 image scenes comprising penguin guano, seals, and vegetation. Fused images were assessed for spectral and spatial fidelity using a variety of quantitative quality indicators and visual inspection methods. Our visual evaluation selected the high-pass fusion algorithm and the University of New Brunswick fusion algorithm as best for manual wildlife detection, while the quantitative assessment suggested the Gram-Schmidt fusion algorithm and the University of New Brunswick fusion algorithm as best for automated classification. The hyperspherical color space fusion algorithm exhibited mediocre results in terms of spectral and spatial fidelity. The PCA fusion algorithm showed spatial superiority at the expense of spectral inconsistencies. The Ehlers fusion algorithm and the wavelet-PCA algorithm showed the weakest performance. As remote sensing becomes a more routine method of surveying Antarctic wildlife, these benchmarks will provide guidance for image fusion and pave the way for more standardized products for specific types of wildlife surveys.
Wang, Wei; Song, Wei-Guo; Liu, Shi-Xing; Zhang, Yong-Ming; Zheng, Hong-Yang; Tian, Wei
2011-04-01
An improved method for detecting clouds that combines K-means clustering and a multi-spectral threshold approach is described. On the basis of landmark spectrum analysis, MODIS data are first categorized into two major types by the K-means method. The first class includes clouds, smoke, and snow, and the second class includes vegetation, water, and land. A multi-spectral threshold detection is then applied to the first class to eliminate interference such as smoke and snow. The method is tested with MODIS data at different times and under different underlying surface conditions. Visual inspection of the results shows that the algorithm can effectively detect small areas of cloud pixels and exclude the interference of the underlying surface, providing a good foundation for a subsequent fire detection approach.
Multi-species Identification of Polymorphic Peptide Variants via Propagation in Spectral Networks*
Bandeira, Nuno
2016-01-01
Peptide and protein identification remains challenging in organisms with poorly annotated or rapidly evolving genomes, as are commonly encountered in environmental or biofuels research. Such limitations render tandem mass spectrometry (MS/MS) database search algorithms ineffective, as they lack the corresponding sequences required for peptide-spectrum matching. We address this challenge with the spectral networks approach to (1) match spectra of orthologous peptides across multiple related species and then (2) propagate peptide annotations from identified to unidentified spectra. Here we present algorithms to assess the statistical significance of spectral alignments (Align-GF), reduce the impurity in spectral networks, and accurately estimate the error rate in propagated identifications. Analyzing three related Cyanothece species, a model organism for biohydrogen production, spectral networks identified peptides from highly divergent sequences, in networks with dozens of variant peptides, including thousands of peptides in species lacking a sequenced genome. Our analysis further detected the presence of many novel putative peptides even in genomically characterized species, suggesting the possibility of gaps in our understanding of their proteomic and genomic expression. A web-based pipeline for spectral networks analysis is available at http://proteomics.ucsd.edu/software. PMID:27609420
Evaluation of Algorithms for Compressing Hyperspectral Data
NASA Technical Reports Server (NTRS)
Cook, Sid; Harsanyi, Joseph; Faber, Vance
2003-01-01
With EO-1 Hyperion in orbit, NASA is showing its continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost-effective communication and data handling systems. Lockheed Martin, with considerable experience in spacecraft design and in developing special-purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), which has an extensive heritage in HSI spectral compression, and with Mapping Science (MSI), which provides JPEG 2000 spatial compression expertise, to develop a real-time and intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor > 100, while retaining the spectral and spatial fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our compression algorithms leverage commercial-off-the-shelf (COTS) spectral and spatial exploitation algorithms. We are currently evaluating these compression algorithms using statistical analysis together with assessments by NASA scientists. We are also developing special-purpose processors for executing these algorithms onboard a spacecraft.
Li, Qingli; Zhang, Jingfa; Wang, Yiting; Xu, Guoteng
2009-12-01
A molecular spectral imaging system has been developed based on microscopy and spectral imaging technology. The system is capable of acquiring molecular spectral images from 400 nm to 800 nm in 2 nm wavelength increments. The basic principles, instrumental system, and system calibration method, as well as its application to calculating stain uptake by tissues, are introduced. As a case study, the system is used for investigating the pathogenesis of diabetic retinopathy and evaluating the therapeutic effects of erythropoietin. Molecular spectral images of retinal sections of normal, diabetic, and treated rats were collected and analyzed. The typical transmittance curves of positive spots stained for albumin and advanced glycation end products are retrieved from the molecular spectral data with the spectral response calibration algorithm. To explore and evaluate the protective effect of erythropoietin (EPO) on retinal albumin leakage in streptozotocin-induced diabetic rats, an algorithm based on the Beer-Lambert law is presented. The algorithm can assess the uptake by histologic retinal sections of stains used in quantitative pathology to label albumin leakage and advanced glycation end product formation. Experimental results show that the system is helpful for the ophthalmologist in revealing the pathogenesis of diabetic retinopathy and exploring the protective effect of erythropoietin on the retinal cells of diabetic rats. It also highlights the potential of molecular spectral imaging technology to provide more effective and reliable diagnostic criteria in pathology.
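A minimal sketch of the Beer-Lambert step described above, converting measured transmittance to absorbance and integrating it over a stain absorption band as a proxy for uptake; the band limits are illustrative:

```python
import numpy as np

def stain_uptake(transmittance, wavelengths, band=(500.0, 600.0)):
    """Integrate absorbance A = -log10(T) over a stain absorption band."""
    A = -np.log10(np.clip(transmittance, 1e-6, None))  # avoid log10(0)
    sel = (wavelengths >= band[0]) & (wavelengths <= band[1])
    return np.trapz(A[sel], wavelengths[sel])
```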
Saliency detection algorithm based on LSC-RC
NASA Astrophysics Data System (ADS)
Wu, Wei; Tian, Weiye; Wang, Ding; Luo, Xin; Wu, Yingfei; Zhang, Yu
2018-02-01
Image saliency refers to the most important region of an image, the region that attracts human visual attention and response. Preferentially allocating computational resources to such significant regions is of great value in improving image analysis and synthesis. As a preprocessing step for other tasks in the image processing field, saliency detection has wide applications in image retrieval and image segmentation. Among these applications, super-pixel-based saliency detection built on linear spectral clustering (LSC) has achieved good results. The saliency detection algorithm proposed in this paper improves on the regional contrast (RC) method by replacing its region-formation step with super-pixel blocks produced by linear spectral clustering. Combined with recent deep learning methods, the accuracy of salient-region detection improves considerably. Finally, comparative tests demonstrate the superiority and feasibility of the super-pixel segmentation saliency detection algorithm based on linear spectral clustering.
Marker-Based Hierarchical Segmentation and Classification Approach for Hyperspectral Imagery
NASA Technical Reports Server (NTRS)
Tarabalka, Yuliya; Tilton, James C.; Benediktsson, Jon Atli; Chanussot, Jocelyn
2011-01-01
The Hierarchical SEGmentation (HSEG) algorithm, which is a combination of hierarchical step-wise optimization and spectral clustering, has given good performance for hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. First, pixelwise classification is performed and the most reliably classified pixels are selected as markers, with the corresponding class labels. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. The experimental results show that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for hyperspectral image analysis.
NASA Astrophysics Data System (ADS)
Hsiao, Y. R.; Tsai, C.
2017-12-01
As the WHO Air Quality Guideline indicates, ambient air pollution exposes world populations to the threat of fatal symptoms (e.g., heart disease, lung cancer, asthma), raising concerns about air pollution sources and related factors. This study presents a novel approach to investigating the multiscale variations of PM2.5 in southern Taiwan over the past decade, together with four meteorological influencing factors (temperature, relative humidity, precipitation, and wind speed), based on the Noise-assisted Multivariate Empirical Mode Decomposition (NAMEMD) algorithm, Hilbert Spectral Analysis (HSA), and the Time-Dependent Intrinsic Correlation (TDIC) method. The NAMEMD algorithm is a fully data-driven approach designed for nonlinear and nonstationary multivariate signals, and is used to decompose multivariate signals into a collection of channels of Intrinsic Mode Functions (IMFs). The TDIC method is an EMD-based method that uses a set of sliding window sizes to quantify localized correlation coefficients for multiscale signals. With the alignment property and quasi-dyadic filter bank of the NAMEMD algorithm, one is able to produce the same number of IMFs for all variables and estimate the cross-correlation more accurately. The performance of the spectral representation of the NAMEMD-HSA method is compared with Complementary Ensemble Empirical Mode Decomposition/Hilbert Spectral Analysis (CEEMD-HSA) and wavelet analysis. The NAMEMD-based TDIC analysis is then compared with CEEMD-based TDIC analysis and traditional correlation analysis.
Isotopic determination of uranium in soil by laser induced breakdown spectroscopy
Chan, George C. -Y.; Choi, Inhee; Mao, Xianglei; ...
2016-03-26
Laser-induced breakdown spectroscopy (LIBS) operated at ambient pressure has been evaluated for isotopic analysis of uranium in real-world samples such as soil, with U concentrations at single-digit percentage levels. The study addresses the requirements for spectral decomposition of 235U and 238U atomic emission peaks that are only partially resolved. Although non-linear least-squares fitting algorithms are typically able to locate the optimal combination of fitting parameters that best describes the experimental spectrum even when all fitting parameters are treated as free independent variables, the analytical results of such an unconstrained free-parameter approach are ambiguous. In this work, five spectral decomposition algorithms were examined, with different known physical properties (e.g., isotopic splitting, hyperfine structure) of the spectral lines sequentially incorporated into the candidate algorithms as constraints. It was found that incorporation of such spectral-line constraints into the decomposition algorithm is essential for the best isotopic analysis. The isotopic abundance of 235U was determined from a simple two-component Lorentzian fit of the U II 424.437 nm spectral profile. For six replicate measurements, each with only fifteen laser shots, on a soil sample with a U concentration of 1.1% w/w, the determined 235U isotopic abundance was (64.6 ± 4.8)%, in good agreement with the certified value of 64.4%. Another studied U line, U I 682.691 nm, possesses hyperfine structure that is comparatively broad, a significant fraction of the isotopic shift. Thus, 235U isotopic analysis with this U I line was performed with spectral decomposition involving individual hyperfine components. For the soil sample with 1.1% w/w U, the determined 235U isotopic abundance was (60.9 ± 2.0)%, which exhibited a relative bias of about 6% from the certified value. The bias was attributed to the spectral resolution of the measurement system: the measured line width for this U I line was larger than its isotopic splitting. In conclusion, although not the best emission line for isotopic analysis, this U I emission line is sensitive for elemental analysis, with a detection limit of 500 ppm U in the soil matrix; the detection limit for the U II 424.437 nm line was 2000 ppm.
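A minimal sketch of a constrained two-component Lorentzian fit of the kind described above, assuming SciPy; the fixed isotope splitting value and the initial guesses are illustrative assumptions, not the paper's calibration:

```python
import numpy as np
from scipy.optimize import curve_fit

SPLIT = 0.025  # assumed 235U-238U splitting of the U II 424.437 nm line, in nm

def two_lorentzians(x, a235, a238, x238, w):
    """Two Lorentzians of common width w, offset by the fixed isotope shift."""
    lor = lambda x0, a: a * (w / 2) ** 2 / ((x - x0) ** 2 + (w / 2) ** 2)
    return lor(x238 + SPLIT, a235) + lor(x238, a238)

def u235_fraction(x, y):
    p0 = [y.max(), y.max(), x[np.argmax(y)], 0.02]        # rough initial guess
    (a235, a238, *_), _ = curve_fit(two_lorentzians, x, y, p0=p0)
    return a235 / (a235 + a238)   # equal widths: amplitude ratio = area ratio
```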
Frequency-domain nonlinear regression algorithm for spectral analysis of broadband SFG spectroscopy.
He, Yuhan; Wang, Ying; Wang, Jingjing; Guo, Wei; Wang, Zhaohui
2016-03-01
The resonant spectral bands of broadband sum frequency generation (BB-SFG) spectra are often distorted by the nonresonant portion and the lineshapes of the laser pulses. A frequency-domain nonlinear regression (FDNLR) algorithm is proposed to retrieve the first-order polarization induced by the infrared pulse and to improve the analysis of SFG spectra through simultaneous fitting of a series of time-resolved BB-SFG spectra. The principle of FDNLR is presented, and its validity and reliability are tested through the analysis of simulated and measured SFG spectra. The relative phase, dephasing time, and lineshapes of the resonant vibrational SFG bands can be retrieved without any preset assumptions about the SFG bands or the incident laser pulses.
NASA Astrophysics Data System (ADS)
Chen, Yi; Ma, Yong; Lu, Zheng; Peng, Bei; Chen, Qin
2011-08-01
In the field of anti-illicit drug applications, many suspicious mixture samples may consist of various drug components—for example, a mixture of methamphetamine, heroin, and amoxicillin—which makes spectral identification very difficult. A terahertz spectroscopic quantitative analysis method using an adaptive-range micro-genetic algorithm with a variable internal population (ARVIPɛμGA) has been proposed. Five mixture cases are discussed using ARVIPɛμGA-driven quantitative terahertz spectroscopic analysis in this paper. The simulation results agree with previous experimental results and with results obtained using other experimental and numerical techniques, suggesting that the proposed technique has potential applications in the terahertz spectral identification of drug mixture components.
NASA Astrophysics Data System (ADS)
Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed
2016-03-01
Different chemometric models were applied for the quantitative analysis of amoxicillin (AMX) and flucloxacillin (FLX) in their binary mixtures, namely, partial least squares (PLS), spectral residual augmented classical least squares (SRACLS), concentration residual augmented classical least squares (CRACLS), and artificial neural networks (ANNs). All methods were applied with and without a variable selection procedure (genetic algorithm, GA). The methods were used for the quantitative analysis of the drugs in laboratory-prepared mixtures and in a real market sample by processing the UV spectral data. More robust and simpler models were obtained by applying GA. The proposed methods were found to be rapid, simple, and to require no preliminary separation steps.
Model-based spectral estimation of Doppler signals using parallel genetic algorithms.
Solano González, J; Rodríguez Vázquez, K; García Nocetti, D F
2000-05-01
Conventional spectral analysis methods use a fast Fourier transform (FFT) on consecutive or overlapping windowed data segments. For Doppler ultrasound signals, this approach suffers from inadequate frequency resolution due to the time-segment duration and the non-stationary characteristics of the signals. Parametric or model-based estimators can give significant improvements in time-frequency resolution at the expense of higher computational complexity. This work describes an approach that implements, in real time, a parametric spectral estimation method using genetic algorithms (GAs) to find the optimum set of parameters for the adaptive filter that minimises the error function. The aim is to reduce the computational complexity of the conventional algorithm by using the simplicity associated with GAs and exploiting their parallel characteristics. This allows the implementation of higher-order filters, increasing the spectrum resolution, and opens a greater scope for using more complex methods.
Automated computation of autonomous spectral submanifolds for nonlinear modal analysis
NASA Astrophysics Data System (ADS)
Ponsioen, Sten; Pedergnana, Tiemo; Haller, George
2018-04-01
We discuss an automated computational methodology for computing two-dimensional spectral submanifolds (SSMs) in autonomous nonlinear mechanical systems of arbitrary degrees of freedom. In our algorithm, SSMs, the smoothest nonlinear continuations of modal subspaces of the linearized system, are constructed up to arbitrary orders of accuracy, using the parameterization method. An advantage of this approach is that the construction of the SSMs does not break down when the SSM folds over its underlying spectral subspace. A further advantage is an automated a posteriori error estimation feature that enables a systematic increase in the orders of the SSM computation until the required accuracy is reached. We find that the present algorithm provides a major speed-up, relative to numerical continuation methods, in the computation of backbone curves, especially in higher-dimensional problems. We illustrate the accuracy and speed of the automated SSM algorithm on lower- and higher-dimensional mechanical systems.
Research on hyperspectral dynamic scene and image sequence simulation
NASA Astrophysics Data System (ADS)
Sun, Dandan; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei
2016-10-01
This paper presents a simulation method for hyper-spectral dynamic scenes and image sequences, intended for hyper-spectral equipment evaluation and target detection algorithm development. Because of its high spectral resolution, strong band continuity, resistance to interference, and other advantages, hyper-spectral imaging technology has developed rapidly in recent years and is widely used in many areas such as optoelectronic target detection, military defense, and remote sensing systems. Digital imaging simulation, a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyper-spectral imaging equipment with lower development cost and a shorter development period. Meanwhile, visual simulation can produce large amounts of original image data under various conditions for hyper-spectral image feature extraction and classification algorithms. Based on a radiation physics model and material characteristic parameters, this paper proposes a method for generating digital scenes. By building multiple sensor models with different bands and bandwidths, hyper-spectral scenes in the visible, MWIR, and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm, and 0.1 μm, are simulated. The final dynamic scenes are highly realistic and run in real time, at frame rates up to 100 Hz. Image sequences are obtained by saving all the scene gray data from the same viewpoint. The analysis shows that, whether in the infrared or the visible band, the grayscale variations of the simulated hyper-spectral images are consistent with the theoretical analysis.
Community structure from spectral properties in complex networks
NASA Astrophysics Data System (ADS)
Servedio, V. D. P.; Colaiori, F.; Capocci, A.; Caldarelli, G.
2005-06-01
We analyze the spectral properties of complex networks, focusing on their relation to the community structure, and develop an algorithm based on correlations among components of different eigenvectors. The algorithm applies to general weighted networks and, in a suitably modified version, to the case of directed networks. Our method allows us to correctly detect communities in sharply partitioned graphs, but it is also useful for the analysis of more complex networks without a well-defined cluster structure, such as social and information networks. As an example, we test the algorithm on a large-scale data set from a psychological experiment on free word association, where it proves successful both in clustering words and in uncovering mental association patterns.
NASA Astrophysics Data System (ADS)
Mirapeix, J.; García-Allende, P. B.; Cobo, A.; Conde, O.; López-Higuera, J. M.
2007-07-01
A new spectral processing technique designed for application in the on-line detection and classification of arc-welding defects is presented in this paper. A non-invasive fiber sensor embedded within a TIG torch collects the plasma radiation originated during the welding process. The spectral information is then processed in two consecutive stages. A compression algorithm is first applied to the data, allowing real-time analysis. The selected spectral bands are then used to feed a classification algorithm, which is demonstrated to provide efficient weld defect detection and classification. The results obtained with the proposed technique are compared to a similar processing scheme presented in a previous paper, showing an improvement in the performance of the monitoring system.
Weighted graph cuts without eigenvectors a multilevel approach.
Dhillon, Inderjit S; Guan, Yuqiang; Kulis, Brian
2007-11-01
A variety of clustering algorithms have recently been proposed to handle data that is not linearly separable; spectral clustering and kernel k-means are two of the main methods. In this paper, we discuss an equivalence between the objective functions used in these seemingly different methods--in particular, a general weighted kernel k-means objective is mathematically equivalent to a weighted graph clustering objective. We exploit this equivalence to develop a fast, high-quality multilevel algorithm that directly optimizes various weighted graph clustering objectives, such as the popular ratio cut, normalized cut, and ratio association criteria. This eliminates the need for any eigenvector computation for graph clustering problems, which can be prohibitive for very large graphs. Previous multilevel graph partitioning methods, such as Metis, have suffered from the restriction of equal-sized clusters; our multilevel algorithm removes this restriction by using kernel k-means to optimize weighted graph cuts. Experimental results show that our multilevel algorithm outperforms a state-of-the-art spectral clustering algorithm in terms of speed, memory usage, and quality. We demonstrate that our algorithm is applicable to large-scale clustering tasks such as image segmentation, social network analysis and gene network analysis.
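The equivalence at the heart of the paper can be stated compactly. A sketch in the common notation (weights w_i, feature map φ, kernel matrix K, diagonal weight matrix W, weighted orthonormal indicator matrix Y); signs and normalizations follow the usual presentation and may differ from the paper's:

```latex
\begin{align*}
  \mathcal{D}(\{\pi_c\}_{c=1}^{k})
    &= \sum_{c=1}^{k} \sum_{i \in \pi_c} w_i
       \bigl\lVert \phi(x_i) - m_c \bigr\rVert^2,
  \qquad
  m_c = \frac{\sum_{j \in \pi_c} w_j \,\phi(x_j)}{\sum_{j \in \pi_c} w_j},
  \\[2pt]
  \mathcal{D}(\{\pi_c\}_{c=1}^{k})
    &= \operatorname{trace}(W K)
     - \operatorname{trace}\bigl(Y^{\top} W^{1/2} K W^{1/2} Y\bigr).
\end{align*}
```

Since trace(WK) is constant, minimizing the weighted kernel k-means objective is a trace maximization, the same relaxed form optimized by ratio cut, normalized cut, and ratio association for suitable choices of K and W.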
EXPLORING FUNCTIONAL CONNECTIVITY IN FMRI VIA CLUSTERING.
Venkataraman, Archana; Van Dijk, Koene R A; Buckner, Randy L; Golland, Polina
2009-04-01
In this paper we investigate the use of data driven clustering methods for functional connectivity analysis in fMRI. In particular, we consider the K-Means and Spectral Clustering algorithms as alternatives to the commonly used Seed-Based Analysis. To enable clustering of the entire brain volume, we use the Nyström Method to approximate the necessary spectral decompositions. We apply K-Means, Spectral Clustering and Seed-Based Analysis to resting-state fMRI data collected from 45 healthy young adults. Without placing any a priori constraints, both clustering methods yield partitions that are associated with brain systems previously identified via Seed-Based Analysis. Our empirical results suggest that clustering provides a valuable tool for functional connectivity analysis.
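A minimal sketch of the Nyström approximation used above, assuming an RBF affinity over voxel feature vectors; the landmark count, bandwidth, and the unnormalized extension formula are illustrative simplifications:

```python
import numpy as np

def nystrom_eigvecs(X, m, k, gamma=1.0, seed=0):
    """Approximate the top-k eigenvectors of the full RBF affinity matrix
    of X (n x d) from an m-landmark subsample (Nystrom extension)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    d2 = ((X[:, None, :] - X[None, idx, :]) ** 2).sum(-1)  # n x m distances
    C = np.exp(-gamma * d2)          # affinity between all points and landmarks
    vals, vecs = np.linalg.eigh(C[idx])                    # m x m landmark block
    vals, vecs = vals[-k:], vecs[:, -k:]                   # top-k eigenpairs
    return C @ vecs / vals           # extend landmark eigenvectors to all n points
```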
Arc-Welding Spectroscopic Monitoring based on Feature Selection and Neural Networks.
Garcia-Allende, P Beatriz; Mirapeix, Jesus; Conde, Olga M; Cobo, Adolfo; Lopez-Higuera, Jose M
2008-10-21
A new spectral processing technique designed for application in the on-line detection and classification of arc-welding defects is presented in this paper. A noninvasive fiber sensor embedded within a TIG torch collects the plasma radiation originated during the welding process. The spectral information is then processed in two consecutive stages. A compression algorithm is first applied to the data, allowing real-time analysis. The selected spectral bands are then used to feed a classification algorithm, which will be demonstrated to provide an efficient weld defect detection and classification. The results obtained with the proposed technique are compared to a similar processing scheme presented in previous works, giving rise to an improvement in the performance of the monitoring system.
NASA Astrophysics Data System (ADS)
Sitko, Rafał
2008-11-01
Knowledge of the X-ray tube spectral distribution is necessary in theoretical methods of matrix correction, i.e., in both fundamental parameter (FP) methods and theoretical influence coefficient algorithms. Thus, the influence of the X-ray tube distribution on the accuracy of the analysis of thin films and bulk samples is presented. The calculations are performed using experimental X-ray tube spectra taken from the literature and theoretical X-ray tube spectra evaluated by three different algorithms, proposed by Pella et al. (X-Ray Spectrom. 14 (1985) 125-135), Ebel (X-Ray Spectrom. 28 (1999) 255-266), and Finkelshtein and Pavlova (X-Ray Spectrom. 28 (1999) 27-32). In this study, the Fe-Cr-Ni system is selected as an example, and the calculations are performed for X-ray tubes commonly applied in X-ray fluorescence analysis (XRF), i.e., Cr, Mo, Rh, and W. The influence of the X-ray tube spectra on FP analysis is evaluated when quantification is performed using various types of calibration samples. FP analysis of bulk samples is performed using pure-element bulk standards and multielement bulk standards similar to the analyzed material, whereas for FP analysis of thin films, bulk and thin pure-element standards are used. For the evaluation of the influence of X-ray tube spectra on XRF analysis performed by theoretical influence coefficient methods, two algorithms for bulk samples are selected, i.e., the Claisse-Quintin (Can. Spectrosc. 12 (1967) 129-134) and COLA algorithms (G.R. Lachance, paper presented at the International Conference on Industrial Inorganic Elemental Analysis, Metz, France, June 3, 1981), and two algorithms (constant and linear coefficients) for thin films recently proposed by Sitko (X-Ray Spectrom. 37 (2008) 265-272).
Radionuclide identification algorithm for organic scintillator-based radiation portal monitor
NASA Astrophysics Data System (ADS)
Paff, Marc Gerrit; Di Fulvio, Angela; Clarke, Shaun D.; Pozzi, Sara A.
2017-03-01
We have developed an algorithm for on-the-fly radionuclide identification for radiation portal monitors using organic scintillation detectors. The algorithm was demonstrated on experimental data acquired with our pedestrian portal monitor on moving special nuclear material and industrial sources at a purpose-built radiation portal monitor testing facility. The experimental data also included common medical isotopes. The algorithm takes the power spectral density of the cumulative distribution function of the measured pulse height distributions and matches these to reference spectra using a spectral angle mapper. F-score analysis showed that the new algorithm exhibited significant performance improvements over previously implemented radionuclide identification algorithms for organic scintillators. Reliable on-the-fly radionuclide identification would help portal monitor operators more effectively screen out the hundreds of thousands of nuisance alarms they encounter annually due to recent nuclear-medicine patients and cargo containing naturally occurring radioactive material. Portal monitor operators could instead focus on the rare but potentially high impact incidents of nuclear and radiological material smuggling detection for which portal monitors are intended.
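A minimal sketch of the matching chain described above (pulse height distribution, then its cumulative distribution function, then the power spectral density, then the spectral angle against each reference); the reference library is hypothetical and must be built with the same signature function:

```python
import numpy as np

def signature(pulse_heights, bins=128):
    """Pulse height distribution -> CDF -> power spectral density."""
    hist, _ = np.histogram(pulse_heights, bins=bins)
    cdf = np.cumsum(hist) / hist.sum()
    return np.abs(np.fft.rfft(cdf)) ** 2

def spectral_angle(a, b):
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def identify(pulse_heights, references):
    """references: dict mapping nuclide name -> signature(...) array."""
    sig = signature(pulse_heights)
    return min(references, key=lambda name: spectral_angle(sig, references[name]))
```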
NASA Astrophysics Data System (ADS)
Scanlan, Neil W.; Schott, John R.; Brown, Scott D.
2004-01-01
Synthetic imagery has traditionally been used to support sensor design by enabling design engineers to pre-evaluate image products during the design and development stages. Increasingly, exploitation analysts are looking to synthetic imagery as a way to develop and test exploitation algorithms before image data are available from new sensors. Even when sensors are available, synthetic imagery can significantly aid in algorithm development by providing a wide range of "ground truthed" images with varying illumination, atmospheric, viewing and scene conditions. One limitation of synthetic data is that the background variability is often too bland. It does not exhibit the spatial and spectral variability present in real data. In this work, four fundamentally different texture modeling algorithms will first be implemented as necessary into the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model environment. Two of the models to be tested are variants of a statistical Z-Score selection model, while the remaining two involve a texture synthesis and a spectral end-member fractional abundance map approach, respectively. A detailed comparative performance analysis of each model will then be carried out on several texturally significant regions of the resultant synthetic hyperspectral imagery. The quantitative assessment of each model will utilize a set of three performance metrics that have been derived from spatial Gray Level Co-Occurrence Matrix (GLCM) analysis, hyperspectral Signal-to-Clutter Ratio (SCR) measures, and a new concept termed the Spectral Co-Occurrence Matrix (SCM) metric which permits the simultaneous measurement of spatial and spectral texture. Previous research efforts on the validation and performance analysis of texture characterization models have been largely qualitative in nature based on conducting visual inspections of synthetic textures in order to judge the degree of similarity to the original sample texture imagery. The quantitative measures used in this study will in combination attempt to determine which texture characterization models best capture the correct statistical and radiometric attributes of the corresponding real image textures in both the spatial and spectral domains. The motivation for this work is to refine our understanding of the complexities of texture phenomena so that an optimal texture characterization model that can accurately account for these complexities can be eventually implemented into a synthetic image generation (SIG) model. Further, conclusions will be drawn regarding which of the candidate texture models are able to achieve realistic levels of spatial and spectral clutter, thereby permitting more effective and robust testing of hyperspectral algorithms in synthetic imagery.
NASA Astrophysics Data System (ADS)
Ma, Xiaoke; Wang, Bingbo; Yu, Liang
2018-01-01
Community detection is fundamental for revealing the structure-functionality relationship in complex networks, and it involves two issues: a quantitative function for community quality, and algorithms to discover communities. Despite significant research on each, few attempts have been made to establish a connection between the two. To attack this problem, a generalized quantification function is proposed for communities in weighted networks, providing a framework that unifies several well-known measures. We then prove that trace optimization of the proposed measure is equivalent to the objective functions of algorithms such as nonnegative matrix factorization, kernel K-means, and spectral clustering, which serves as a theoretical foundation for designing community detection algorithms. On the second issue, a semi-supervised spectral clustering algorithm is developed by exploiting this equivalence relation, combining nonnegative matrix factorization and spectral clustering. Unlike traditional semi-supervised algorithms, the partial supervision is integrated into the objective of the spectral algorithm. Finally, through extensive experiments on both artificial and real-world networks, we demonstrate that the proposed method improves the accuracy of traditional spectral algorithms in community detection.
Simplification of multiple Fourier series - An example of algorithmic approach
NASA Technical Reports Server (NTRS)
Ng, E. W.
1981-01-01
This paper describes one example of multiple Fourier series which originate from a problem of spectral analysis of time series data. The example is exercised here with an algorithmic approach which can be generalized for other series manipulation on a computer. The generalized approach is presently pursued towards applications to a variety of multiple series and towards a general purpose algorithm for computer algebra implementation.
NASA Astrophysics Data System (ADS)
Ozeki, Yasuyuki; Otsuka, Yoichi; Sato, Shuya; Hashimoto, Hiroyuki; Umemura, Wataru; Sumimura, Kazuhiko; Nishizawa, Norihiko; Fukui, Kiichi; Itoh, Kazuyoshi
2013-02-01
We have developed a video-rate stimulated Raman scattering (SRS) microscope with frame-by-frame wavenumber tunability. The system uses a 76-MHz picosecond Ti:sapphire laser and a subharmonically synchronized, 38-MHz Yb fiber laser. The Yb fiber laser pulses are spectrally sliced by a fast wavelength-tunable filter, which consists of a galvanometer scanner, a 4-f optical system and a reflective grating. The spectral resolution of the filter is ~3 cm-1. The wavenumber was scanned from 2800 to 3100 cm-1 with an arbitrary waveform synchronized to the frame trigger. For imaging, we introduced an 8-kHz resonant scanner and a galvanometer scanner. We were able to acquire SRS images of 500 x 480 pixels at a frame rate of 30.8 frames/s. These images were then processed by principal component analysis followed by a modified algorithm of independent component analysis. This algorithm allows blind separation of constituents with overlapping Raman bands from SRS spectral images. The independent component (IC) spectra give spectroscopic information, and IC images can be used to produce pseudo-color images. We demonstrate various label-free imaging modalities such as 2D spectral imaging of the rat liver, two-color 3D imaging of a vessel in the rat liver, and spectral imaging of several sections of intestinal villi in the mouse. Various structures in the tissues such as lipid droplets, cytoplasm, fibrous texture, nuclei, and water-rich regions were successfully visualized.
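The spectral decomposition stage can be approximated with off-the-shelf tools. The sketch below uses standard PCA followed by scikit-learn's FastICA in place of the authors' modified ICA; the function name and component count are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

def unmix_srs(cube, n_components=3):
    """cube: (n_wavenumbers, height, width) stack of SRS images.

    Returns per-pixel independent-component maps for pseudo-color display."""
    f, h, w = cube.shape
    X = cube.reshape(f, h * w).T                             # pixels x wavenumbers
    X_red = PCA(n_components=n_components).fit_transform(X)  # denoise / reduce
    maps = FastICA(n_components=n_components,
                   random_state=0).fit_transform(X_red)      # blind separation
    return maps.reshape(h, w, n_components)                  # one image per component
```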
SAR image change detection using watershed and spectral clustering
NASA Astrophysics Data System (ADS)
Niu, Ruican; Jiao, L. C.; Wang, Guiting; Feng, Jie
2011-12-01
A new method of change detection in SAR images based on spectral clustering is presented in this paper. Spectral clustering is employed to extract change information from a pair of images acquired over the same geographical area at different times. The watershed transform is first applied to segment the image into non-overlapping local regions, which reduces the computational complexity. Experimental results and analysis confirm the effectiveness of the proposed algorithm.
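A minimal sketch of this pipeline, assuming two co-registered SAR intensity images and using scikit-image and scikit-learn; the log-ratio difference operator and the rule for labeling the "changed" cluster are assumptions, not details given in the abstract.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed
from sklearn.cluster import SpectralClustering

def change_map(sar1, sar2):
    """Return a boolean change mask for two co-registered SAR images."""
    diff = np.abs(np.log(sar1 + 1e-6) - np.log(sar2 + 1e-6))  # log-ratio difference
    regions = watershed(sobel(diff))            # oversegment into local regions
    labels = np.unique(regions)
    feats = np.array([[diff[regions == r].mean()] for r in labels])
    grouping = SpectralClustering(n_clusters=2,
                                  random_state=0).fit_predict(feats)
    # call "changed" the cluster containing the region with the largest mean diff
    changed = labels[grouping == grouping[feats[:, 0].argmax()]]
    return np.isin(regions, changed)
```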
Spectral matching technology for light-emitting diode-based jaundice photodynamic therapy device
NASA Astrophysics Data System (ADS)
Gan, Ru-ting; Guo, Zhen-ning; Lin, Jie-ben
2015-02-01
The objective of this paper is to match the output spectrum of a light-emitting diode (LED)-based jaundice photodynamic therapy device (JPTD) to the in vivo bilirubin absorption spectrum, which was taken as the target spectrum. Following spectral constructing theory, a simple genetic algorithm was first proposed in this study as the spectral matching algorithm. The optimal combination ratios of LEDs were obtained, and the required number of LEDs was then calculated. The algorithm was also compared with existing spectral matching algorithms. The results show that it runs faster with higher efficiency: the switching time consumed is 2.06 s, and the fitted spectrum is very similar to the target spectrum, with a 98.15% matching degree. Thus, a blue-LED-based JPTD can replace the traditional blue fluorescent tube, and the spectral matching technology put forward here can be applied to light source spectral matching for jaundice photodynamic therapy and other medical phototherapy.
Cloud parallel processing of tandem mass spectrometry based proteomics data.
Mohammed, Yassene; Mostovenko, Ekaterina; Henneman, Alex A; Marissen, Rob J; Deelder, André M; Palmblad, Magnus
2012-10-05
Data analysis in mass spectrometry based proteomics struggles to keep pace with advances in instrumentation and the increasing rate of data acquisition. Analyzing this data involves multiple steps requiring diverse software, using different algorithms and data formats. The speed and performance of mass spectral search engines are continuously improving, although not necessarily fast enough to meet the challenges posed by the volume of acquired data. Improving and parallelizing the search algorithms is one possibility; data decomposition presents another, simpler strategy for introducing parallelism. We describe a general method for parallelizing identification of tandem mass spectra using data decomposition that keeps the search engine intact and wraps the parallelization around it. We introduce two algorithms for decomposing mzXML files and recomposing the resulting pepXML files. This makes the approach applicable to different search engines, including those relying on sequence databases and those searching spectral libraries. We use cloud computing to deliver the computational power, and scientific workflow engines to interface and automate the different processing steps. We show how to leverage these technologies to achieve faster data analysis in proteomics and present three scientific workflows for parallel database as well as spectral library search, using our data decomposition programs, X!Tandem and SpectraST.
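The decomposition idea itself is simple; a sketch of the chunking step is shown below. Parsing and recomposition of real mzXML/pepXML files is omitted, and the function name is hypothetical.

```python
from math import ceil

def decompose(spectra, n_chunks):
    """Split a list of spectra into near-equal chunks for independent searching."""
    size = ceil(len(spectra) / n_chunks)
    return [spectra[i:i + size] for i in range(0, len(spectra), size)]

# Each chunk would be written out as its own mzXML fragment, searched
# independently (e.g. with X!Tandem or SpectraST), and the per-chunk
# pepXML results recomposed into a single report afterwards.
```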
NASA Astrophysics Data System (ADS)
Zhang, Xing; Wen, Gongjian
2015-10-01
Anomaly detection (AD) is becoming increasingly important in hyperspectral imagery analysis, with many practical applications. The local orthogonal subspace projection (LOSP) detector is a popular anomaly detector which exploits local endmembers/eigenvectors around the pixel under test (PUT) to construct a background subspace. However, this subspace only takes advantage of the spectral information; the spatial correlation of the background clutter is neglected, which leaves the anomaly detection result sensitive to the accuracy of the estimated subspace. In this paper, a local three-dimensional orthogonal subspace projection (3D-LOSP) algorithm is proposed. First, jointly using both spectral and spatial information, three directional background subspaces are created along the image height direction, the image width direction and the spectral direction, respectively. Then, the three corresponding orthogonal subspaces are calculated. After that, each vector of the local cube along the three directions is projected onto the corresponding orthogonal subspace. Finally, a composite score is given through the three directional operators. In 3D-LOSP, anomalies are redefined as targets that are not only spectrally different from the background but also spatially distinct. Thanks to the addition of spatial information, the robustness of the anomaly detection result is greatly improved by the proposed 3D-LOSP algorithm. It is noteworthy that the proposed algorithm is an extension of LOSP, and this idea can inspire many other spectral-based anomaly detection methods. Experiments with real hyperspectral images have demonstrated the stability of the detection results.
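Each directional projection in 3D-LOSP reduces to a standard orthogonal subspace projection. A minimal sketch of that building block, with hypothetical names:

```python
import numpy as np

def osp_residual(pixel, background):
    """Residual energy after projecting out the local background subspace.

    pixel: (bands,) vector under test; background: (bands, k) matrix whose
    columns span the local background. A large residual suggests an anomaly.
    In 3D-LOSP this projection would be applied along the height, width and
    spectral directions of the local cube and the scores combined."""
    B = background
    P = np.eye(B.shape[0]) - B @ np.linalg.pinv(B)  # I - B (B^T B)^-1 B^T
    return np.linalg.norm(P @ pixel)
```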
Onboard spectral imager data processor
NASA Astrophysics Data System (ADS)
Otten, Leonard J.; Meigs, Andrew D.; Franklin, Abraham J.; Sears, Robert D.; Robison, Mark W.; Rafert, J. Bruce; Fronterhouse, Donald C.; Grotbeck, Ronald L.
1999-10-01
Previous papers have described the concept behind the MightySat II.1 program, the satellite's Fourier Transform imaging spectrometer's optical design, the design for the spectral imaging payload, and its initial qualification testing. This paper discusses the on board data processing designed to reduce the amount of downloaded data by an order of magnitude and provide a demonstration of a smart spaceborne spectral imaging sensor. Two custom components, a spectral imager interface 6U VME card that moves data at over 30 MByte/sec, and four TI C-40 processors mounted to a second 6U VME and daughter card, are used to adapt the sensor to the spacecraft and provide the necessary high speed processing. A system architecture that offers both on board real time image processing and high-speed post data collection analysis of the spectral data has been developed. In addition to the on board processing of the raw data into a usable spectral data volume, one feature extraction technique has been incorporated. This algorithm operates on the basic interferometric data. The algorithm is integrated within the data compression process to search for uploadable feature descriptions.
Spectral Analysis of Breast Cancer on Tissue Microarrays: Seeing Beyond Morphology
2005-04-01
Daytime sea fog retrieval based on GOCI data: a case study over the Yellow Sea.
Yuan, Yibo; Qiu, Zhongfeng; Sun, Deyong; Wang, Shengqiang; Yue, Xiaoyuan
2016-01-25
In this paper, a new daytime sea fog detection algorithm has been developed using Geostationary Ocean Color Imager (GOCI) data. Based on spectral analysis, differences in spectral characteristics were found over different underlying surfaces, including land, sea, middle/high-level clouds, stratus clouds and sea fog. Statistical analysis showed that the Rayleigh-corrected reflectance Rrc(412 nm) of sea fog pixels is approximately 0.1-0.6. Similarly, various band combinations could be used to separate different surfaces. Therefore, three indices (SLDI, MCDI and BSI) were defined to discern land/sea, middle/high-level clouds and fog/stratus clouds, respectively, from which it was generally easy to extract fog pixels. The remote sensing algorithm was verified using coastal sounding data, which demonstrated its ability to detect sea fog. The algorithm was then used to monitor an 8-hour sea fog event, and the results were consistent with observational data from buoys deployed near the Sheyang coast (121°E, 34°N). The goal of this study was to establish a daytime sea fog detection algorithm based on GOCI data, which shows promise for detecting fog separately from stratus.
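A sketch of how such a decision chain might look, assuming per-pixel index images as inputs; the threshold values and inequality directions are placeholders, since the abstract only specifies the Rrc(412 nm) range.

```python
import numpy as np

def fog_mask(rrc_412, sldi, mcdi, bsi,
             sldi_t=0.0, mcdi_t=0.0, bsi_t=0.0):
    """Hypothetical composite sea-fog test following the paper's decision order.

    rrc_412, sldi, mcdi, bsi: per-pixel arrays of the same shape.
    The three thresholds (and the inequality directions) are placeholders."""
    candidate = (rrc_412 > 0.1) & (rrc_412 < 0.6)  # Rrc(412 nm) range from the paper
    not_land = sldi > sldi_t                       # SLDI: land/sea separation
    not_high_cloud = mcdi < mcdi_t                 # MCDI: middle/high-cloud test
    not_stratus = bsi > bsi_t                      # BSI: fog vs. stratus test
    return candidate & not_land & not_high_cloud & not_stratus
```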
Hyperspectral Remote Sensing of Terrestrial Ecosystem Productivity from ISS
NASA Astrophysics Data System (ADS)
Huemmrich, K. F.; Campbell, P. K. E.; Gao, B. C.; Flanagan, L. B.; Goulden, M.
2017-12-01
Data from the Hyperspectral Imager for Coastal Ocean (HICO), mounted on the International Space Station (ISS), were used to develop and test algorithms for remotely retrieving ecosystem productivity. The ISS orbit introduces both limitations and opportunities for observing ecosystem dynamics. Twenty-six HICO images were used from four study sites representing different vegetation types: grasslands, shrubland, and forest. Gross ecosystem production (GEP) data from eddy covariance were matched with HICO-derived spectra. Multiple algorithms successfully related spectral reflectance to GEP, including: spectral vegetation indices (SVI), SVI in a light use efficiency model framework, spectral shape characteristics through spectral derivatives and absorption feature analysis, and statistical models leading to Multiband Hyperspectral Indices (MHI) from stepwise regressions and Partial Least Squares Regression (PLSR). The algorithms were able to achieve r2 better than 0.7 for both GEP at the overpass time and daily GEP. These algorithms were successful on a diverse set of observations combining data from multiple years, multiple times during the growing season, different times of day, different view angles, and different vegetation types. The demonstrated robustness of the algorithms over these conditions provides some confidence in mapping spatial patterns of GEP, describing variability within fields as well as regional patterns, based only on spectral reflectance information. The ISS orbit provides periods with multiple observations collected at different times of day within a few days. Diurnal GEP patterns were estimated by comparing half-hourly average GEP from the flux tower against HICO estimates of GEP (r2 = 0.87) when morning, midday, and afternoon observations were available to average fluxes over the time period.
Network Data: Statistical Theory and New Models
2016-02-17
During this period of review, Bin Yu worked on many thrusts of high-dimensional statistical theory and methodology. Her research covered a wide range of topics in statistics, including analysis and methods for spectral clustering for sparse and structured networks, sparse modeling (e.g., the Lasso), statistical guarantees for the EM algorithm, and statistical analysis of algorithm leveraging.
NASA Astrophysics Data System (ADS)
Zhang, Aizhu; Sun, Genyun; Wang, Zhenjie
2015-12-01
The serious information redundancy in hyperspectral images (HIs) does not contribute to data analysis accuracy; instead, it requires expensive computational resources. Consequently, to identify the most useful and valuable information in HIs and thereby improve the accuracy of data analysis, this paper proposes a novel hyperspectral band selection method using a hybrid genetic algorithm and gravitational search algorithm (GA-GSA). In the proposed method, the GA-GSA is first mapped to the binary space. Then, the accuracy of a support vector machine (SVM) classifier and the number of selected spectral bands are used to measure the discriminative capability of a band subset. Finally, the band subset with the smallest number of spectral bands that still covers the most useful and valuable information is obtained. To verify the effectiveness of the proposed method, studies conducted on an AVIRIS image against two recently proposed state-of-the-art GSA variants are presented. The experimental results revealed the superiority of the proposed method and indicated that it can considerably reduce data storage costs and efficiently identify band subsets with stable and high classification precision.
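The fitness evaluation described above can be sketched as follows, assuming a labeled training matrix X of shape (pixels, bands); the penalty weight alpha and the 3-fold cross-validation are illustrative choices.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def band_subset_fitness(mask, X, y, alpha=0.01):
    """Score one GA-GSA individual (a boolean mask over spectral bands).

    Rewards SVM cross-validation accuracy and penalizes the number of
    selected bands; alpha is a hypothetical trade-off weight."""
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():
        return 0.0  # an empty band subset is useless
    acc = cross_val_score(SVC(), X[:, mask], y, cv=3).mean()
    return acc - alpha * mask.sum() / mask.size
```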
Spectral analysis of major heart tones
NASA Astrophysics Data System (ADS)
Lejkowski, W.; Dobrowolski, A. P.; Majka, K.; Olszewski, R.
2018-04-01
The World Health Organization (WHO) figures clearly indicate that cardiovascular disease is the most common cause of death and disability in the world. Early detection of cardiovascular pathologies may contribute to reducing this high mortality rate. Auscultatory examination is one of the first and most important steps in cardiologic diagnostics. Unfortunately, proper diagnosis is closely tied to long-term practice and medical experience. The article presents the authors' system for recording phonocardiograms and saving the data, as well as an outline of the analysis algorithm, which allows a case to be assigned, with high probability, to either a patient with heart failure or a healthy volunteer. The results of a pilot study of phonocardiographic signals are also presented as an introduction to further research aimed at the development of an efficient diagnostic algorithm based on spectral analysis of heart tones.
NASA Astrophysics Data System (ADS)
Wang, Ke; Guo, Ping; Luo, A.-Li
2017-03-01
Spectral feature extraction is a crucial procedure in automated spectral analysis. This procedure starts from the spectral data and produces informative and non-redundant features, facilitating subsequent automated processing and analysis with machine-learning and data-mining techniques. In this paper, we present a new automated feature extraction method for astronomical spectra, with applications in spectral classification and defective spectra recovery. The basic idea of our approach is to train a deep neural network to extract features of spectra with different levels of abstraction in different layers. The deep neural network is trained with a fast layer-wise learning algorithm in an analytical way, without any iterative optimization procedure. We evaluate the performance of the proposed scheme on real-world spectral data. The results demonstrate that our method is superior in overall performance, and its computational cost is significantly lower than that of other methods. The proposed method can be regarded as a new, valid, general-purpose feature extraction alternative for various tasks in spectral data analysis.
Speed and accuracy improvements in FLAASH atmospheric correction of hyperspectral imagery
NASA Astrophysics Data System (ADS)
Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael W.; Berk, Alexander; Bernstein, Lawrence S.; Lee, Jamine; Fox, Marsha
2012-11-01
Remotely sensed spectral imagery of the earth's surface can be used to fullest advantage when the influence of the atmosphere has been removed and the measurements are reduced to units of reflectance. Here, we provide a comprehensive summary of the latest version of the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes atmospheric correction algorithm. We also report some new code improvements for speed and accuracy. These include the re-working of the original algorithm in C-language code parallelized with message passing interface and containing a new radiative transfer look-up table option, which replaces executions of the MODTRAN model. With computation times now as low as ~10 s per image per computer processor, automated, real-time, on-board atmospheric correction of hyper- and multi-spectral imagery is within reach.
NASA Technical Reports Server (NTRS)
Brumfield, J. O.; Bloemer, H. H. L.; Campbell, W. J.
1981-01-01
Two unsupervised classification procedures for analyzing Landsat data used to monitor land reclamation in a surface mining area in east central Ohio are compared for agreement with data collected from the corresponding locations on the ground. One procedure is based on a traditional unsupervised-clustering/maximum-likelihood algorithm sequence that assumes spectral groupings in the Landsat data in n-dimensional space; the other is based on a nontraditional unsupervised-clustering/canonical-transformation/clustering algorithm sequence that not only assumes spectral groupings in n-dimensional space but also includes an additional feature-extraction technique. It is found that the nontraditional procedure provides an appreciable improvement in spectral groupings and apparently increases the level of accuracy in the classification of land cover categories.
Detection of Lettuce Discoloration Using Hyperspectral Reflectance Imaging
Mo, Changyeun; Kim, Giyoung; Lim, Jongguk; Kim, Moon S.; Cho, Hyunjeong; Cho, Byoung-Kwan
2015-01-01
Rapid visible/near-infrared (VNIR) hyperspectral imaging methods, employing both a single-waveband algorithm and multi-spectral algorithms, were developed to discriminate between sound and discolored lettuce. Reflectance spectra for sound and discolored lettuce surfaces were extracted from hyperspectral reflectance images obtained in the 400-1000 nm wavelength range. The optimal wavebands for discriminating between discolored and sound lettuce surfaces were determined using one-way analysis of variance. Multi-spectral imaging algorithms developed using ratio and subtraction functions resulted in enhanced classification accuracy of above 99.9% for discolored and sound areas on both adaxial and abaxial lettuce surfaces. Ratio imaging (RI) and subtraction imaging (SI) algorithms at wavelengths of 552/701 nm and 557-701 nm, respectively, exhibited better classification performance compared to results obtained for all possible two-waveband combinations. These results suggest that hyperspectral reflectance imaging techniques can potentially be used to discriminate between discolored and sound fresh-cut lettuce. PMID:26610510
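The selected-band ratio test is easy to reproduce. A minimal sketch, assuming a reflectance cube and its waveband centers; the threshold is a placeholder, since the paper derives its cutoffs from ANOVA on training spectra.

```python
import numpy as np

def discolored_mask(cube, wavelengths, t_ratio=1.0):
    """Ratio image R552/R701 from the paper's selected wavebands.

    cube: (height, width, bands) reflectance; wavelengths: (bands,) in nm.
    t_ratio is a hypothetical threshold."""
    b552 = cube[:, :, np.argmin(np.abs(wavelengths - 552))]
    b701 = cube[:, :, np.argmin(np.abs(wavelengths - 701))]
    ratio = b552 / (b701 + 1e-9)  # avoid division by zero
    return ratio > t_ratio
```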
NASA Technical Reports Server (NTRS)
Liu, Kuojuey Ray
1990-01-01
Least-squares (LS) estimation and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms on parallel processing architectures, such as systolic arrays, with efficient fault-tolerant schemes are the major concerns of this dissertation. There are four major results in this dissertation. First, we propose the systolic block Householder transformation with application to recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. The fault diagnosis, order-degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order-degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and another based on a rectangular array, are presented for the multi-phase operations with fault-tolerant considerations. Eigenvectors and singular vectors can be easily obtained by using the multi-phase operations. Performance issues are also considered.
Hyperspectral and Hypertemporal Longwave Infrared Data Characterization
NASA Astrophysics Data System (ADS)
Jeganathan, Nirmalan
The Army Research Lab conducted a persistent imaging experiment called the Spectral and Polarimetric Imagery Collection Experiment (SPICE) in 2012 and 2013, which focused on collecting and exploiting long-wave infrared hyperspectral and polarimetric imagery. A part of this dataset was publicly released for research and development purposes. This thesis investigated the hyperspectral portion of the released dataset through data characterization and scene characterization of man-made and natural objects. First, the data were compared with MODerate resolution atmospheric TRANsmission (MODTRAN) results and found to be comparable. Instrument noise was characterized using an in-scene black panel and was found to be consistent with the sensor manufacturer's specification. The temporal and spatial variations of certain objects in the scene were characterized. Temporal target detection was conducted on man-made objects in the scene using three target detection algorithms: spectral angle mapper (SAM), spectral matched filter (SMF) and adaptive coherence/cosine estimator (ACE). SMF produced the best results for detecting the targets when the training and testing data originated from different time periods, with a time index percentage result of 52.9%. Unsupervised and supervised classification were conducted using spectral and temporal target signatures. Temporal target signatures produced better visual classification than spectral target signatures for unsupervised classification. Supervised classification yielded better results using the spectral target signatures, with a highest weighted accuracy of 99% for a 7-class reference image. Four emissivity retrieval algorithms were applied to this dataset. However, the retrieved emissivities from all four methods did not represent true material emissivity and could not be used for analysis. This spectrally and temporally rich dataset enabled analyses that were not possible with other data collections. Regarding future work, applying noise-reduction techniques before applying temperature-emissivity retrieval algorithms may produce more realistic emissivity values, which could be used for target detection and material identification.
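Of the three detectors compared, the spectral matched filter is representative and compact. A minimal sketch under the usual Gaussian-background formulation; variable names are illustrative.

```python
import numpy as np

def smf_scores(pixels, target, background):
    """Spectral matched filter detection scores.

    pixels: (n, bands) spectra to score; target: (bands,) reference signature;
    background: (m, bands) samples used to estimate the mean and covariance."""
    mu = background.mean(axis=0)
    cov = np.cov(background, rowvar=False)
    cinv = np.linalg.pinv(cov)           # pseudo-inverse for numerical safety
    d = target - mu
    w = cinv @ d / (d @ cinv @ d)        # matched-filter weight vector
    return (pixels - mu) @ w             # high score = target-like pixel
```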
Models of formation and some algorithms of hyperspectral image processing
NASA Astrophysics Data System (ADS)
Achmetov, R. N.; Stratilatov, N. R.; Yudakov, A. A.; Vezenov, V. I.; Eremeev, V. V.
2014-12-01
Algorithms and information technologies for processing Earth hyperspectral imagery are presented. Several new approaches are discussed. Peculiar properties of processing the hyperspectral imagery, such as multifold signal-to-noise reduction, atmospheric distortions, access to spectral characteristics of every image point, and high dimensionality of data, were studied. Different measures of similarity between individual hyperspectral image points and the effect of additive uncorrelated noise on these measures were analyzed. It was shown that these measures are substantially affected by noise, and a new measure free of this disadvantage was proposed. The problem of detecting the observed scene object boundaries, based on comparing the spectral characteristics of image points, is considered. It was shown that contours are processed much better when spectral characteristics are used instead of energy brightness. A statistical approach to the correction of atmospheric distortions, which makes it possible to solve the stated problem based on analysis of a distorted image in contrast to analytical multiparametric models, was proposed. Several algorithms used to integrate spectral zonal images with data from other survey systems, which make it possible to image observed scene objects with a higher quality, are considered. Quality characteristics of hyperspectral data processing were proposed and studied.
Spectral Learning for Supervised Topic Models.
Ren, Yong; Wang, Yining; Zhu, Jun
2018-03-01
Supervised topic models simultaneously model the latent topic structure of large collections of documents and a response variable associated with each document. Existing inference methods are based on variational approximation or Monte Carlo sampling, which often suffers from the local minimum defect. Spectral methods have been applied to learn unsupervised topic models, such as latent Dirichlet allocation (LDA), with provable guarantees. This paper investigates the possibility of applying spectral methods to recover the parameters of supervised LDA (sLDA). We first present a two-stage spectral method, which recovers the parameters of LDA followed by a power update method to recover the regression model parameters. Then, we further present a single-phase spectral algorithm to jointly recover the topic distribution matrix as well as the regression weights. Our spectral algorithms are provably correct and computationally efficient. We prove a sample complexity bound for each algorithm and subsequently derive a sufficient condition for the identifiability of sLDA. Thorough experiments on synthetic and real-world datasets verify the theory and demonstrate the practical effectiveness of the spectral algorithms. In fact, our results on a large-scale review rating dataset demonstrate that our single-phase spectral algorithm alone gets comparable or even better performance than state-of-the-art methods, while previous work on spectral methods has rarely reported such promising performance.
NASA Technical Reports Server (NTRS)
Tomei, B. A.; Smith, L. G.
1986-01-01
Sounding rockets equipped to monitor electron density and its fine structure were launched into the auroral and equatorial ionosphere in 1980 and 1983, respectively. The measurement electronics are based on the Langmuir probe and are described in detail. An approach to the spectral analysis of the density irregularities is addressed and a software algorithm implementing the approach is given. Preliminary results of the analysis are presented.
A comparison of autonomous techniques for multispectral image analysis and classification
NASA Astrophysics Data System (ADS)
Valdiviezo-N., Juan C.; Urcid, Gonzalo; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso
2012-10-01
Multispectral imaging has given rise to important applications related to the classification and identification of objects in a scene. Because multispectral instruments can be used to estimate the reflectance of materials in the scene, these techniques constitute fundamental tools for materials analysis and quality control. During the last few years, a variety of algorithms have been developed to work with multispectral data, whose main purpose has been to perform the correct classification of the objects in the scene. The present study offers a brief review of some classical techniques, as well as a novel one, that have been used for such purposes. The use of principal component analysis and K-means clustering as important classification algorithms is discussed here. Moreover, a recent method based on the min-W and max-M lattice auto-associative memories, which was proposed for endmember determination in hyperspectral imagery, is introduced as a classification method. Besides discussing their mathematical foundations, we emphasize their main characteristics and the results achieved for two exemplar images composed of objects similar in appearance but spectrally different. The classification results show that the first components computed from principal component analysis can be used to highlight areas with different spectral characteristics. In addition, the use of lattice auto-associative memories provides good results for materials classification even in cases where some similarities appear in the spectral responses.
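The PCA-plus-K-means pipeline discussed here can be sketched in a few lines with scikit-learn; the number of retained components is an illustrative choice.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def classify_pixels(cube, n_classes, n_pcs=3):
    """cube: (height, width, bands). Cluster pixels on their leading PCs."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b)                             # one row per pixel
    scores = PCA(n_components=n_pcs).fit_transform(X)   # leading components
    labels = KMeans(n_clusters=n_classes, n_init=10,
                    random_state=0).fit_predict(scores)
    return labels.reshape(h, w)                         # class map
```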
Hyperspectral feature mapping classification based on mathematical morphology
NASA Astrophysics Data System (ADS)
Liu, Chang; Li, Junwei; Wang, Guangping; Wu, Jingli
2016-03-01
This paper proposes a hyperspectral feature mapping classification algorithm based on mathematical morphology. Without a priori information such as a spectral library, the spectral and spatial information can be used to realize hyperspectral feature mapping classification. Mathematical morphological erosion and dilation operations are performed respectively to extract endmembers. The spectral feature mapping algorithm is then used to carry out hyperspectral image classification. A hyperspectral image collected by AVIRIS is used to evaluate the proposed algorithm, which is compared with the minimum Euclidean distance mapping algorithm, the minimum Mahalanobis distance mapping algorithm, the SAM algorithm and the binary encoding mapping algorithm. The experiments show that the proposed algorithm performs better than the other algorithms under the same conditions and achieves higher classification accuracy.
A semi-supervised classification algorithm using the TAD-derived background as training data
NASA Astrophysics Data System (ADS)
Fan, Lei; Ambeau, Brittany; Messinger, David W.
2013-05-01
In general, spectral image classification algorithms fall into one of two categories: supervised and unsupervised. In unsupervised approaches, the algorithm automatically identifies clusters in the data without a priori information about those clusters (except perhaps the expected number of them). Supervised approaches require an analyst to identify training data to learn the characteristics of the clusters, such that all other pixels can then be classified into one of the pre-defined groups. The classification algorithm presented here is a semi-supervised approach based on the Topological Anomaly Detection (TAD) algorithm. The TAD algorithm defines background components based on a mutual k-nearest neighbor graph model of the data, along with a spectral connected-components analysis. Here, the largest components produced by TAD are used as regions of interest (ROIs), or training data, for a supervised classification scheme. By combining those ROIs with a Gaussian Maximum Likelihood (GML) or a Minimum Distance to the Mean (MDM) algorithm, we achieve a semi-supervised classification method. We test this classification algorithm against data collected by the HyMAP sensor over the Cooke City, MT area and the University of Pavia scene.
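Given the TAD-derived ROIs, the MDM stage is simple to write down. A minimal sketch, assuming each ROI is a matrix of training spectra; names are illustrative.

```python
import numpy as np

def mdm_classify(pixels, rois):
    """Minimum Distance to the Mean over TAD-derived ROIs.

    pixels: (n, bands) spectra to classify;
    rois: dict mapping class name -> (m_i, bands) training spectra."""
    names = list(rois)
    means = np.stack([rois[c].mean(axis=0) for c in names])        # class means
    d = np.linalg.norm(pixels[:, None, :] - means[None], axis=2)   # n x classes
    return [names[i] for i in d.argmin(axis=1)]                    # nearest mean
```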
Detection of cardiac activity using a 5.8 GHz radio frequency sensor.
Vasu, V; Fox, N; Brabetz, T; Wren, M; Heneghan, C; Sezer, S
2009-01-01
A 5.8-GHz ISM-band radio-frequency sensor has been developed for non-contact measurement of respiration and heart rate from stationary and semi-stationary subjects at a distance of 0.5 to 1.5 meters. We report on the accuracy of the heart rate measurements obtained using two algorithmic approaches, as compared to a reference heart rate obtained using a pulse oximeter. Simultaneous photoplethysmograph (PPG) and non-contact sensor recordings were made over fifteen-minute periods for ten healthy subjects (8M/2F, ages 29.6 ± 5.6 yrs). One algorithm is based on automated detection of individual peaks associated with each cardiac cycle; a second algorithm extracts a heart rate over a 60-second period using spectral analysis. Peaks were also extracted manually for comparison with the automated method. The peak-detection methods were less accurate than the spectral methods, but suggest the possibility of acquiring beat-by-beat data; the spectral algorithms measured heart rate to within ±10% for the ten subjects chosen. Non-contact measurement of heart rate will be useful in chronic disease monitoring for conditions such as heart failure and cardiovascular disease.
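The spectral approach amounts to locating the cardiac peak in a PSD estimate. A minimal sketch using Welch's method; the 0.7-3 Hz search band (about 42-180 bpm) is an assumption, not a parameter from the paper.

```python
import numpy as np
from scipy.signal import welch

def heart_rate_bpm(signal, fs):
    """Spectral heart-rate estimate from a 60 s sensor window.

    signal: 1-D radar/sensor samples; fs: sampling rate in Hz.
    Returns the strongest peak in an assumed cardiac band, in bpm."""
    f, pxx = welch(signal, fs=fs, nperseg=min(len(signal), int(4 * fs)))
    band = (f >= 0.7) & (f <= 3.0)                # assumed cardiac band
    return 60.0 * f[band][np.argmax(pxx[band])]   # Hz -> beats per minute
```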
A spectrum fractal feature classification algorithm for agriculture crops with hyper spectrum image
NASA Astrophysics Data System (ADS)
Su, Junying
2011-11-01
A fractal dimension feature analysis method in the spectral domain is proposed for agricultural crop classification with hyperspectral images. First, a fractal dimension calculation algorithm in the spectral domain is presented, together with a fast fractal dimension calculation algorithm using the step measurement method. Second, the hyperspectral image classification algorithm and flowchart are presented, based on fractal dimension feature analysis in the spectral domain. Finally, experimental results are reported for agricultural crop classification of the FCL1 hyperspectral image set using the proposed method and SAM (spectral angle mapper). The results show that the proposed method obtains better classification results than traditional SAM feature analysis, because it makes fuller use of the spectral information in the hyperspectral image for precise agricultural crop classification.
Spectral unmixing of multi-color tissue specific in vivo fluorescence in mice
NASA Astrophysics Data System (ADS)
Zacharakis, Giannis; Favicchio, Rosy; Garofalakis, Anikitos; Psycharakis, Stylianos; Mamalaki, Clio; Ripoll, Jorge
2007-07-01
Fluorescence Molecular Tomography (FMT) has emerged as a powerful tool for monitoring biological functions in vivo in small animals. It provides the means to determine volumetric images of fluorescent protein concentration by applying the principles of diffuse optical tomography. Using different probes tagged to different proteins or cells, different biological functions and pathways can be simultaneously imaged in the same subject. In this work we present a spectral unmixing algorithm capable of separating signals from different probes when combined with the tomographic imaging modality. We show results of two-color imaging when the algorithm is applied to separate fluorescence activity originating from phantoms containing two different fluorophores, namely CFSE and SNARF, with well-separated emission spectra, as well as DsRed- and GFP-fused cells in F5-b10 transgenic mice in vivo. The same algorithm can furthermore be applied to tissue-specific spectroscopy data. Spectral analysis of a variety of organs from control, DsRed and GFP F5/B10 transgenic mice showed that fluorophore detection by optical systems is highly tissue-dependent. Spectral data collected from different organs can provide useful insight into experimental parameter optimisation (choice of filters, fluorophores, excitation wavelengths), and spectral unmixing can be applied to measure the tissue dependency, thereby taking into account localized fluorophore efficiency. In summary, tissue spectral unmixing can be used as a criterion in choosing the most appropriate tissue targets as well as fluorescent markers for specific applications.
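With well-separated emission spectra, per-pixel unmixing reduces to a small non-negative least-squares problem. A minimal sketch, with illustrative names:

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(measured, endmembers):
    """Non-negative unmixing of one multi-channel fluorescence measurement.

    measured: (channels,) detected signal; endmembers: (channels, n_fluors)
    reference emission spectra (e.g. CFSE and SNARF columns).
    Returns non-negative abundances of each fluorophore."""
    weights, _ = nnls(endmembers, measured)
    return weights
```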
Asian dust aerosol: Optical effect on satellite ocean color signal and a scheme of its correction
NASA Astrophysics Data System (ADS)
Fukushima, H.; Toratani, M.
1997-07-01
The paper first exhibits the influence of the Asian dust aerosol (KOSA) on a coastal zone color scanner (CZCS) image which records erroneously low or negative satellite-derived water-leaving radiance especially in a shorter wavelength region. This suggests the presence of spectrally dependent absorption which was disregarded in the past atmospheric correction algorithms. On the basis of the analysis of the scene, a semiempirical optical model of the Asian dust aerosol that relates aerosol single scattering albedo (ωA) to the spectral ratio of aerosol optical thickness between 550 nm and 670 nm is developed. Then, as a modification to a standard CZCS atmospheric correction algorithm (NASA standard algorithm), a scheme which estimates pixel-wise aerosol optical thickness, and in turn ωA, is proposed. The assumption of constant normalized water-leaving radiance at 550 nm is adopted together with a model of aerosol scattering phase function. The scheme is combined to the standard algorithm, performing atmospheric correction just the same as the standard version with a fixed Angstrom coefficient except in the case where the presence of Asian dust aerosol is detected by the lowered satellite-derived Angstrom exponent. Some of the model parameter values are determined so that the scheme does not produce any spatial discontinuity with the standard scheme. The algorithm was tested against the Japanese Asian dust CZCS scene with parameter values of the spectral dependency of ωA, first statistically determined and second optimized for selected pixels. Analysis suggests that the parameter values depend on the assumed Angstrom coefficient for standard algorithm, at the same time defining the spatial extent of the area to apply the Asian dust scheme. The algorithm was also tested for a Saharan dust scene, showing the relevance of the scheme but with different parameter setting. Finally, the algorithm was applied to a data set of 25 CZCS scenes to produce a monthly composite of pigment concentration for April 1981. Through these analyses, the modified algorithm is considered robust in the sense that it operates most compatibly with the standard algorithm yet performs adaptively in response to the magnitude of the dust effect.
NASA Astrophysics Data System (ADS)
Yarmohammadi, M.; Javadi, S.; Babolian, E.
2018-04-01
In this study a new spectral iterative method (SIM) based on fractional interpolation is presented for solving nonlinear fractional differential equations (FDEs) involving Caputo derivative. This method is equipped with a pre-algorithm to find the singularity index of solution of the problem. This pre-algorithm gives us a real parameter as the index of the fractional interpolation basis, for which the SIM achieves the highest order of convergence. In comparison with some recent results about the error estimates for fractional approximations, a more accurate convergence rate has been attained. We have also proposed the order of convergence for fractional interpolation error under the L2-norm. Finally, general error analysis of SIM has been considered. The numerical results clearly demonstrate the capability of the proposed method.
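For reference, the Caputo derivative underlying such FDEs is standard; a minimal statement:

```latex
% Caputo fractional derivative of order \alpha, with n-1 < \alpha < n:
D^{\alpha} y(t) = \frac{1}{\Gamma(n-\alpha)}
  \int_{0}^{t} \frac{y^{(n)}(s)}{(t-s)^{\alpha-n+1}} \, ds
```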
Real-time spectral analysis of HRV signals: an interactive and user-friendly PC system.
Basano, L; Canepa, F; Ottonello, P
1998-01-01
We present a real-time system, built around a PC and a low-cost data acquisition board, for the spectral analysis of the heart rate variability signal. The Windows-like operating environment on which it is based makes the computer program very user-friendly even for non-specialized personnel. The Power Spectral Density is computed through the use of a hybrid method, in which a classical FFT analysis follows an autoregressive finite-extension of data; the stationarity of the sequence is continuously checked. The use of this algorithm gives a high degree of robustness of the spectral estimation. Moreover, always in real time, the FFT of every data block is computed and displayed in order to corroborate the results as well as to allow the user to interactively choose a proper AR model order.
Spectral analysis of stellar light curves by means of neural networks
NASA Astrophysics Data System (ADS)
Tagliaferri, R.; Ciaramella, A.; Milano, L.; Barone, F.; Longo, G.
1999-06-01
Periodicity analysis of unevenly collected data is a relevant issue in several scientific fields. In astrophysics, for example, we have to find the fundamental period of light or radial velocity curves, which are unevenly sampled observations of stars. Classical spectral analysis methods are unsatisfactory for this problem. In this paper we present a neural-network-based estimator system which performs frequency extraction well in unevenly sampled signals. It uses an unsupervised Hebbian nonlinear neural algorithm to extract, from the interpolated signal, the principal components which, in turn, are used by the MUSIC frequency estimator algorithm to extract the frequencies. The neural network is tolerant to noise and works well even with few points in the sequence. We benchmark the system on synthetic and real signals against the periodogram and the Cramer-Rao lower bound. This work was partially supported by IIASS, by MURST 40% funds and by the Italian Space Agency.
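Once the signal has been interpolated onto a uniform grid, the MUSIC step itself is compact. A minimal sketch using a plain eigendecomposition in place of the paper's Hebbian network; the embedding order m is an assumption, and each real sinusoid contributes two dimensions to the signal subspace.

```python
import numpy as np

def music_spectrum(x, n_signal, freqs, m=32):
    """MUSIC pseudospectrum of a uniformly (re)sampled signal x.

    n_signal: signal-subspace dimension (use 2 per real sinusoid);
    freqs: trial frequencies in cycles/sample; m: embedding order (assumed).
    Peaks of the returned pseudospectrum estimate the signal frequencies."""
    N = len(x)
    X = np.array([x[i:i + m] for i in range(N - m)])  # delay embedding
    R = X.T @ X / (N - m)                             # correlation matrix
    _, v = np.linalg.eigh(R)                          # eigenvalues ascending
    noise = v[:, : m - n_signal]                      # noise subspace
    p = []
    for f in freqs:
        a = np.exp(-2j * np.pi * f * np.arange(m))    # steering vector
        p.append(1.0 / np.linalg.norm(noise.conj().T @ a) ** 2)
    return np.array(p)
```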
Extension of least squares spectral resolution algorithm to high-resolution lipidomics data.
Zeng, Ying-Xu; Mjøs, Svein Are; David, Fabrice P A; Schmid, Adrien W
2016-03-31
Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This requires powerful computational tools that handle the high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including the data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which covers the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports analysis of both high and low resolution MS as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomic data analysis. Copyright © 2016 Elsevier B.V. All rights reserved.
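The deconvolution core is ordinary least squares against theoretical isotope patterns. A minimal sketch, assuming the observed spectrum and the patterns share a common m/z grid; names are illustrative.

```python
import numpy as np

def resolve_overlap(spectrum, patterns):
    """Least-squares resolution of overlapping lipid species.

    spectrum: (n_mz,) observed intensities on a common m/z grid;
    patterns: (n_species, n_mz) theoretical isotope distributions.
    Returns the fitted abundance of each species."""
    coeffs, *_ = np.linalg.lstsq(patterns.T, spectrum, rcond=None)
    return coeffs
```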
NASA Astrophysics Data System (ADS)
Joseph, R.; Courbin, F.; Starck, J.-L.
2016-05-01
We introduce a new algorithm for colour separation and deblending of multi-band astronomical images called MuSCADeT which is based on Morpho-spectral Component Analysis of multi-band images. The MuSCADeT algorithm takes advantage of the sparsity of astronomical objects in morphological dictionaries such as wavelets and their differences in spectral energy distribution (SED) across multi-band observations. This allows us to devise a model independent and automated approach to separate objects with different colours. We show with simulations that we are able to separate highly blended objects and that our algorithm is robust against SED variations of objects across the field of view. To confront our algorithm with real data, we use HST images of the strong lensing galaxy cluster MACS J1149+2223 and we show that MuSCADeT performs better than traditional profile-fitting techniques in deblending the foreground lensing galaxies from background lensed galaxies. Although the main driver for our work is the deblending of strong gravitational lenses, our method is fit to be used for any purpose related to deblending of objects in astronomical images. An example of such an application is the separation of the red and blue stellar populations of a spiral galaxy in the galaxy cluster Abell 2744. We provide a python package along with all simulations and routines used in this paper to contribute to reproducible research efforts. Codes can be found at http://lastro.epfl.ch/page-126973.html
Spectral Diffusion: An Algorithm for Robust Material Decomposition of Spectral CT Data
Clark, Darin P.; Badea, Cristian T.
2014-01-01
Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piece-wise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg/mL), gold (0.9 mg/mL), and gadolinium (2.9 mg/mL) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen. PMID:25296173
NASA Technical Reports Server (NTRS)
An, S. H.; Yao, K.
1986-01-01
The lattice algorithm has been employed in numerous adaptive filtering applications such as speech analysis/synthesis, noise canceling, spectral analysis, and channel equalization. In this paper, its application to adaptive-array processing is discussed. The advantages are a fast convergence rate as well as computational accuracy independent of the noise and interference conditions. The results produced by this technique are compared to those obtained by the direct matrix inverse method.
Dharmaprani, Dhani; Nguyen, Hoang K; Lewis, Trent W; DeLosAngeles, Dylan; Willoughby, John O; Pope, Kenneth J
2016-08-01
Independent Component Analysis (ICA) is a powerful statistical tool capable of separating multivariate scalp electrical signals into their additive independent or source components, specifically electroencephalogram (EEG) signals and artifacts. Although ICA is a widely accepted EEG signal processing technique, classification of the recovered independent components (ICs) is still flawed, as current practice still requires subjective human decisions. Here we build on the results from Fitzgibbon et al. [1] to compare three measures and three ICA algorithms. Using EEG data acquired during neuromuscular paralysis, we tested the ability of the measures (spectral slope, peripherality and spatial smoothness) and algorithms (FastICA, Infomax and JADE) to identify components containing EMG. Spatial smoothness showed differentiation between paralysis and pre-paralysis ICs comparable to spectral slope, whereas peripherality showed less differentiation. A combination of the measures showed better differentiation than any measure alone. Furthermore, FastICA provided the best discrimination between muscle-free and muscle-contaminated recordings in the shortest time, suggesting it may be the best suited of the considered algorithms for EEG applications. The spatial smoothness results suggest that a significant number of ICs are mixed, i.e. contain signals from more than one biological source, so the development of an ICA algorithm optimised to produce easily classifiable ICs is warranted.
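The spectral slope measure, for instance, is a linear fit to the log-log PSD of a component. A minimal sketch; the 7-75 Hz fitting band is an assumption rather than a parameter from the paper.

```python
import numpy as np
from scipy.signal import welch

def spectral_slope(component, fs, f_lo=7.0, f_hi=75.0):
    """Slope of log-power vs. log-frequency for one independent component.

    EMG-contaminated components tend toward flatter (less negative) slopes
    than cortical EEG; band edges here are assumptions."""
    f, pxx = welch(component, fs=fs, nperseg=int(2 * fs))
    band = (f >= f_lo) & (f <= f_hi)
    slope, _ = np.polyfit(np.log(f[band]), np.log(pxx[band]), 1)
    return slope
```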
A method of minimum volume simplex analysis constrained unmixing for hyperspectral image
NASA Astrophysics Data System (ADS)
Zou, Jinlin; Lan, Jinhui; Zeng, Yiliang; Wu, Hongtao
2017-07-01
The signal recorded at a given pixel by a low-resolution hyperspectral remote sensor, even leaving aside the effects of complex terrain, is a mixture of substances. To improve the accuracy of classification and sub-pixel object detection, hyperspectral unmixing (HU) is a frontier topic in remote sensing. Unmixing algorithms based on geometry have become popular, since hyperspectral images possess abundant spectral information and the mixing model is easy to understand. However, most such algorithms rely on the pure-pixel assumption, and since the non-linear mixing model is complex, it is hard to obtain optimal endmembers, especially from highly mixed spectral data. To provide a simple but accurate method, we propose a minimum volume simplex analysis constrained (MVSAC) unmixing algorithm. The proposed approach combines the algebraic constraints inherent to the convex minimum-volume formulation with a soft abundance constraint. By considering the abundance fractions, we can obtain the pure endmember set and the corresponding abundance fractions together, and the final unmixing result is closer to reality and more accurate. We illustrate the performance of the proposed algorithm in unmixing simulated data and real hyperspectral data, and the results indicate that the proposed method can correctly recover the distinct signatures without redundant endmembers and yields much better performance than pure-pixel-based algorithms.
Multi scales based sparse matrix spectral clustering image segmentation
NASA Astrophysics Data System (ADS)
Liu, Zhongmin; Chen, Zhicai; Li, Zhanming; Hu, Wenjin
2018-04-01
In image segmentation, spectral clustering algorithms must adopt an appropriate scaling parameter to calculate the similarity matrix between pixels, which can have a great impact on the clustering result. Moreover, when the number of data instances is large, the computational complexity and memory use of the algorithm increase greatly. To solve these two problems, we propose a new spectral clustering image segmentation algorithm based on multiple scales and sparse matrices. We first devise a new feature extraction method, then extract image features at different scales, and finally use the feature information to construct a sparse similarity matrix, which improves operating efficiency. Compared with the traditional spectral clustering algorithm, image segmentation experiments show that our algorithm achieves better accuracy and robustness.
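A hedged sketch of the sparse-similarity idea, with scikit-learn building the k-nearest-neighbour graph internally in place of the paper's multi-scale feature construction (data and parameters are toy choices):

```python
# Spectral clustering over a sparse k-NN similarity graph instead of a
# dense all-pairs matrix, avoiding the n x n memory cost.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(2)
# Toy "pixel features": two blobs standing in for two image regions
feats = np.vstack([rng.normal(0, 0.3, (200, 5)),
                   rng.normal(2, 0.3, (200, 5))])

# affinity='nearest_neighbors' makes scikit-learn build a sparse k-NN
# graph internally rather than a dense similarity matrix.
model = SpectralClustering(n_clusters=2, affinity='nearest_neighbors',
                           n_neighbors=10, assign_labels='kmeans',
                           random_state=0)
labels = model.fit_predict(feats)
print(np.bincount(labels))                 # roughly [200, 200]
```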
The application of hyperspectral imaging analysis to fresh food safety inspection
USDA-ARS?s Scientific Manuscript database
Line-scan hyperspectral images of fresh matured tomatoes were collected for image analysis. Algorithms were developed, based on spectral analysis, to detect crack defects on fresh produce. Four wavebands of 569 nm, 645 nm, 702 nm and 887 nm were selected from spectral analysis to use the relative...
Liu, Xiaona; Zhang, Qiao; Wu, Zhisheng; Shi, Xinyuan; Zhao, Na; Qiao, Yanjiang
2015-01-01
Laser-induced breakdown spectroscopy (LIBS) was applied to perform a rapid elemental analysis and provenance study of Blumea balsamifera DC. Principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA) were implemented to exploit the multivariate nature of the LIBS data. Scores and loadings of the computed principal components visually illustrated the differences in the spectral data. The PLS-DA algorithm showed good classification performance. The PLS-DA model using complete spectra as input variables had discrimination performance similar to that using selected spectral lines as input variables. The down-selection of spectral lines focused specifically on the major elements of B. balsamifera samples. Results indicated that LIBS could be used to rapidly analyze elements and to perform provenance studies of B. balsamifera. PMID:25558999
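The chemometric workflow above can be sketched with scikit-learn on stand-in data; PLS-DA is commonly realized as PLS regression on class indicators with a 0.5 threshold, and the synthetic "spectra" below are not LIBS data:

```python
# PCA score extraction plus a two-class PLS-DA on synthetic spectra.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
n, bands = 60, 300
X = rng.standard_normal((n, bands))
X[:30] += np.linspace(0, 1, bands)         # provenance class 0: spectral tilt
y = np.repeat([0, 1], 30)

scores = PCA(n_components=2).fit_transform(X)   # input for a PCA score plot

# PLS-DA: PLS regression on the class indicator, thresholded at 0.5.
pls = PLSRegression(n_components=2).fit(X, y.astype(float))
pred = (pls.predict(X).ravel() > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
```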
Clustering algorithm for determining community structure in large networks
NASA Astrophysics Data System (ADS)
Pujol, Josep M.; Béjar, Javier; Delgado, Jordi
2006-07-01
We propose an algorithm to find the community structure in complex networks based on the combination of spectral analysis and modularity optimization. The clustering produced by our algorithm is as accurate as the best algorithms in the modularity-optimization literature; however, the main asset of the algorithm is its efficiency. The best match for our algorithm is Newman's fast algorithm, which is the reference algorithm for clustering in large networks due to its efficiency. When both algorithms are compared, ours outperforms the fast algorithm in both efficiency and clustering accuracy, in terms of modularity. Thus, the results suggest that the proposed algorithm is a good choice for analyzing the community structure of medium and large networks in the range of tens to hundreds of thousands of vertices.
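The authors' spectral algorithm is not packaged in a standard library, but the greedy modularity baseline they benchmark against, of the Clauset-Newman-Moore (fast algorithm) family, is a single call in networkx:

```python
# Greedy modularity optimization on a small benchmark network.
import networkx as nx
from networkx.algorithms.community import (greedy_modularity_communities,
                                           modularity)

G = nx.karate_club_graph()                 # classic small test network
communities = greedy_modularity_communities(G)
Q = modularity(G, communities)
print(f"{len(communities)} communities, modularity Q = {Q:.3f}")
```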
NASA Astrophysics Data System (ADS)
Yang, Pao-Keng
2012-05-01
We present a noniterative algorithm to reliably reconstruct the spectral reflectance from discrete reflectance values measured by using multicolor light-emitting diodes (LEDs) as probing light sources. The proposed algorithm estimates the spectral reflectance by a linear combination of product functions of the detector's responsivity function and the LEDs' line-shape functions. After introducing a suitable correction, the resulting spectral reflectance was found to be free from the spectral-broadening effect due to the finite bandwidth of the LEDs. We analyzed the data for a real sample and found that the spectral reflectance with enhanced resolution gives a more accurate prediction in color measurement.
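A hedged sketch of the expansion idea follows; the Gaussian LED line shapes, flat detector responsivity, and wavelength grid are assumptions, and the paper's bandwidth correction is omitted:

```python
# Expand the unknown reflectance in the product functions s(l) * e_i(l)
# (responsivity times LED line shape) and fit the coefficients to the
# per-LED readings via the Gram system.
import numpy as np

wl = np.linspace(400, 700, 301)            # wavelength grid in nm (assumed)
def led(center, fwhm=30):                  # Gaussian LED line shape (assumed)
    return np.exp(-4 * np.log(2) * ((wl - center) / fwhm) ** 2)

centers = np.arange(420, 700, 25)          # 12 LED center wavelengths
s = np.ones_like(wl)                       # flat responsivity (assumed)
basis = np.array([s * led(c) for c in centers])   # product functions

r_true = 0.5 + 0.3 * np.sin(wl / 40)       # toy reflectance
readings = basis @ r_true                  # one reading per LED

# Model: reading_i = integral of basis_i * r  =>  G c = readings
G = basis @ basis.T                        # Gram matrix of the basis
coeffs = np.linalg.solve(G, readings)
r_est = basis.T @ coeffs                   # reconstructed reflectance
print("max abs error:", np.abs(r_est - r_true).max())  # limited-basis residual
```

Solving the Gram system projects the unknown reflectance onto the span of the product functions, which is exactly what a reading-consistent linear-combination estimate amounts to.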
A seismic coherency method using spectral amplitudes
NASA Astrophysics Data System (ADS)
Sui, Jing-Kun; Zheng, Xiao-Dong; Li, Yan-Dong
2015-09-01
Seismic coherence is used to detect discontinuities in underground media. However, strata with steeply dipping structures often produce falsely low coherence estimates and thus incorrect discontinuity characterization. It is therefore important to eliminate or reduce the effect of dip on coherence estimates. To solve this problem, time-domain dip scanning is typically used to improve coherence estimation in areas with steeply dipping structures; however, the accuracy of time-domain dip estimation is limited by the sampling interval. In contrast, the spectral amplitude is not affected by the time delays that dipping structures introduce between adjacent seismic traces. We propose a coherency algorithm that uses the spectral amplitudes of the seismic traces within a predefined analysis window to construct a covariance matrix. The coherency estimate of the proposed algorithm is defined as the ratio between the dominant eigenvalue and the sum of all eigenvalues of the constructed covariance matrix, which eliminates the effect of dipping structures on the estimates. In addition, because different frequency bands of the spectral amplitudes can be used to estimate coherency, the proposed algorithm has multiscale features: low frequencies are effective for characterizing large-scale faults, whereas high frequencies are better at characterizing small-scale faults. Applications to synthetic and real seismic data show that the proposed algorithm eliminates the effect of dip and produces better coherence estimates than conventional coherency algorithms in areas with steeply dipping structures.
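The eigenstructure measure condenses to a few lines; the helper name spectral_coherency, window size, and traces below are illustrative. Because the amplitude spectrum is insensitive to trace-to-trace time shifts, a dipping but continuous event still scores as coherent:

```python
# Eigenvalue-ratio coherence from the spectral amplitudes of the traces
# in one analysis window.
import numpy as np

def spectral_coherency(traces):
    """traces: array of shape (n_traces, n_samples) from one window."""
    amps = np.abs(np.fft.rfft(traces, axis=1))   # shift-insensitive amplitudes
    cov = amps @ amps.T                          # covariance over traces
    eig = np.linalg.eigvalsh(cov)                # ascending eigenvalues
    return eig[-1] / eig.sum()                   # 1.0 means fully coherent

rng = np.random.default_rng(4)
wavelet = np.sin(2 * np.pi * np.arange(64) / 16) * np.hanning(64)
dipping = np.array([np.roll(wavelet, d) for d in (0, 2, 4)])  # time-shifted
print("dipping but continuous:", spectral_coherency(dipping))   # ~1.0
print("random traces:        ",
      spectral_coherency(rng.standard_normal((3, 64))))         # clearly lower
```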
Algorithm based on the short-term Rényi entropy and IF estimation for noisy EEG signals analysis.
Lerga, Jonatan; Saulig, Nicoletta; Mozetič, Vladimir
2017-01-01
Stochastic electroencephalogram (EEG) signals are known to be nonstationary and often multicomponential. Detecting and extracting their components may help clinicians localize brain neurological dysfunction in patients with motor control disorders, since movement-related cortical activities are reflected in spectral EEG changes. This paper proposes a new algorithm for detecting EEG signal components from the signal's time-frequency distribution (TFD). The algorithm utilizes a modification of the Rényi entropy-based technique for estimating the number of components, called short-term Rényi entropy (STRE), upgraded with an iterative algorithm shown to enhance existing approaches. Combined with instantaneous frequency (IF) estimation, the proposed method was applied to limb-movement EEG signals in both noise-free and noisy environments, and was shown to be an efficient technique providing a spectral description of brain activity at each electrode location up to moderate additive noise levels. Furthermore, the information obtained concerning the number of EEG signal components and their IFs shows potential to enhance diagnostics and treatment of neurological disorders in patients with motor control illnesses. Copyright © 2016 Elsevier Ltd. All rights reserved.
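A simplified illustration of the STRE idea follows, using a plain spectrogram in place of the distributions used in the paper; the sampling rate, window, and entropy order 3 are illustrative choices. Adding a second component raises the slice entropy by roughly one bit:

```python
# Short-term Rényi entropy over the time slices of a spectrogram: more
# simultaneous components give higher per-slice entropy.
import numpy as np
from scipy.signal import spectrogram

fs = 1000
t = np.arange(0, 2, 1 / fs)
# One component in the first second, two components in the second second
sig = np.sin(2 * np.pi * 100 * t)
sig[t >= 1] += np.sin(2 * np.pi * 250 * t[t >= 1])

f, tt, S = spectrogram(sig, fs=fs, nperseg=256)
alpha = 3                                     # entropy order used in STRE work
P = S / S.sum(axis=0, keepdims=True)          # normalize each time slice
H = np.log2((P ** alpha).sum(axis=0)) / (1 - alpha)   # order-3 Rényi entropy

print("mean STRE, one component :", H[tt < 1].mean())
print("mean STRE, two components:", H[tt >= 1].mean())  # higher by ~1 bit
```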
A hyperspectral image projector for hyperspectral imagers
NASA Astrophysics Data System (ADS)
Rice, Joseph P.; Brown, Steven W.; Neira, Jorge E.; Bousquet, Robert R.
2007-04-01
We have developed and demonstrated a Hyperspectral Image Projector (HIP) intended for system-level validation testing of hyperspectral imagers, including the instrument and any associated spectral unmixing algorithms. HIP, based on the same digital micromirror arrays used in commercial digital light processing (DLP*) displays, is capable of projecting any combination of many different arbitrarily programmable basis spectra into each image pixel at up to video frame rates. We use a scheme whereby one micromirror array is used to produce light having the spectra of endmembers (e.g., vegetation, water, minerals), and a second micromirror array, optically in series with the first, projects any combination of these arbitrarily-programmable spectra into the pixels of a 1024 x 768 element spatial image, thereby producing temporally-integrated images having spectrally mixed pixels. HIP goes beyond conventional DLP projectors in that each spatial pixel can have an arbitrary spectrum, not just arbitrary color. As such, the resulting spectral and spatial content of the projected image can simulate realistic scenes that a hyperspectral imager will measure during its use. Also, the spectral radiance of the projected scenes can be measured with a calibrated spectroradiometer, such that the spectral radiance projected into each pixel of the hyperspectral imager can be accurately known. Use of such projected scenes in a controlled laboratory setting would alleviate expensive field testing of instruments, allow better separation of environmental effects from instrument effects, and enable system-level performance testing and validation of hyperspectral imagers as used with analysis algorithms. For example, known mixtures of relevant endmember spectra could be projected into arbitrary spatial pixels in a hyperspectral imager, enabling tests of how well a full system, consisting of the instrument + calibration + analysis algorithm, performs in unmixing (i.e. de-convolving) the spectra in all pixels. We discuss here the performance of a visible prototype HIP. The technology is readily extendable to the ultraviolet and infrared spectral ranges, and the scenes can be static or dynamic.
Reconstructing Spectral Scenes Using Statistical Estimation to Enhance Space Situational Awareness
2006-12-01
simultaneously spatially and spectrally deblur the images collected from ASIS. The algorithms are based on proven estimation theories and do not ... collected with any system using a filtering technology known as Electronic Tunable Filters (ETFs). Previous methods to deblur spectral images collected ... spectrally deblurring than the previously investigated methods. This algorithm expands on a method used for increasing the spectral resolution in gamma-ray
Evaluating an image-fusion algorithm with synthetic-image-generation tools
NASA Astrophysics Data System (ADS)
Gross, Harry N.; Schott, John R.
1996-06-01
An algorithm that combines spectral mixing and nonlinear optimization is used to fuse multiresolution images. Image fusion merges images of different spatial and spectral resolutions to create a high spatial resolution multispectral combination. High spectral resolution allows identification of materials in the scene, while high spatial resolution locates those materials. In this algorithm, conventional spectral mixing estimates the percentage of each material (called endmembers) within each low resolution pixel. Three spectral mixing models are compared: unconstrained, partially constrained, and fully constrained. In the partially constrained application, the endmember fractions are required to sum to one. In the fully constrained application, all fractions are additionally required to lie between zero and one. While negative fractions seem inappropriate, they can arise from random spectral realizations of the materials. In the second part of the algorithm, the low resolution fractions are used as inputs to a constrained nonlinear optimization that calculates the endmember fractions for the high resolution pixels. The constraints mirror the low resolution constraints and maintain consistency with the low resolution fraction results. The algorithm can use one or more higher resolution sharpening images to locate the endmembers to high spatial accuracy. The algorithm was evaluated with synthetic image generation (SIG) tools. A SIG developed image can be used to control the various error sources that are likely to impair the algorithm performance. These error sources include atmospheric effects, mismodeled spectral endmembers, and variability in topography and illumination. By controlling the introduction of these errors, the robustness of the algorithm can be studied and improved upon. The motivation for this research is to take advantage of the next generation of multi/hyperspectral sensors. Although the hyperspectral images will be of modest to low resolution, fusing them with high resolution sharpening images will produce a higher spatial resolution land cover or material map.
NASA Astrophysics Data System (ADS)
Abramovich, N. S.; Kovalev, A. A.; Plyuta, V. Y.
1986-02-01
A computer algorithm has been developed to classify the spectral bands of natural scenes on Earth according to their optical characteristics. The algorithm is written in FORTRAN-IV and can be used in spectral data processing programs requiring small data loads. The spectral classifications of several different types of green vegetation canopies are given to illustrate the effectiveness of the algorithm.
Developing an Automated Science Analysis System for Mars Surface Exploration for MSL and Beyond
NASA Technical Reports Server (NTRS)
Gulick, V. C.; Hart, S. D.; Shi, X.; Siegel, V. L.
2004-01-01
We are developing an automated science analysis system that could be utilized by robotic or human explorers on Mars (or even in remote locations on Earth) to improve the quality and quantity of science data returned. Three components of this system (our rock, layer, and horizon detectors) [1] have been incorporated into the JPL CLARITY system for possible use by MSL and future Mars robotic missions. Two other components include a multi-spectral image compression (SPEC) algorithm for pancam-type images with multiple filters and image fusion algorithms that identify the in focus regions of individual images in an image focal series [2]. Recently, we have been working to combine image and spectral data, and other knowledge to identify both rocks and minerals. Here we present our progress on developing an igneous rock detection system.
Red Blood Cell Count Automation Using Microscopic Hyperspectral Imaging Technology.
Li, Qingli; Zhou, Mei; Liu, Hongying; Wang, Yiting; Guo, Fangmin
2015-12-01
Red blood cell counts have been proven to be one of the most frequently performed blood tests and are valuable for the early diagnosis of some diseases. This paper describes an automated red blood cell counting method based on microscopic hyperspectral imaging technology. Unlike light-microscopy-based red blood cell counting methods, a combined spatial and spectral algorithm is proposed to identify red blood cells by integrating active contour models and automated two-dimensional k-means with a spectral angle mapper algorithm. Experimental results show that the proposed algorithm performs better than a purely spatial algorithm because it can jointly use the spatial and spectral information of blood cells.
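The spectral angle mapper step, in its standard form (the toy signature below is an assumption): small angles mean "same material" regardless of brightness, which is why SAM pairs well with illumination- and stain-varying microscopy data.

```python
# Spectral angle between a pixel spectrum and a reference spectrum.
import numpy as np

def spectral_angle(pixel, reference):
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) *
                                      np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))   # angle in radians

ref = np.array([0.2, 0.5, 0.9, 0.4])            # toy cell signature (assumed)
same_material = 2.0 * ref                        # brighter but same shape
other = np.array([0.9, 0.4, 0.2, 0.6])
print(spectral_angle(same_material, ref))        # ~0: matches the reference
print(spectral_angle(other, ref))                # large angle: different
```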
Data compressive paradigm for multispectral sensing using tunable DWELL mid-infrared detectors.
Jang, Woo-Yong; Hayat, Majeed M; Godoy, Sebastián E; Bender, Steven C; Zarkesh-Ha, Payman; Krishna, Sanjay
2011-09-26
While quantum dots-in-a-well (DWELL) infrared photodetectors have the feature that their spectral responses can be shifted continuously by varying the applied bias, the width of the spectral response at any applied bias is not sufficiently narrow for use in multispectral sensing without the aid of spectral filters. To achieve higher spectral resolutions without using physical spectral filters, algorithms have been developed for post-processing the DWELL's bias-dependent photocurrents resulting from probing an object of interest repeatedly over a wide range of applied biases. At the heart of these algorithms is the ability to approximate an arbitrary spectral filter, which we desire the DWELL-algorithm combination to mimic, by forming a weighted superposition of the DWELL's non-orthogonal spectral responses over a range of applied biases. However, these algorithms assume availability of abundant DWELL data over a large number of applied biases (>30), leading to large overall acquisition times in proportion with the number of biases. This paper reports a new multispectral sensing algorithm to substantially compress the number of necessary bias values subject to a prescribed performance level across multiple sensing applications. The algorithm identifies a minimal set of biases to be used in sensing only the relevant spectral information for remote-sensing applications of interest. Experimental results on target spectrometry and classification demonstrate a reduction in the number of required biases by a factor of 7 (e.g., from 30 to 4). The tradeoff between performance and bias compression is thoroughly investigated. © 2011 Optical Society of America
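A toy version of the weighted-superposition step described above; the Gaussian bias-dependent responses are placeholders rather than measured DWELL curves, and non-negative least squares stands in for the paper's projection machinery:

```python
# Approximate a desired narrow spectral filter as a non-negative weighted
# sum of broad, bias-dependent responses.
import numpy as np
from scipy.optimize import nnls

wl = np.linspace(3, 12, 400)                 # wavelength in micrometers
def response(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Broad responses whose peaks drift with applied bias (assumed shapes)
biases = np.linspace(4, 11, 30)
R = np.array([response(c, 1.5) for c in biases])   # 30 biases x 400 samples

target = response(8.0, 0.3)                  # desired narrow filter
weights, resid = nnls(R.T, target)           # superposition weights >= 0
print("approximation residual:", resid)
print("biases actually used  :", np.count_nonzero(weights > 1e-6))
```

The count of biases receiving nonzero weight hints at the bias-compression effect the paper formalizes: only a small subset of biases carries the relevant spectral information.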
Optimisation and evaluation of hyperspectral imaging system using machine learning algorithm
NASA Astrophysics Data System (ADS)
Suthar, Gajendra; Huang, Jung Y.; Chidangil, Santhosh
2017-10-01
Hyperspectral imaging (HSI), also called imaging spectrometry, originated from remote sensing. It is an emerging imaging modality for medical applications, especially disease diagnosis and image-guided surgery. HSI acquires a three-dimensional dataset called a hypercube, with two spatial dimensions and one spectral dimension. The spatially resolved spectral imaging obtained by HSI provides diagnostic information about an object's physiology, morphology, and composition. The present work involves testing and evaluating the performance of a hyperspectral imaging system. The methodology involved manually acquiring reflectance images or scans of the objects, for which cabbage and tomato were used. The data were then converted to the required format and analyzed using machine learning algorithms. The applied algorithms were able to distinguish between the objects present in the hypercube obtained by the scan. It was concluded from the results that the system was working as expected, as shown by the distinct spectra obtained with the machine learning algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anthony, Stephen
The Sandia hyperspectral upper-bound spectrum algorithm (hyper-UBS) is a cosmic ray despiking algorithm for hyperspectral data sets. When naturally-occurring, high-energy (gigaelectronvolt) cosmic rays impact the earth’s atmosphere, they create an avalanche of secondary particles which will register as a large, positive spike on any spectroscopic detector they hit. Cosmic ray spikes are therefore an unavoidable spectroscopic contaminant which can interfere with subsequent analysis. A variety of cosmic ray despiking algorithms already exist and can potentially be applied to hyperspectral data matrices, most notably the upper-bound spectrum data matrices (UBS-DM) algorithm by Dongmao Zhang and Dor Ben-Amotz, which served as the basis for the hyper-UBS algorithm. However, the existing algorithms either cannot be applied to hyperspectral data, require information that is not always available, introduce undesired spectral bias, or have otherwise limited effectiveness for some experimentally relevant conditions. Hyper-UBS is more effective at removing a wider variety of cosmic ray spikes from hyperspectral data without introducing undesired spectral bias. In addition to the core algorithm, the Sandia hyper-UBS software package includes additional source code useful in evaluating the effectiveness of the hyper-UBS algorithm. The accompanying source code includes code to generate simulated hyperspectral data contaminated by cosmic ray spikes, several existing despiking algorithms, and code to evaluate the performance of the despiking algorithms on simulated data.
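The hyper-UBS algorithm itself lives in the package; for orientation, a generic median/MAD despiking baseline of the kind such algorithms improve upon looks like this (the helper name, window, and threshold are arbitrary illustrative choices):

```python
# Flag points far above a local median of the spectrum and replace them
# by that median; cosmic-ray spikes are positive-going by nature.
import numpy as np
from scipy.ndimage import median_filter

def despike(spectrum, window=7, nsigma=8.0):
    med = median_filter(spectrum, size=window, mode='nearest')
    resid = spectrum - med
    mad = np.median(np.abs(resid)) + 1e-12        # robust noise scale
    spikes = resid > nsigma * 1.4826 * mad        # positive spikes only
    out = spectrum.copy()
    out[spikes] = med[spikes]
    return out, spikes

rng = np.random.default_rng(5)
spec = 100 + 10 * np.sin(np.linspace(0, 6, 500)) + rng.normal(0, 1, 500)
spec[123] += 400                                  # inject a cosmic-ray spike
clean, spikes = despike(spec)
print("spike located at channels:", np.flatnonzero(spikes))
```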
NASA Astrophysics Data System (ADS)
Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael; Berk, Alexander; Anderson, Gail; Gardner, James; Felde, Gerald
2005-10-01
Atmospheric Correction Algorithms (ACAs) are used in applications of remotely sensed Hyperspectral and Multispectral Imagery (HSI/MSI) to correct for atmospheric effects on measurements acquired by air and space-borne systems. The Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) algorithm is a forward-model based ACA created for HSI and MSI instruments which operate in the visible through shortwave infrared (Vis-SWIR) spectral regime. Designed as a general-purpose, physics-based code for inverting at-sensor radiance measurements into surface reflectance, FLAASH provides a collection of spectral analysis and atmospheric retrieval methods including: a per-pixel vertical water vapor column estimate, determination of aerosol optical depth, estimation of scattering for compensation of adjacency effects, detection/characterization of clouds, and smoothing of spectral structure resulting from an imperfect atmospheric correction. To further improve the accuracy of the atmospheric correction process, FLAASH will also detect and compensate for sensor-introduced artifacts such as optical smile and wavelength mis-calibration. FLAASH relies on the MODTRAN™ radiative transfer (RT) code as the physical basis behind its mathematical formulation, and has been developed in parallel with upgrades to MODTRAN in order to take advantage of the latest improvements in speed and accuracy. For example, the rapid, high fidelity multiple scattering (MS) option available in MODTRAN4 can greatly improve the accuracy of atmospheric retrievals over the 2-stream approximation. In this paper, advanced features available in FLAASH are described, including the principles and methods used to derive atmospheric parameters from HSI and MSI data. Results are presented from processing of Hyperion, AVIRIS, and LANDSAT data.
Algorithms and Results of Eye Tissues Differentiation Based on RF Ultrasound
Jurkonis, R.; Janušauskas, A.; Marozas, V.; Jegelevičius, D.; Daukantas, S.; Patašius, M.; Paunksnis, A.; Lukoševičius, A.
2012-01-01
Algorithms and software were developed for the analysis of B-scan ultrasonic signals acquired from a commercial diagnostic ultrasound system. The algorithms process raw ultrasonic signals in the backscattered spectrum domain, which is obtained using two time-frequency methods: short-time Fourier and Hilbert-Huang transformations. The signals from selected regions of eye tissues are characterized by the following parameters: B-scan envelope amplitude, approximated spectral slope, approximated spectral intercept, mean instantaneous frequency, mean instantaneous bandwidth, and the parameters of the Nakagami distribution characterizing the Hilbert-Huang transformation output. The backscattered ultrasound signal parameters characterizing intraocular and orbit tissues were processed by a decision-tree data mining algorithm. The pilot trial proved that the applied methods are able to correctly classify signals from corpus vitreum blood, extraocular muscle, and orbit tissues: in 26 cases of ocular tissue classification into these three classes, only one error occurred. In this pilot classification, the spectral intercept and the Nakagami parameter for the instantaneous frequency distribution of the 1st intrinsic mode function were found to be specific to corpus vitreum blood, orbit, and extraocular muscle tissues. We conclude that ultrasound data should be collected further in a clinical database to establish the background for a decision support system for noninvasive ocular tissue differentiation. PMID:22654643
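Two of the listed spectral parameters can be sketched directly; this is a hedged example in which the usable band, sampling rate, and toy RF segment are illustrative rather than the system's actual settings. A line is fitted to the backscattered power spectrum in dB: its slope is the approximated spectral slope and its extrapolation to zero frequency the approximated spectral intercept.

```python
# Approximated spectral slope and intercept from one RF segment.
import numpy as np

def slope_intercept(rf_segment, fs, band=(5e6, 15e6)):
    freqs = np.fft.rfftfreq(rf_segment.size, d=1 / fs)
    power_db = 20 * np.log10(np.abs(np.fft.rfft(rf_segment)) + 1e-12)
    sel = (freqs >= band[0]) & (freqs <= band[1])   # usable bandwidth
    slope, intercept = np.polyfit(freqs[sel], power_db[sel], 1)
    return slope, intercept                  # dB/Hz and dB at 0 Hz

fs = 50e6
t = np.arange(2048) / fs
rf = np.random.default_rng(6).standard_normal(t.size) * np.exp(-t * 2e5)
print(slope_intercept(rf, fs))
```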
NASA Technical Reports Server (NTRS)
Huck, F. O.; Davis, R. E.; Fales, C. L.; Aherron, R. M.
1982-01-01
A computational model of the deterministic and stochastic processes involved in remote sensing is used to study spectral feature identification techniques for real-time onboard processing of data acquired with advanced earth-resources sensors. Preliminary results indicate that: Narrow spectral responses are advantageous; signal normalization improves mean-square distance (MSD) classification accuracy but tends to degrade maximum-likelihood (MLH) classification accuracy; and MSD classification of normalized signals performs better than the computationally more complex MLH classification when imaging conditions change appreciably from those conditions during which reference data were acquired. The results also indicate that autonomous categorization of TM signals into vegetation, bare land, water, snow and clouds can be accomplished with adequate reliability for many applications over a reasonably wide range of imaging conditions. However, further analysis is required to develop computationally efficient boundary approximation algorithms for such categorization.
NASA Technical Reports Server (NTRS)
Harwit, M.; Swift, R.; Wattson, R.; Decker, J.; Paganetti, R.
1976-01-01
A spectrometric imager and a thermal imager, which achieve multiplexing by the use of binary optical encoding masks, were developed. The masks are based on orthogonal, pseudorandom digital codes derived from Hadamard matrices. Spatial and/or spectral data is obtained in the form of a Hadamard transform of the spatial and/or spectral scene; computer algorithms are then used to decode the data and reconstruct images of the original scene. The hardware, algorithms and processing/display facility are described. A number of spatial and spatial/spectral images are presented. The achievement of a signal-to-noise improvement due to the signal multiplexing was also demonstrated. An analysis of the results indicates both the situations for which the multiplex advantage may be gained, and the limitations of the technique. A number of potential applications of the spectrometric imager are discussed.
Multispectral autofluorescence diagnosis of non-melanoma cutaneous tumors
NASA Astrophysics Data System (ADS)
Borisova, Ekaterina; Dogandjiiska, Daniela; Bliznakova, Irina; Avramov, Latchezar; Pavlova, Elmira; Troyanova, Petranka
2009-07-01
Fluorescence analysis of basal cell carcinoma (BCC), squamous cell carcinoma (SCC), keratoacanthoma, and benign cutaneous lesions was carried out during the initial phase of a clinical trial at the National Oncological Center - Sofia. Excitation sources with emission maxima at 365, 380, 405, 450 and 630 nm are applied for better differentiation of the fluorescence of nonmelanoma malignant cutaneous lesions and for spectral discrimination from benign pathologies. The major spectral features are identified, and diagnostic discrimination algorithms based on the lesions' emission properties are proposed. The diagnostic algorithms and evaluation procedures developed will be applied to the development of an optical-biopsy clinical system for skin cancer detection within the National Oncological Center and other university hospital dermatological departments in our country.
An improved feature extraction algorithm based on KAZE for multi-spectral image
NASA Astrophysics Data System (ADS)
Yang, Jianping; Li, Jun
2018-02-01
Multi-spectral images contain abundant spectral information and are widely used in fields such as resource exploration, meteorological observation, and modern military applications. Image preprocessing, such as feature extraction and matching, is indispensable when dealing with multi-spectral remote sensing images. Although feature matching algorithms based on linear scale spaces, such as SIFT and SURF, are robust, their local accuracy cannot be guaranteed. Therefore, this paper proposes an improved KAZE algorithm, based on a nonlinear scale space, to raise the number of features and to enhance the matching rate by using the adjusted-cosine vector. The experimental results show that the number of features and the matching rate of the improved KAZE are remarkably better than those of the original KAZE algorithm.
Khouj, Yasser; Dawson, Jeremy; Coad, James; Vona-Davis, Linda
2018-01-01
Hyperspectral imaging (HSI) is a non-invasive optical imaging modality that shows potential to aid pathologists in breast cancer diagnosis. In this study, breast cancer tissues from different patients were imaged by a hyperspectral system to detect spectral differences between normal and breast cancer tissues. Tissue samples mounted on slides were identified from 10 different patients. Samples from each patient included both normal and ductal carcinoma tissue, both stained with hematoxylin and eosin stain and unstained. Slides were imaged using a snapshot HSI system, and the spectral reflectance differences were evaluated. Analysis of the spectral reflectance values indicated that wavelengths near 550 nm showed the best differentiation between tissue types. This information was used to train image processing algorithms using supervised and unsupervised data. The K-means method was applied to the hyperspectral data cubes and successfully detected spectral tissue differences with a sensitivity of 85.45% and a specificity of 94.64%, with a true negative rate of 95.8% and a false positive rate of 4.2%. These results were verified by ground-truth marking of the tissue samples by a pathologist. In this hyperspectral image analysis, the K-means image processing algorithm shows the greatest potential for building a semi-automated system that could identify and sort between normal and ductal carcinoma in situ tissues.
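A reduced sketch of the unsupervised step, using scikit-learn; the synthetic cube with a spectrally bumped "lesion" region replaces real stained-tissue data:

```python
# Flatten the hypercube to a (pixels x bands) matrix and let K-means
# split the spectra into tissue classes.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
rows, cols, bands = 32, 32, 40
cube = rng.normal(0.4, 0.02, (rows, cols, bands))
cube[10:20, 10:20, :] += 0.1 * np.hanning(bands)   # region with spectral bump

pixels = cube.reshape(-1, bands)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
label_map = labels.reshape(rows, cols)             # segmentation image
print(np.bincount(labels))                         # region vs background sizes
```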
NASA Astrophysics Data System (ADS)
Ivanov, Victor; Osetrov, Evgenii
2018-02-01
In this paper, we investigate the possibility of applying various approaches to the problem of medium-term forecasting of daily passenger traffic volumes in the Moscow metro (MM): 1) on the basis of artificial neural networks (ANN); 2) using the singular-spectrum analysis implemented in the "Caterpillar"-SSA package; 3) combining the ANN and "Caterpillar"-SSA approaches. We demonstrate that the developed methods and algorithms allow us to conduct medium-term forecasting of passenger traffic in the MM with reasonable accuracy.
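A compact sketch of the SSA decomposition at the core of the "Caterpillar" approach; the window length, component choice, and the toy series with a weekly cycle are illustrative:

```python
# Basic SSA: embed the series in a trajectory matrix, take its SVD, and
# reconstruct the leading components by anti-diagonal (Hankel) averaging.
import numpy as np

def ssa_reconstruct(x, L, components):
    N = x.size
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = sum(s[k] * np.outer(U[:, k], Vt[k]) for k in components)
    rec = np.zeros(N)                       # average back to a series
    counts = np.zeros(N)
    for i in range(L):
        for j in range(K):
            rec[i + j] += Xr[i, j]
            counts[i + j] += 1
    return rec / counts

t = np.arange(365)
rng = np.random.default_rng(8)
daily = (7e6 + 5e5 * np.sin(2 * np.pi * t / 7)       # trend plus weekly cycle
         + 1e5 * rng.standard_normal(t.size))
trend_and_week = ssa_reconstruct(daily, L=60, components=[0, 1, 2])
print(trend_and_week[:5].astype(int))
```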
Joint demosaicking and zooming using moderate spectral correlation and consistent edge map
NASA Astrophysics Data System (ADS)
Zhou, Dengwen; Dong, Weiming; Chen, Wengang
2014-07-01
The recently published joint demosaicking and zooming algorithms for single-sensor digital cameras all overfit the popular Kodak test images, which have been found to have higher spectral correlation than typical color images. Their performance may therefore degrade significantly on other datasets, such as the McMaster test images, which have weak spectral correlation. A new joint demosaicking and zooming algorithm is proposed for the Bayer color filter array (CFA) pattern, in which the edge direction information (edge map) extracted from the raw CFA data is used consistently in both demosaicking and zooming. It also makes moderate use of the spectral correlation between color planes. The experimental results confirm that the proposed algorithm performs excellently on both the Kodak and McMaster datasets in terms of both subjective and objective measures. Our algorithm also has high computational efficiency, providing a better tradeoff among adaptability, performance, and computational cost than the existing algorithms.
NASA Astrophysics Data System (ADS)
Zhao, Runchen; Ientilucci, Emmett J.
2017-05-01
Hyperspectral remote sensing systems provide spectral data composed of hundreds of narrow spectral bands and can be used, for example, to identify targets without physical interaction. It is often of interest to characterize the spectral variability of targets or objects. The purpose of this paper is to identify and characterize the LWIR spectral variability of targets based on an improved earth-observing statistical performance model, known as the Forecasting and Analysis of Spectroradiometric System Performance (FASSP) model. FASSP contains three basic modules: a scene model, a sensor model, and a processing model. Instead of using only the mean surface reflectance as input, FASSP propagates user-defined statistical characteristics of a scene through the image chain (i.e., from source to sensor). The radiative transfer model MODTRAN is used to simulate radiative transfer based on user-defined atmospheric parameters. To retrieve class emissivity and temperature statistics, i.e., temperature/emissivity separation (TES), a LWIR atmospheric compensation method is necessary. The FASSP model has a method to transform statistics in the visible (i.e., ELM) but currently has no LWIR TES algorithm in place. This paper addresses the implementation of such a TES algorithm and its associated transformation of statistics.
[Road Extraction in Remote Sensing Images Based on Spectral and Edge Analysis].
Zhao, Wen-zhi; Luo, Li-qun; Guo, Zhou; Yue, Jun; Yu, Xue-ying; Liu, Hui; Wei, Jing
2015-10-01
Roads are typical man-made objects in urban areas. Road extraction from high-resolution images has important applications in urban planning and transportation development. However, due to confusable spectral characteristics, it is difficult to distinguish roads from other objects using traditional classification methods that depend mainly on spectral information. Edges are an important feature for identifying linear objects (e.g., roads), and the distribution patterns of edges vary greatly among different objects, so it is crucial to merge edge statistics with spectral information. In this study, a new method that combines spectral information with edge statistical features is proposed. First, edge detection is conducted using a self-adaptive mean-shift algorithm on the panchromatic band, which greatly reduces pseudo-edges and noise effects. Then, edge statistical features are obtained from an edge statistical model that measures the length and angle distributions of edges. Finally, integrating the spectral and edge statistical features, an SVM algorithm is used to classify the image, and roads are ultimately extracted. A series of experiments shows that the overall accuracy of the proposed method is 93%, compared with only 78% for the traditional method. The results demonstrate that the proposed method is efficient and valuable for road extraction, especially from high-resolution images.
The Comparison Between NMF and ICA in Pigment Mixture Identification of Ancient Chinese Paintings
NASA Astrophysics Data System (ADS)
Liu, Y.; Lyu, S.; Hou, M.; Yin, Q.
2018-04-01
Since the colour of painted cultural relics observed by the naked eye or by hyperspectral cameras is usually a mixture of several pigments, mixed-pigment analysis is an important subject in the conservation and restoration of ancient paintings. This paper aims to find a more effective method to identify each pure pigment within a mixture on the surface of a painting. First, we adopted two blind source separation algorithms, independent component analysis and non-negative matrix factorization, to extract the pure pigment components from mixed spectra. We then matched each separated pure spectrum against the pigment spectral library built by our team to determine the pigment type. Three kinds of data, including simulated data, mixed-pigment spectra measured in the laboratory, and spectral data from an ancient painting, were chosen to evaluate the performance of the two algorithms, and their accuracies were compared. The experimental results show that the non-negative matrix factorization method is more suitable for endmember extraction in the conservation and restoration of ancient paintings.
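A hedged sketch of the NMF route the paper favours, using scikit-learn; the stand-in pigment library and spectral-angle matching are our illustrative choices:

```python
# Factor measured mixture spectra into non-negative pure-pigment spectra
# and non-negative mixing weights, then match the recovered spectra
# against a reference library by spectral angle.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(9)
bands = 120
library = rng.uniform(0.05, 1.0, (4, bands))        # stand-in pigment library
A = rng.dirichlet(np.ones(2), size=30)              # 30 mixtures of pigments 0, 1
V = A @ library[:2] + 0.01 * rng.random((30, bands))

model = NMF(n_components=2, init='nndsvda', max_iter=1000, random_state=0)
W = model.fit_transform(V)                          # mixing weights
H = model.components_                               # recovered "pure" spectra

def angle(a, b):
    c = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(c, -1.0, 1.0))

for k, h in enumerate(H):
    best = min(range(4), key=lambda j: angle(h, library[j]))
    print(f"recovered component {k} best matches library pigment {best}")
```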
NASA Astrophysics Data System (ADS)
Doha, E.; Bhrawy, A.
2006-06-01
It is well known that spectral methods (tau, Galerkin, collocation) have a condition number of O(N^4) (N is the number of retained modes of polynomial approximations). This paper presents some efficient spectral algorithms, which have a condition number of O(N^2), based on the Jacobi-Galerkin methods of second-order elliptic equations in one and two space variables. The key to the efficiency of these algorithms is to construct appropriate base functions, which lead to systems with specially structured matrices that can be efficiently inverted. The complexities of the algorithms are a small multiple of N^(d+1) operations for a d-dimensional domain with (N-1)^d unknowns, while the convergence rates of the algorithms are exponential for smooth solutions.
Combined process automation for large-scale EEG analysis.
Sfondouris, John L; Quebedeaux, Tabitha M; Holdgraf, Chris; Musto, Alberto E
2012-01-01
Epileptogenesis is a dynamic process producing increased seizure susceptibility. Electroencephalography (EEG) data provides information critical in understanding the evolution of epileptiform changes throughout epileptic foci. We designed an algorithm to facilitate efficient large-scale EEG analysis via linked automation of multiple data processing steps. Using EEG recordings obtained from electrical stimulation studies, the following steps of EEG analysis were automated: (1) alignment and isolation of pre- and post-stimulation intervals, (2) generation of user-defined band frequency waveforms, (3) spike-sorting, (4) quantification of spike and burst data and (5) power spectral density analysis. This algorithm allows for quicker, more efficient EEG analysis. Copyright © 2011 Elsevier Ltd. All rights reserved.
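Step (5) of the pipeline in isolation, as a hedged batch example with SciPy; the sampling rate, segment layout, and band edges are illustrative:

```python
# Batch power spectral density estimation over many EEG segments with
# Welch's method, reducing each segment to a band-power summary.
import numpy as np
from scipy.signal import welch

fs = 500
segments = np.random.default_rng(10).standard_normal((40, 10 * fs))  # 40 trials

band_power = []
for seg in segments:
    f, pxx = welch(seg, fs=fs, nperseg=2 * fs)      # 0.5 Hz resolution
    theta = pxx[(f >= 4) & (f < 8)].mean()          # example band: theta
    band_power.append(theta)
print("mean theta-band power over trials:", np.mean(band_power))
```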
Filtered gradient reconstruction algorithm for compressive spectral imaging
NASA Astrophysics Data System (ADS)
Mejia, Yuri; Arguello, Henry
2017-04-01
Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, they are subject to physical constraints, and their structure usually does not follow a dense matrix distribution, as is the case for the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function including a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm, which introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm and thereby yields improved image quality, is proposed. Motivated by the structure of the CSI matrix, Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual Φ^T y, where y is the compressive measurement vector. We show that the filtered algorithm converges to better quality performance results than the unfiltered version. Simulation results highlight the relative performance gain over the existing iterative algorithms.
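A schematic of the filtered-gradient idea on a generic sparse-recovery problem; this is not the paper's CSI-specific filter, and an ISTA-style iteration with an added smoothing step stands in for it:

```python
# ISTA-style sparse recovery with an extra filtering step per iteration.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(11)
n, m = 200, 80
Phi = rng.standard_normal((m, n)) / np.sqrt(m)      # generic sensing matrix
x_true = np.zeros(n)
x_true[[20, 90, 150]] = [1.0, -0.7, 0.5]            # sparse ground truth
y = Phi @ x_true                                    # compressive measurements

x = np.zeros(n)
step = 1.0 / np.linalg.norm(Phi, 2) ** 2            # gradient step size
lam = 0.01
for _ in range(300):
    x = x + step * Phi.T @ (y - Phi @ x)            # gradient step
    x = gaussian_filter1d(x, sigma=0.5)             # the added filtering step
    x = np.sign(x) * np.maximum(np.abs(x) - lam, 0) # soft threshold (sparsity)
print("support found:", np.flatnonzero(np.abs(x) > 0.2))  # expect 20, 90, 150
```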
Multiway spectral community detection in networks
NASA Astrophysics Data System (ADS)
Zhang, Xiao; Newman, M. E. J.
2015-11-01
One of the most widely used methods for community detection in networks is the maximization of the quality function known as modularity. Of the many maximization techniques that have been used in this context, some of the most conceptually attractive are the spectral methods, which are based on the eigenvectors of the modularity matrix. Spectral algorithms have, however, been limited, by and large, to the division of networks into only two or three communities, with divisions into more than three being achieved by repeated two-way division. Here we present a spectral algorithm that can directly divide a network into any number of communities. The algorithm makes use of a mapping from modularity maximization to a vector partitioning problem, combined with a fast heuristic for vector partitioning. We compare the performance of this spectral algorithm with previous approaches and find it to give superior results, particularly in cases where community sizes are unbalanced. We also give demonstrative applications of the algorithm to two real-world networks and find that it produces results in good agreement with expectations for the networks studied.
Multiobjective generalized extremal optimization algorithm for simulation of daylight illuminants
NASA Astrophysics Data System (ADS)
Kumar, Srividya Ravindra; Kurian, Ciji Pearl; Gomes-Borges, Marcos Eduardo
2017-10-01
Daylight illuminants are widely used as references for color quality testing and optical vision testing applications. Presently used daylight simulators make use of fluorescent bulbs that are not tunable and occupy more space inside the quality testing chambers. By designing a spectrally tunable LED light source with an optimal number of LEDs, cost, space, and energy can be saved. This paper describes an application of the generalized extremal optimization (GEO) algorithm for selection of the appropriate quantity and quality of LEDs that compose the light source. The multiobjective approach of this algorithm seeks the best spectral simulation: minimum fitness error with respect to the target spectrum, a correlated color temperature (CCT) matching that of the target spectrum, a high color rendering index (CRI), and the luminous flux required for testing applications. GEO is a global search algorithm based on phenomena of natural evolution and is especially designed for complex optimization problems. Several simulations have been conducted to validate the performance of the algorithm. The methodology applied to model the LEDs, together with the theoretical basis for CCT and CRI calculation, is presented in this paper. A comparative analysis of the results of the M-GEO evolutionary algorithm and the conventional deterministic Levenberg-Marquardt algorithm is also presented.
NASA Technical Reports Server (NTRS)
Matic, Roy M.; Mosley, Judith I.
1994-01-01
Future space-based, remote sensing systems will have data transmission requirements that exceed available downlinks, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based, image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.
Comparative analysis on the selection of number of clusters in community detection
NASA Astrophysics Data System (ADS)
Kawamoto, Tatsuro; Kabashima, Yoshiyuki
2018-02-01
We conduct a comparative analysis on various estimates of the number of clusters in community detection. An exhaustive comparison requires testing of all possible combinations of frameworks, algorithms, and assessment criteria. In this paper we focus on the framework based on a stochastic block model, and investigate the performance of greedy algorithms, statistical inference, and spectral methods. For the assessment criteria, we consider modularity, map equation, Bethe free energy, prediction errors, and isolated eigenvalues. From the analysis, the tendency of overfit and underfit that the assessment criteria and algorithms have becomes apparent. In addition, we propose that the alluvial diagram is a suitable tool to visualize statistical inference results and can be useful to determine the number of clusters.
Method and apparatus for digitally based high speed x-ray spectrometer
Warburton, W.K.; Hubbard, B.
1997-11-04
A high speed, digitally based, signal processing system which accepts input data from a detector-preamplifier and produces a spectral analysis of the x-rays illuminating the detector. The system achieves high throughputs at low cost by dividing the required digital processing steps between a "hardwired" processor implemented in combinatorial digital logic, which detects the presence of the x-ray signals in the digitized data stream and extracts filtered estimates of their amplitudes, and a programmable digital signal processing computer, which refines the filtered amplitude estimates and bins them to produce the desired spectral analysis. One set of algorithms allows this hybrid system to match the resolution of analog systems while operating at much higher data rates. A second set of algorithms implemented in the processor allows the system to be self-calibrating as well. The same processor also handles the interface to an external control computer. 19 figs.
Method and apparatus for digitally based high speed x-ray spectrometer
Warburton, William K.; Hubbard, Bradley
1997-01-01
A high speed, digitally based, signal processing system which accepts input data from a detector-preamplifier and produces a spectral analysis of the x-rays illuminating the detector. The system achieves high throughputs at low cost by dividing the required digital processing steps between a "hardwired" processor implemented in combinatorial digital logic, which detects the presence of the x-ray signals in the digitized data stream and extracts filtered estimates of their amplitudes, and a programmable digital signal processing computer, which refines the filtered amplitude estimates and bins them to produce the desired spectral analysis. One set of algorithms allows this hybrid system to match the resolution of analog systems while operating at much higher data rates. A second set of algorithms implemented in the processor allows the system to be self-calibrating as well. The same processor also handles the interface to an external control computer.
NASA Astrophysics Data System (ADS)
Oommen, T.; Chatterjee, S.
2017-12-01
NASA and the Indian Space Research Organization (ISRO) are generating Earth surface features data using Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) within 380 to 2500 nm spectral range. This research focuses on the utilization of such data to better understand the mineral potential in India and to demonstrate the application of spectral data in rock type discrimination and mapping for mineral exploration by using automated mapping techniques. The primary focus area of this research is the Hutti-Maski greenstone belt, located in Karnataka, India. The AVIRIS-NG data was integrated with field analyzed data (laboratory scaled compositional analysis, mineralogy, and spectral library) to characterize minerals and rock types. An expert system was developed to produce mineral maps from AVIRIS-NG data automatically. The ground truth data from the study areas was obtained from the existing literature and collaborators from India. The Bayesian spectral unmixing algorithm was used in AVIRIS-NG data for endmember selection. The classification maps of the minerals and rock types were developed using support vector machine algorithm. The ground truth data was used to verify the mineral maps.
Song, Jiajia; Li, Dan; Ma, Xiaoyuan; Teng, Guowei; Wei, Jianming
2017-01-01
Dynamic accurate heart-rate (HR) estimation using a photoplethysmogram (PPG) during intense physical activities is always challenging due to corruption by motion artifacts (MAs). It is difficult to reconstruct a clean signal and extract HR from contaminated PPG. This paper proposes a robust HR-estimation algorithm framework that uses one-channel PPG and tri-axis acceleration data to reconstruct the PPG and calculate the HR based on features of the PPG and spectral analysis. Firstly, the signal is judged by the presence of MAs. Then, the spectral peaks corresponding to acceleration data are filtered from the periodogram of the PPG when MAs exist. Different signal-processing methods are applied based on the amount of remaining PPG spectral peaks. The main MA-removal algorithm (NFEEMD) includes the repeated single-notch filter and ensemble empirical mode decomposition. Finally, HR calibration is designed to ensure the accuracy of HR tracking. The NFEEMD algorithm was performed on the 23 datasets from the 2015 IEEE Signal Processing Cup Database. The average estimation errors were 1.12 BPM (12 training datasets), 2.63 BPM (10 testing datasets) and 1.87 BPM (all 23 datasets), respectively. The Pearson correlation was 0.992. The experiment results illustrate that the proposed algorithm is not only suitable for HR estimation during continuous activities, like slow running (13 training datasets), but also for intense physical activities with acceleration, like arm exercise (10 testing datasets). PMID:29068403
The MEM of spectral analysis applied to L.O.D.
NASA Astrophysics Data System (ADS)
Fernandez, L. I.; Arias, E. F.
The maximum entropy method (MEM) has been widely applied in polar motion studies, taking advantage of its performance in the management of complex time series. The authors used the MEM algorithm to estimate the cross-spectral function in order to compare interannual Length-of-Day (LOD) time series with Southern Oscillation Index (SOI) and Sea Surface Temperature (SST) series, which are closely related to El Niño-Southern Oscillation (ENSO) events.
Tensor Spectral Clustering for Partitioning Higher-order Network Structures.
Benson, Austin R; Gleich, David F; Leskovec, Jure
2015-01-01
Spectral graph theory-based methods represent an important class of tools for studying the structure of networks. Spectral methods are based on a first-order Markov chain derived from a random walk on the graph and thus they cannot take advantage of important higher-order network substructures such as triangles, cycles, and feed-forward loops. Here we propose a Tensor Spectral Clustering (TSC) algorithm that allows for modeling higher-order network structures in a graph partitioning framework. Our TSC algorithm allows the user to specify which higher-order network structures (cycles, feed-forward loops, etc.) should be preserved by the network clustering. Higher-order network structures of interest are represented using a tensor, which we then partition by developing a multilinear spectral method. Our framework can be applied to discovering layered flows in networks as well as graph anomaly detection, which we illustrate on synthetic networks. In directed networks, a higher-order structure of particular interest is the directed 3-cycle, which captures feedback loops in networks. We demonstrate that our TSC algorithm produces large partitions that cut fewer directed 3-cycles than standard spectral clustering algorithms.
NASA Astrophysics Data System (ADS)
Gao, Yan; Marpu, Prashanth; Morales Manila, Luis M.
2014-11-01
This paper assesses the suitability of 8-band WorldView-2 (WV2) satellite data and an object-based random forest algorithm for the classification of avocado growth stages in Mexico. We tested both pixel-based classification, with minimum distance (MD) and maximum likelihood (MLC) classifiers, and object-based classification with the random forest (RF) algorithm. Training samples and verification data were selected by visually interpreting the WV2 images for seven thematic classes: fully grown, middle-stage, and early-stage avocado crops; bare land; two types of natural forest; and water bodies. To examine the contribution of the four new spectral bands of the WV2 sensor, all tested classifications were carried out both with and without them. The accuracy assessment shows that object-based classification with the RF algorithm obtained higher overall accuracy (93.06%) than the pixel-based MD (69.37%) and MLC (64.03%) methods. For both pixel-based and object-based methods, the classifications with the four new spectral bands obtained higher accuracy than those without (object-based RF: 93.06% vs 83.59%; pixel-based MD: 69.37% vs 67.2%; pixel-based MLC: 64.03% vs 36.05%), suggesting that the four new spectral bands of the WV2 sensor contributed to the increase in classification accuracy.
Comparing methods for analysis of biomedical hyperspectral image data
NASA Astrophysics Data System (ADS)
Leavesley, Silas J.; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter F.; Annamdevula, Naga S.; Rich, Thomas C.
2017-02-01
Over the past 2 decades, hyperspectral imaging technologies have been adapted to address the need for molecule-specific identification in the biomedical imaging field. Applications have ranged from single-cell microscopy to whole-animal in vivo imaging and from basic research to clinical systems. Enabling this growth has been the availability of faster, more effective hyperspectral filtering technologies and more sensitive detectors. Hence, the potential for growth of biomedical hyperspectral imaging is high, and many hyperspectral imaging options are already commercially available. However, despite the growth in hyperspectral technologies for biomedical imaging, little work has been done to aid users of hyperspectral imaging instruments in selecting appropriate analysis algorithms. Here, we present an approach for comparing the effectiveness of spectral analysis algorithms by combining experimental image data with a theoretical "what if" scenario. This approach allows us to quantify several key outcomes that characterize a hyperspectral imaging study: linearity of sensitivity, positive detection cut-off slope, dynamic range, and false positive events. We present results of using this approach for comparing the effectiveness of several common spectral analysis algorithms for detecting weak fluorescent protein emission in the midst of strong tissue autofluorescence. Results indicate that this approach should be applicable to a very wide range of applications, allowing a quantitative assessment of the effectiveness of the combined biology, hardware, and computational analysis for detecting a specific molecular signature.
Fast algorithm for spectral mixture analysis of imaging spectrometer data
NASA Astrophysics Data System (ADS)
Schouten, Theo E.; Klein Gebbinck, Maurice S.; Liu, Z. K.; Chen, Shaowei
1996-12-01
Imaging spectrometers acquire images in many narrow spectral bands but have limited spatial resolution. Spectral mixture analysis (SMA) is used to determine the fractions of the ground cover categories (the endmembers) present in each pixel. In this paper a new iterative SMA method is presented and tested using a 30-band MAIS image. The time needed for each iteration is independent of the number of bands, so the method can be used for spectrometers with a large number of bands. Further, a new method, based on K-means clustering, for obtaining endmembers from image data is described and compared with existing methods. Using the developed methods, the available MAIS image was analyzed using 2 to 6 endmembers.
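The paper's iterative SMA method is not reproduced in the abstract; the following minimal sketch shows the standard per-pixel unmixing step such methods accelerate: solving for nonnegative, approximately sum-to-one endmember fractions. The endmember matrix, constraint weight, and toy mixture are hypothetical.

```python
# Minimal sketch of per-pixel spectral mixture analysis: solve for endmember
# fractions f >= 0 with sum(f) ~ 1 via non-negative least squares on an
# augmented system. Endmember spectra E are assumed known (columns = endmembers).
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(E, x, w=100.0):
    """E: (bands, m) endmember matrix, x: (bands,) pixel spectrum."""
    m = E.shape[1]
    # Append a heavily weighted row that enforces the sum-to-one constraint.
    E_aug = np.vstack([E, w * np.ones((1, m))])
    x_aug = np.append(x, w)
    f, _ = nnls(E_aug, x_aug)
    return f

# Toy example: 3 bands, 2 endmembers, a 30/70 mixture with a little noise.
E = np.array([[0.1, 0.8], [0.5, 0.4], [0.9, 0.2]])
x = E @ np.array([0.3, 0.7]) + 0.01 * np.random.randn(3)
print(unmix_pixel(E, x))   # approximately [0.3, 0.7]
```

Because only a small fixed-size normal system per pixel is involved, the per-pixel cost of such a solver depends weakly on the number of bands, which is the property the abstract emphasizes.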
Combining spatial and spectral information to improve crop/weed discrimination algorithms
NASA Astrophysics Data System (ADS)
Yan, L.; Jones, G.; Villette, S.; Paoli, J. N.; Gée, C.
2012-01-01
Reduction of herbicide spraying is an important key to environmentally and economically improved weed management. To achieve this, remote sensors such as imaging systems are commonly used to detect weed plants. We developed spatial algorithms that detect the crop rows to discriminate crop from weeds. These algorithms have been thoroughly tested and provide robust and accurate results without a learning process, but their detection is limited to inter-row areas. Crop/weed discrimination using spectral information is able to detect intra-row weeds but generally needs a prior learning process. We propose a method based on spatial and spectral information to enhance the discrimination and overcome the limitations of both algorithms. The classification from the spatial algorithm is used to build the training set for the spectral discrimination method. With this approach we are able to extend weed detection to the entire field (inter- and intra-row). To test the efficiency of these algorithms, a relevant database of virtual images generated with the SimAField model was used, combined with the LOPEX93 spectral database. The developed method is evaluated and compared with the initial method in this paper and shows an important enhancement, from 86% weed detection to more than 95%.
Spectral band selection for classification of soil organic matter content
NASA Technical Reports Server (NTRS)
Henderson, Tracey L.; Szilagyi, Andrea; Baumgardner, Marion F.; Chen, Chih-Chien Thomas; Landgrebe, David A.
1989-01-01
This paper describes the spectral-band-selection (SBS) algorithm of Chen and Landgrebe (1987, 1988, and 1989) and uses the algorithm to classify the organic matter content in the earth's surface soil. The effectiveness of the algorithm was evaluated by comparing the results of classification of the soil organic matter using SBS bands with those obtained using Landsat MSS and TM bands, showing that the algorithm was successful in finding important spectral bands for classification of organic matter content. Using the calculated bands, the probabilities of correct classification for climate-stratified data were found to range from 0.910 to 0.980.
Detecting communities in large networks
NASA Astrophysics Data System (ADS)
Capocci, A.; Servedio, V. D. P.; Caldarelli, G.; Colaiori, F.
2005-07-01
We develop an algorithm to detect community structure in complex networks. The algorithm is based on spectral methods and takes into account weights and link orientation. Since the method efficiently detects clustered nodes in large networks even when these are not sharply partitioned, it turns out to be especially suitable for the analysis of social and information networks. We test the algorithm on a large-scale data set from a psychological experiment of word association. In this case, it proves to be successful both in clustering words and in uncovering mental association patterns.
Spectral CT metal artifact reduction with an optimization-based reconstruction algorithm
NASA Astrophysics Data System (ADS)
Gilat Schmidt, Taly; Barber, Rina F.; Sidky, Emil Y.
2017-03-01
Metal objects cause artifacts in computed tomography (CT) images. This work investigated the feasibility of a spectral CT method to reduce metal artifacts. Spectral CT acquisition combined with optimization-based reconstruction is proposed to reduce artifacts by modeling the physical effects that cause metal artifacts and by providing the flexibility to selectively remove corrupted spectral measurements in the spectral-sinogram space. The proposed Constrained 'One-Step' Spectral CT Image Reconstruction (cOSSCIR) algorithm directly estimates the basis material maps while enforcing convex constraints. The incorporation of constraints on the reconstructed basis material maps is expected to mitigate undersampling effects that occur when corrupted data are excluded from reconstruction. The feasibility of the cOSSCIR algorithm to reduce metal artifacts was investigated through simulations of a pelvis phantom. The cOSSCIR algorithm was investigated with and without the use of a third basis material representing metal. The effects of excluding data corrupted by metal were also investigated. The results demonstrated that the proposed cOSSCIR algorithm reduced metal artifacts and improved CT number accuracy. For example, CT number error in a bright shading artifact region was reduced from 403 HU in the reference filtered backprojection reconstruction to 33 HU using the proposed algorithm in simulation. In the dark shading regions, the error was reduced from 1141 HU to 25 HU. Of the investigated approaches, decomposing the data into three basis material maps and excluding the corrupted data demonstrated the greatest reduction in metal artifacts.
Urban land use of the Sao Paulo metropolitan area by automatic analysis of LANDSAT data
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Niero, M.; Foresti, C.
1983-01-01
The separability of urban land use classes in the metropolitan area of Sao Paulo was studied by means of automatic analysis of MSS/LANDSAT digital data. The data were analyzed using the K-means and MAXVER (maximum likelihood) classification algorithms. The land use classes obtained were: CBD/vertical growth area, residential area, mixed area, industrial area, embankment area type 1, embankment area type 2, dense vegetation area, and sparse vegetation area. The spectral analysis of representative samples of urban land use classes was done using the "Single Cell" analysis option. The classes CBD/vertical growth area, residential area, and embankment area type 2 showed better spectral separability when compared to the other classes.
Blind source separation of ex-vivo aorta tissue multispectral images
Galeano, July; Perez, Sandra; Montoya, Yonatan; Botina, Deivid; Garzón, Johnson
2015-01-01
Blind Source Separation (BSS) methods aim at the decomposition of a given signal into its main components, or source signals. These techniques have been widely used in the literature for the analysis of biomedical images, in order to extract the main components of an organ or tissue under study. The analysis of skin images for the extraction of melanin and hemoglobin is an example of the use of BSS. This paper presents a proof of concept of the use of source separation on ex-vivo aorta tissue multispectral images. The images are acquired with an interference filter-based imaging system and are processed by means of two algorithms: Independent Component Analysis and Non-negative Matrix Factorization. In both cases, it is possible to obtain maps that quantify the concentration of the main chromophores present in aortic tissue. The algorithms also recover the spectral absorbance of the main tissue components. These spectral signatures were compared against the theoretical ones using correlation coefficients, which report values close to 0.9, a good estimator of the method's performance. The correlation coefficients also allow each concentration map to be identified with its corresponding chromophore. The results suggest that multi/hyperspectral systems together with image processing techniques are a potential tool for the analysis of cardiovascular tissue. PMID:26137366
FIVQ algorithm for interference hyper-spectral image compression
NASA Astrophysics Data System (ADS)
Wen, Jia; Ma, Caiwen; Zhao, Junsuo
2014-07-01
Based on the improved vector quantization (IVQ) algorithm [1] proposed in 2012, this paper proposes a further improved vector quantization (FIVQ) algorithm for LASIS (Large Aperture Static Imaging Spectrometer) interference hyper-spectral image compression. To get better image quality, the IVQ algorithm takes both the mean values and the VQ indices as the encoding rules. Although the IVQ algorithm improves both the bit rate and the image quality, it can be improved further to reach a much lower bit rate for the LASIS interference pattern, which has special optical characteristics arising from the push-and-sweep LASIS imaging principle. In the proposed FIVQ algorithm, the neighborhood of each encoding block of the interference pattern image that uses the mean-value rule is checked to determine whether it has the same mean value as the current processing block. Experiments show that the proposed FIVQ algorithm achieves a lower bit rate than the IVQ algorithm for LASIS interference hyper-spectral sequences.
icoshift: A versatile tool for the rapid alignment of 1D NMR spectra
NASA Astrophysics Data System (ADS)
Savorani, F.; Tomasi, G.; Engelsen, S. B.
2010-02-01
The increasing scientific and industrial interest towards metabonomics takes advantage of the high qualitative and quantitative information level of nuclear magnetic resonance (NMR) spectroscopy. However, several chemical and physical factors can affect the absolute and the relative position of an NMR signal, and it is not always possible or desirable to eliminate these effects a priori. To remove misalignment of NMR signals a posteriori, several algorithms have been proposed in the literature. The icoshift program presented here is an open-source and highly efficient program designed for solving signal alignment problems in metabonomic NMR data analysis. The icoshift algorithm is based on correlation shifting of spectral intervals and employs an FFT engine that aligns all spectra simultaneously. The algorithm is demonstrated to be faster than similar methods found in the literature, making full-resolution alignment of large datasets feasible and thus avoiding down-sampling steps such as binning. The algorithm uses missing values as a filling alternative in order to avoid spectral artifacts at the segment boundaries. The algorithm is made open source and the Matlab code including documentation can be downloaded from www.models.life.ku.dk.
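As a rough illustration of the core operation (correlation shifting of spectral intervals, computed with an FFT engine), here is a minimal numpy sketch; it handles one interval and integer shifts only, whereas the actual icoshift program aligns all spectra and intervals simultaneously.

```python
# Minimal sketch of correlation-based interval shifting: find the integer
# shift that maximizes the cross-correlation of a spectral segment with a
# reference, using FFTs for the correlation.
import numpy as np

def best_shift(reference, segment, max_shift=20):
    """Return the shift of `segment` that best matches `reference`."""
    n = len(reference)
    # Circular cross-correlation via FFT.
    xcorr = np.fft.ifft(np.fft.fft(reference) * np.conj(np.fft.fft(segment))).real
    lags = np.arange(n)
    lags[lags > n // 2] -= n           # map indices to signed lags
    valid = np.abs(lags) <= max_shift
    return lags[valid][np.argmax(xcorr[valid])]

def align(reference, segment, max_shift=20):
    return np.roll(segment, best_shift(reference, segment, max_shift))

# Toy example: a Lorentzian peak rolled by +7 points is realigned.
x = np.arange(512)
ref = 1.0 / (1 + ((x - 250) / 5.0) ** 2)
seg = np.roll(ref, 7)
print(best_shift(ref, seg))            # -7: rolling seg by -7 undoes the shift
```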
Finite grid instability and spectral fidelity of the electrostatic Particle-In-Cell algorithm
Huang, C. -K.; Zeng, Y.; Wang, Y.; ...
2016-10-01
The origin of the Finite Grid Instability (FGI) is studied by resolving the dynamics in the 1D electrostatic Particle-In-Cell (PIC) model in the spectral domain at the single particle level and at the collective motion level. The spectral fidelity of the PIC model is contrasted with the underlying physical system or the gridless model. The systematic spectral phase and amplitude errors from the charge deposition and field interpolation are quantified for common particle shapes used in the PIC models. Lastly, it is shown through such analysis and in simulations that the lack of spectral fidelity relative to the physical system due to the existence of aliased spatial modes is the major cause of the FGI in the PIC model.
Speech enhancement based on modified phase-opponency detectors
NASA Astrophysics Data System (ADS)
Deshmukh, Om D.; Espy-Wilson, Carol Y.
2005-09-01
A speech enhancement algorithm based on a neural model was presented by Deshmukh et al. [149th meeting of the Acoustical Society of America, 2005]. The algorithm consists of a bank of Modified Phase Opponency (MPO) filter pairs tuned to different center frequencies. This algorithm is able to enhance salient spectral features in speech signals even at low signal-to-noise ratios. However, the algorithm introduces musical noise and sometimes misses a spectral peak that is close in frequency to a stronger spectral peak. A refinement of the MPO filter design was recently made that takes advantage of the falling spectrum of the speech signal in sonorant regions. The modified set of filters leads to better separation of the noise and speech signals and more accurate enhancement of spectral peaks. The improvements also lead to a significant reduction in musical noise. Continuity algorithms based on the properties of speech signals are used to further reduce the musical-noise effect. The efficiency of the proposed method in enhancing the speech signal when the level of the background noise is fluctuating will be demonstrated, and the performance of the improved speech enhancement method will be compared with various spectral subtraction-based methods. [Work supported by NSF BCS0236707.]
NASA Astrophysics Data System (ADS)
Chandra, Malavika; Scheiman, James; Simeone, Diane; McKenna, Barbara; Purdy, Julianne; Mycek, Mary-Ann
2010-01-01
Pancreatic adenocarcinoma is one of the leading causes of cancer death, in part because of the inability of current diagnostic methods to reliably detect early-stage disease. We present the first assessment of the diagnostic accuracy of algorithms developed for pancreatic tissue classification using data from fiber optic probe-based bimodal optical spectroscopy, a real-time approach that would be compatible with minimally invasive diagnostic procedures for early cancer detection in the pancreas. A total of 96 fluorescence and 96 reflectance spectra are considered from 50 freshly excised tissue sites, including human pancreatic adenocarcinoma, chronic pancreatitis (inflammation), and normal tissues, in nine patients. Classification algorithms using linear discriminant analysis are developed to distinguish among tissues, and leave-one-out cross-validation is employed to assess the classifiers' performance. The spectral areas and ratios classifier (SpARC) algorithm employs a combination of reflectance and fluorescence data and has the best performance, with sensitivity, specificity, negative predictive value, and positive predictive value for correctly identifying adenocarcinoma being 85, 89, 92, and 80%, respectively.
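A minimal sketch of the classification scheme the abstract describes, linear discriminant analysis with leave-one-out cross-validation, using scikit-learn; the feature matrix below is synthetic, standing in for the measured spectral areas and ratios.

```python
# Sketch: LDA classifier evaluated with leave-one-out cross-validation,
# mirroring the evaluation scheme described in the abstract. The features
# here are random surrogates for real spectral areas and ratios.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (25, 4)),     # stand-in: normal tissue
               rng.normal(1.5, 1.0, (25, 4))])    # stand-in: adenocarcinoma
y = np.array([0] * 25 + [1] * 25)

pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
tp = np.sum((pred == 1) & (y == 1)); fn = np.sum((pred == 0) & (y == 1))
tn = np.sum((pred == 0) & (y == 0)); fp = np.sum((pred == 1) & (y == 0))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```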
Endmember extraction from hyperspectral image based on discrete firefly algorithm (EE-DFA)
NASA Astrophysics Data System (ADS)
Zhang, Chengye; Qin, Qiming; Zhang, Tianyuan; Sun, Yuanheng; Chen, Chao
2017-04-01
This study proposes a novel method to extract endmembers from hyperspectral images based on the discrete firefly algorithm (EE-DFA). Endmembers are the input of many spectral unmixing algorithms. Hence, in this paper, endmember extraction from a hyperspectral image is regarded as a combinatorial optimization problem aimed at the best spectral unmixing results, which can be solved by the discrete firefly algorithm. Two series of experiments were conducted on synthetic hyperspectral datasets with different SNRs and on the AVIRIS Cuprite dataset, respectively. The experimental results were compared with the endmembers extracted by four popular methods: the sequential maximum angle convex cone (SMACC), N-FINDR, Vertex Component Analysis (VCA), and Minimum Volume Constrained Nonnegative Matrix Factorization (MVC-NMF). Moreover, the effect of the parameters in the proposed method was tested on both the synthetic datasets and the AVIRIS Cuprite dataset, and recommended parameter settings were proposed. The results demonstrate that the proposed EE-DFA method performs better than the existing popular methods and is robust under different SNR conditions.
Jothi, R; Mohanty, Sraban Kumar; Ojha, Aparajita
2016-04-01
Gene expression data clustering is an important biological process in DNA microarray analysis. Although there have been many clustering algorithms for gene expression analysis, finding a suitable and effective clustering algorithm is always a challenging problem due to the heterogeneous nature of gene profiles. Minimum Spanning Tree (MST) based clustering algorithms have been successfully employed to detect clusters of varying shapes and sizes. This paper proposes a novel clustering algorithm using eigenanalysis on a Minimum Spanning Tree based neighborhood graph (E-MST). As the MST of a set of points reflects the similarity of the points with their neighborhood, the proposed algorithm employs a similarity graph obtained from k′ rounds of MST (the k′-MST neighborhood graph). By studying the spectral properties of the similarity matrix obtained from the k′-MST graph, the proposed algorithm achieves improved clustering results. We demonstrate the efficacy of the proposed algorithm on 12 gene expression datasets. Experimental results show that the proposed algorithm performs better than standard clustering algorithms. Copyright © 2016 Elsevier Ltd. All rights reserved.
Spatial-spectral preprocessing for endmember extraction on GPU's
NASA Astrophysics Data System (ADS)
Jimenez, Luis I.; Plaza, Javier; Plaza, Antonio; Li, Jun
2016-10-01
Spectral unmixing is focused on the identification of spectrally pure signatures, called endmembers, and their corresponding abundances in each pixel of a hyperspectral image. While mainly focused on the spectral information contained in hyperspectral images, endmember extraction techniques have recently incorporated spatial information to achieve more accurate results. Several algorithms have been developed for automatic or semi-automatic identification of endmembers using spatial and spectral information, including spectral-spatial endmember extraction (SSEE) where, within a preprocessing step, both sources of information are extracted from the hyperspectral image and used equally for this purpose. Previous works have implemented the SSEE technique in four main steps: 1) local eigenvector calculation in each sub-region into which the original hyperspectral image is divided; 2) computation of the maxima and minima projections of all eigenvectors over the entire hyperspectral image in order to obtain a candidate pixel set; 3) expansion and averaging of the signatures of the candidate set; 4) ranking based on the spectral angle distance (SAD). The result of this method is a list of candidate signatures from which the endmembers can be extracted using various spectral-based techniques, such as orthogonal subspace projection (OSP), vertex component analysis (VCA) or N-FINDR. Considering the large volume of data and the complexity of the calculations, there is a need for efficient implementations. Latest-generation hardware accelerators such as commodity graphics processing units (GPUs) offer a good chance of improving the computational performance in this context. In this paper, we develop two different GPU implementations of the SSEE algorithm. Both are based on the eigenvector computation within each sub-region of the first step, one using singular value decomposition (SVD) and the other using principal component analysis (PCA). Based on our experiments with hyperspectral data sets, high computational performance is observed in both cases.
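A minimal CPU sketch of step 1 (local eigenvector calculation per sub-region) using numpy; block size, component count, and the synthetic cube are illustrative, and the paper's GPU versions would replace the per-block SVD with batched device kernels.

```python
# Sketch of SSEE step 1: local eigenvector calculation per spatial sub-region.
# Each sub-region's pixels (rows) are mean-centered and decomposed with an SVD;
# the leading right-singular vectors act as the local eigenvectors (PCA-style).
import numpy as np

def local_eigenvectors(cube, block=16, k=3):
    """cube: (rows, cols, bands) image; returns one (k, bands) array per block."""
    rows, cols, bands = cube.shape
    out = []
    for r in range(0, rows, block):
        for c in range(0, cols, block):
            X = cube[r:r+block, c:c+block, :].reshape(-1, bands)
            Xc = X - X.mean(axis=0)             # mean-center, as in PCA
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
            out.append(Vt[:k])                  # top-k local eigenvectors
    return out

cube = np.random.rand(64, 64, 50)               # synthetic stand-in for real data
print(len(local_eigenvectors(cube)), "sub-regions processed")
```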
NASA Astrophysics Data System (ADS)
Bravo, Jaime; Davis, Scott C.; Roberts, David W.; Paulsen, Keith D.; Kanick, Stephen C.
2015-03-01
Quantification of targeted fluorescence markers during neurosurgery has the potential to improve and standardize surgical distinction between normal and cancerous tissues. However, quantitative analysis of marker fluorescence is complicated by tissue background absorption and scattering properties. Correction algorithms that transform raw fluorescence intensity into quantitative units, independent of absorption and scattering, require a paired measurement of localized white light reflectance to provide estimates of the optical properties. This study focuses on the unique problem of developing a spectral analysis algorithm to extract tissue absorption and scattering properties from white light spectra that contain contributions from both elastically scattered photons and fluorescence emission from a strong fluorophore (i.e. fluorescein). A fiber-optic reflectance device was used to perform measurements in a small set of optical phantoms, constructed with Intralipid (1% lipid), whole blood (1% volume fraction) and fluorescein (0.16-10 μg/mL). Results show that the novel spectral analysis algorithm yields accurate estimates of tissue parameters independent of fluorescein concentration, with relative errors of blood volume fraction, blood oxygenation fraction (BOF), and the reduced scattering coefficient (at 521 nm) of <7%, <1%, and <22%, respectively. These data represent a first step towards quantification of fluorescein in tissue in vivo.
Multi-pass encoding of hyperspectral imagery with spectral quality control
NASA Astrophysics Data System (ADS)
Wasson, Steven; Walker, William
2015-05-01
Multi-pass encoding is a technique employed in the field of video compression that maximizes the quality of an encoded video sequence within the constraints of a specified bit rate. This paper presents research where multi-pass encoding is extended to the field of hyperspectral image compression. Unlike video, which is primarily intended to be viewed by a human observer, hyperspectral imagery is processed by computational algorithms that generally attempt to classify the pixel spectra within the imagery. As such, these algorithms are more sensitive to distortion in the spectral dimension of the image than they are to perceptual distortion in the spatial dimension. The compression algorithm developed for this research, which uses the Karhunen-Loeve transform for spectral decorrelation followed by a modified H.264/Advanced Video Coding (AVC) encoder, maintains a user-specified spectral quality level while maximizing the compression ratio throughout the encoding process. The compression performance may be considered near-lossless in certain scenarios. For qualitative purposes, this paper presents the performance of the compression algorithm for several Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Hyperion datasets using spectral angle as the spectral quality assessment function. Specifically, the compression performance is illustrated in the form of rate-distortion curves that plot spectral angle versus bits per pixel per band (bpppb).
Accuracy Improvement for Light-Emitting-Diode-Based Colorimeter by Iterative Algorithm
NASA Astrophysics Data System (ADS)
Yang, Pao-Keng
2011-09-01
We present a simple algorithm, combining an interpolating method with an iterative calculation, to enhance the resolution of spectral reflectance by removing from it the spectral broadening effect due to the finite bandwidth of the light-emitting diode (LED). The proposed algorithm can be used to improve the accuracy of a reflective colorimeter using multicolor LEDs as probing light sources and is also applicable when the probing LEDs have different bandwidths in different spectral ranges, a case to which the powerful deconvolution method cannot be applied.
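The paper's exact iteration is not given in the abstract; the sketch below shows one standard iterative de-broadening scheme of this kind (a Van Cittert-style update), with an illustrative Gaussian LED bandwidth matrix. The specific update rule and parameters in the paper may differ.

```python
# Hedged sketch: recover higher-resolution reflectance r from measurements m
# broadened by the LED spectral profile B (model: m ~ B @ r), by iteratively
# correcting r with the residual (Van Cittert-style iteration).
import numpy as np

def iterative_debroaden(B, m, n_iter=200, relax=0.5):
    r = m.copy()                       # start from the broadened measurement
    for _ in range(n_iter):
        r += relax * (m - B @ r)       # correct by the current residual
        np.clip(r, 0.0, 1.0, out=r)    # keep reflectance physically bounded
    return r

# Toy example: Gaussian LED bandwidth blurring a step-like reflectance.
wl = np.linspace(400, 700, 151)
r_true = 0.2 + 0.6 * (wl > 550)
B = np.exp(-0.5 * ((wl[:, None] - wl[None, :]) / 15.0) ** 2)
B /= B.sum(axis=1, keepdims=True)      # rows: normalized LED spectra
m = B @ r_true
r_est = iterative_debroaden(B, m)
print(np.abs(r_est - r_true).mean())   # much smaller than np.abs(m - r_true).mean()
```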
DoE Phase II SBIR: Spectrally-Assisted Vehicle Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villeneuve, Pierre V.
2013-02-28
The goal of this Phase II SBIR is to develop a prototype software package to demonstrate spectrally-aided vehicle tracking performance. The primary application is to demonstrate improved target vehicle tracking performance in complex environments where traditional spatial tracker systems may show reduced performance. Example scenarios in Figure 1 include a) the target vehicle obscured by a large structure for an extended period of time, or b) the target engaging in extreme maneuvers amongst other civilian vehicles. The target information derived from spatial processing is unable to differentiate between the green and the red vehicle. Spectral signature exploitation enables comparison of new candidate targets with existing track signatures. The ambiguity in this confusing scenario is resolved by folding spectral analysis results into the target nomination and association processes. Figure 3 shows a number of example spectral signatures from a variety of natural and man-made materials. The work performed over the two-year effort was divided into three general areas: algorithm refinement, software prototype development, and prototype performance demonstration. The tasks performed under this Phase II to accomplish the program goals were as follows: 1. Acquire relevant vehicle target datasets to support the prototype. 2. Refine algorithms for target spectral feature exploitation. 3. Implement a prototype multi-hypothesis target tracking software package. 4. Demonstrate and quantify tracking performance using relevant data.
NASA Astrophysics Data System (ADS)
Cheng, Liantao; Zhang, Fenghui; Kang, Xiaoyu; Wang, Lang
2018-05-01
In evolutionary population synthesis (EPS) models, we need to convert stellar evolutionary parameters into spectra via interpolation in a stellar spectral library. For theoretical stellar spectral libraries, the spectrum grid is homogeneous on the effective-temperature and gravity plane for a given metallicity. It is relatively easy to derive stellar spectra. For empirical stellar spectral libraries, stellar parameters are irregularly distributed and the interpolation algorithm is relatively complicated. In those EPS models that use empirical stellar spectral libraries, different algorithms are used and the codes are often not released. Moreover, these algorithms are often complicated. In this work, based on a radial basis function (RBF) network, we present a new spectrum interpolation algorithm and its code. Compared with the other interpolation algorithms that are used in EPS models, it can be easily understood and is highly efficient in terms of computation. The code is written in MATLAB scripts and can be used on any computer system. Using it, we can obtain the interpolated spectra from a library or a combination of libraries. We apply this algorithm to several stellar spectral libraries (such as MILES, ELODIE-3.1 and STELIB-3.2) and give the integrated spectral energy distributions (ISEDs) of stellar populations (with ages from 1 Myr to 14 Gyr) by combining them with Yunnan-III isochrones. Our results show that the differences caused by the adoption of different EPS model components are less than 0.2 dex. All data about the stellar population ISEDs in this work and the RBF spectrum interpolation code can be obtained by request from the first author or downloaded from http://www1.ynao.ac.cn/˜zhangfh.
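A minimal sketch of the same idea using scipy's RBFInterpolator in place of the authors' MATLAB RBF network; the "library" spectra, parameter ranges, and scaling below are synthetic placeholders.

```python
# Sketch of RBF-based spectrum interpolation: treat each library star as a
# point in (Teff, log g, [Fe/H]) space and interpolate whole spectra with a
# radial basis function model, as in the irregular-grid case described above.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
n_stars, n_pix = 200, 300
params = np.column_stack([rng.uniform(4000, 7000, n_stars),   # Teff
                          rng.uniform(1.0, 5.0, n_stars),     # log g
                          rng.uniform(-2.0, 0.5, n_stars)])   # [Fe/H]
# Fake "spectra" that vary smoothly with the stellar parameters.
wl = np.linspace(0, 1, n_pix)
spectra = (params[:, :1] / 7000.0) * np.sin(10 * wl) + params[:, 1:2] * 0.1

# Normalize the parameters so that no axis dominates the RBF distances.
scale = params.std(axis=0)
interp = RBFInterpolator(params / scale, spectra, kernel="thin_plate_spline")
new_star = np.array([[5777.0, 4.44, 0.0]]) / scale            # Sun-like point
print(interp(new_star).shape)          # (1, 300): one interpolated spectrum
```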
A Spectral Algorithm for Envelope Reduction of Sparse Matrices
NASA Technical Reports Server (NTRS)
Barnard, Stephen T.; Pothen, Alex; Simon, Horst D.
1993-01-01
The problem of reordering a sparse symmetric matrix to reduce its envelope size is considered. A new spectral algorithm for computing an envelope-reducing reordering is obtained by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. This Laplacian eigenvector solves a continuous relaxation of a discrete problem related to envelope minimization called the minimum 2-sum problem. The permutation vector computed by the spectral algorithm is a closest permutation vector to the specified Laplacian eigenvector. Numerical results show that the new reordering algorithm usually computes smaller envelope sizes than those obtained from the current standard algorithms such as Gibbs-Poole-Stockmeyer (GPS) or SPARSPAK reverse Cuthill-McKee (RCM), in some cases reducing the envelope by more than a factor of two.
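A minimal numpy sketch of the spectral reordering step the abstract describes: associate a graph Laplacian with the matrix pattern, compute the Fiedler vector, and sort vertices by its components. The envelope measure and test matrix are illustrative.

```python
# Sketch: spectral envelope-reducing reordering via the Fiedler vector.
import numpy as np

def spectral_ordering(A):
    """A: symmetric sparsity pattern of a matrix; returns a permutation."""
    W = (np.abs(A) > 0).astype(float)
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W
    vals, vecs = np.linalg.eigh(L)
    return np.argsort(vecs[:, 1])      # sort components of the Fiedler vector

def envelope_size(A, perm):
    """Sum over rows of the distance from the first nonzero to the diagonal."""
    B = A[np.ix_(perm, perm)]
    nz = [np.flatnonzero(B[i]) for i in range(len(B))]
    return sum(i - r[0] for i, r in enumerate(nz) if r.size)

# A small "arrowhead" pattern whose envelope shrinks after reordering.
A = np.eye(8)
A[0, :] = A[:, 0] = 1
perm = spectral_ordering(A)
print(envelope_size(A, np.arange(8)), "->", envelope_size(A, perm))
```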
NASA Technical Reports Server (NTRS)
Phinney, D. E. (Principal Investigator)
1980-01-01
An algorithm for estimating spectral crop calendar shifts of spring small grains was applied to 1978 spring wheat fields. The algorithm provides estimates of the date of peak spectral response by maximizing the cross correlation between a reference profile and the observed multitemporal pattern of Kauth-Thomas greenness for a field. A methodology was developed for estimation of crop development stage from the date of peak spectral response. Evaluation studies showed that the algorithm provided stable estimates with no geographical bias. Crop development stage estimates had a root mean square error near 10 days. The algorithm was recommended for comparative testing against other models which are candidates for use in AgRISTARS experiments.
Graph Based Models for Unsupervised High Dimensional Data Clustering and Network Analysis
2015-01-01
The proposed graph-based clustering algorithms significantly improve time efficiency for large-scale datasets. In the last chapter, an incremental reseeding strategy is also proposed, with applications including plume detection in hyperspectral video data.
Spectral analysis of two-signed microarray expression data.
Higham, Desmond J; Kalna, Gabriela; Vass, J Keith
2007-06-01
We give a simple and informative derivation of a spectral algorithm for clustering and reordering complementary DNA microarray expression data. Here, expression levels of a set of genes are recorded simultaneously across a number of samples, with a positive weight reflecting up-regulation and a negative weight reflecting down-regulation. We give theoretical support for the algorithm based on a biologically justified hypothesis about the structure of the data, and illustrate its use on public domain data in the context of unsupervised tumour classification. The algorithm is derived by considering a discrete optimization problem and then relaxing to the continuous realm. We prove that in the case where the data have an inherent 'checkerboard' sign pattern, the algorithm will automatically reveal that pattern. Further, our derivation shows that the algorithm may be regarded as imposing a random graph model on the expression levels and then clustering from a maximum likelihood perspective. This indicates that the output will be tolerant to perturbations and will reveal 'near-checkerboard' patterns when these are present in the data. It is interesting to note that the checkerboard structure is revealed by the first (dominant) singular vectors; previous work on spectral methods has focused on the case of nonnegative edge weights, where only the second and higher singular vectors are relevant. We illustrate the algorithm on real and synthetic data, and then use it in a tumour classification context on three different cancer data sets. Our results show that respecting the two-signed nature of the data (thereby distinguishing between up-regulation and down-regulation) reveals structures that cannot be gleaned from the absolute value data (where up- and down-regulation are both regarded as 'changes').
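A small synthetic illustration of the central claim: when a signed checkerboard structure is present, sorting rows and columns by the first singular vectors reveals it. Sizes, noise level, and seeds below are arbitrary.

```python
# Sketch of the two-signed spectral reordering: for data with a hidden
# checkerboard sign pattern, sorting rows and columns by the leading singular
# vectors of the signed data matrix recovers contiguous same-sign blocks.
import numpy as np

rng = np.random.default_rng(2)
# Hidden 2x2 checkerboard: genes/samples in two groups, +/- blocks.
rows = rng.permutation(np.repeat([0, 1], 20))
cols = rng.permutation(np.repeat([0, 1], 15))
signs = np.where(rows[:, None] == cols[None, :], 1.0, -1.0)
X = signs + 0.3 * rng.standard_normal((40, 30))   # noisy expression levels

U, s, Vt = np.linalg.svd(X, full_matrices=False)
row_order = np.argsort(U[:, 0])       # first (dominant) singular vectors
col_order = np.argsort(Vt[0])
X_sorted = X[np.ix_(row_order, col_order)]
# After sorting, same-sign blocks are contiguous: the checkerboard appears.
print(np.sign(X_sorted[0, 0]), np.sign(X_sorted[-1, -1]))
```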
NASA Technical Reports Server (NTRS)
Gulick, V. C.; Morris, R. L.; Bishop, J.; Gazis, P.; Alena, R.; Sierhuis, M.
2002-01-01
We are developing science analysis algorithms to interface with a Geologist's Field Assistant device to allow robotic or human remote explorers to better sense their surroundings during limited surface excursions. Our algorithms will interpret spectral and imaging data obtained by various sensors. Additional information is contained in the original extended abstract.
Spectral methods in machine learning and new strategies for very large datasets
Belabbas, Mohamed-Ali; Wolfe, Patrick J.
2009-01-01
Spectral methods are of fundamental importance in statistics and machine learning, because they underlie algorithms from classical principal components analysis to more recent approaches that exploit manifold structure. In most cases, the core technical problem can be reduced to computing a low-rank approximation to a positive-definite kernel. For the growing number of applications dealing with very large or high-dimensional datasets, however, the optimal approximation afforded by an exact spectral decomposition is too costly, because its complexity scales as the cube of either the number of training examples or their dimensionality. Motivated by such applications, we present here 2 new algorithms for the approximation of positive-semidefinite kernels, together with error bounds that improve on results in the literature. We approach this problem by seeking to determine, in an efficient manner, the most informative subset of our data relative to the kernel approximation task at hand. This leads to two new strategies based on the Nyström method that are directly applicable to massive datasets. The first of these—based on sampling—leads to a randomized algorithm whereupon the kernel induces a probability distribution on its set of partitions, whereas the latter approach—based on sorting—provides for the selection of a partition in a deterministic way. We detail their numerical implementation and provide simulation results for a variety of representative problems in statistical data analysis, each of which demonstrates the improved performance of our approach relative to existing methods. PMID:19129490
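A minimal sketch of the Nyström approximation that both strategies build on; here the landmark subset is drawn uniformly at random, whereas the paper's contribution lies in smarter sampling- and sorting-based subset selection.

```python
# Minimal Nystrom sketch: approximate a positive-semidefinite kernel matrix K
# from a sampled subset of its columns, avoiding a full eigendecomposition.
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 5))
K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # RBF kernel

idx = rng.choice(500, size=50, replace=False)   # landmark subset (uniform here)
C = K[:, idx]                                   # sampled columns
W = K[np.ix_(idx, idx)]                         # landmark-landmark block
K_approx = C @ np.linalg.pinv(W) @ C.T          # Nystrom approximation

err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
print(f"relative Frobenius error: {err:.3f}")
```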
NASA Astrophysics Data System (ADS)
Kostyuchenko, Yuriy V.; Sztoyka, Yulia; Kopachevsky, Ivan; Artemenko, Igor; Yuschenko, Maxim
2017-10-01
A multi-model approach for remote sensing data processing and interpretation is described. The problem of satellite data utilization in multi-model socio-ecological risk assessment is formally defined, and a method for combining observation, measurement, and modeling data within this framework is presented. A risk-assessment methodology in the framework of a decision-support approach is defined, together with a method for water quality assessment from satellite observations based on analysis of the spectral reflectance of aquifers. Spectral signatures of freshwater bodies and offshore waters are analyzed, and correlations between spectral reflectance, pollution, and selected water quality parameters are quantified. Data from the MODIS, MISR, AIRS, and Landsat sensors received in 2002-2014 were used, verified by in-field spectrometry and laboratory measurements. A fuzzy-logic-based approach for decision support on water quality degradation risk is discussed: the decision on water quality category is made by a fuzzy algorithm using a limited set of uncertain parameters, drawing on satellite observations, field measurements, and modeling. It is shown that this algorithm allows estimation of water quality degradation rates and pollution risks. Problems of constructing spatial and temporal distributions of the calculated parameters, as well as of data regularization, are discussed. Using the proposed approach, maps of surface water pollution risk from point and diffuse sources are calculated and discussed.
pyblocxs: Bayesian Low-Counts X-ray Spectral Analysis in Sherpa
NASA Astrophysics Data System (ADS)
Siemiginowska, A.; Kashyap, V.; Refsdal, B.; van Dyk, D.; Connors, A.; Park, T.
2011-07-01
Typical X-ray spectra have low counts and should be modeled using the Poisson distribution. However, the χ2 statistic is often applied as an alternative, with the data assumed to follow the Gaussian distribution. A variety of weights to the statistic or binnings of the data are used to overcome the low-counts issues; however, such modifications introduce biases and/or a loss of information. Standard modeling packages such as XSPEC and Sherpa provide the Poisson likelihood and allow computation of rudimentary MCMC chains, but so far do not allow setting up a full Bayesian model. We have implemented a sophisticated Bayesian MCMC-based algorithm to carry out spectral fitting of low-counts sources in the Sherpa environment. The code is a Python extension to Sherpa and allows fitting a predefined Sherpa model to high-energy X-ray spectral data and other generic data. We present the algorithm and discuss several issues related to the implementation, including the flexible definition of priors and allowing for variations in the calibration information.
Gao, Bo-Cai; Liu, Ming
2013-01-01
Surface reflectance spectra retrieved from remotely sensed hyperspectral imaging data using radiative transfer models often contain residual atmospheric absorption and scattering effects. The reflectance spectra may also contain minor artifacts due to errors in radiometric and spectral calibrations. We have developed a fast smoothing technique for post-processing of retrieved surface reflectance spectra. In the present spectral smoothing technique, model-derived reflectance spectra are first fit using moving filters derived with a cubic spline smoothing algorithm. A common gain curve, which contains the minor artifacts in the model-derived reflectance spectra, is then derived. This gain curve is finally applied to all of the reflectance spectra in a scene to obtain the spectrally smoothed surface reflectance spectra. Results from analysis of hyperspectral imaging data collected with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) are given. Comparisons between the smoothed spectra and those derived with the empirical line method are also presented. PMID:24129022
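The following is a loose numpy/scipy sketch of the gain-curve idea: smooth each spectrum with a spline, average the raw-to-smoothed ratios into one common gain curve, and divide it out scene-wide. The smoothing factor, synthetic "calibration ripple", and spectra are all illustrative, and the paper's moving-filter details are only loosely followed.

```python
# Hedged sketch of common-gain spectral smoothing.
import numpy as np
from scipy.interpolate import UnivariateSpline

def common_gain(wavelengths, spectra, s=0.05):
    ratios = []
    for spec in spectra:
        smooth = UnivariateSpline(wavelengths, spec, s=s)(wavelengths)
        ratios.append(spec / np.maximum(smooth, 1e-6))
    return np.mean(ratios, axis=0)     # shared artifact "gain" curve

rng = np.random.default_rng(4)
wl = np.linspace(0.4, 2.5, 200)
artifact = 1.0 + 0.05 * np.sin(80 * wl)           # common calibration ripple
spectra = np.array([(0.3 + 0.2 * np.sin(2 * wl + p)) * artifact
                    for p in rng.uniform(0, 6, size=20)])
gain = common_gain(wl, spectra)
smoothed = spectra[0] / gain                       # artifact largely divided out
print(np.abs(gain - artifact).max())  # small if the spline tracks only slow shape
```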
An algorithm for extraction of periodic signals from sparse, irregularly sampled data
NASA Technical Reports Server (NTRS)
Wilcox, J. Z.
1994-01-01
Temporal gaps in discrete sampling sequences produce spurious Fourier components at the intermodulation frequencies of an oscillatory signal and the temporal gaps, thus significantly complicating spectral analysis of such sparsely sampled data. A new fast Fourier transform (FFT)-based algorithm has been developed, suitable for spectral analysis of sparsely sampled data with a relatively small number of oscillatory components buried in background noise. The algorithm's principal idea has its origin in the so-called 'clean' algorithm used to sharpen images of scenes corrupted by atmospheric and sensor aperture effects. It identifies as the signal's 'true' frequency that oscillatory component which, when passed through the same sampling sequence as the original data, produces a Fourier image that is the best match to the original Fourier space. The algorithm has generally met with success in trials with simulated data with a low signal-to-noise ratio, including those of a type similar to hourly residuals for Earth orientation parameters extracted from VLBI data. For eight oscillatory components in the diurnal and semidiurnal bands, all components with an amplitude-to-noise ratio greater than 0.2 were successfully extracted for all sequences and duty cycles (greater than 0.1) tested; the amplitude-to-noise ratios of the extracted signals were as low as 0.05 for high duty cycles and long sampling sequences. When, in addition to these high frequencies, strong low-frequency components are present in the data, the low-frequency components are generally eliminated first, by employing a version of the algorithm that searches for non-integer multiples of the discrete FFT minimum frequency.
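A simplified sketch of the 'clean'-style loop on irregular samples: scan a trial frequency grid, keep the component that best matches the residual, subtract it, and repeat. This direct least-squares variant stands in for the paper's FFT-based implementation; times, frequencies, and amplitudes are illustrative.

```python
# Sketch: iterative extraction of periodic components from sparse,
# irregularly sampled data (a matching-pursuit analogue of 'clean').
import numpy as np

def clean_extract(t, y, freqs, n_components=2):
    found = []
    resid = y.copy()
    for _ in range(n_components):
        best = None
        for f in freqs:
            # Least-squares fit of cos/sin at the trial frequency f.
            M = np.column_stack([np.cos(2*np.pi*f*t), np.sin(2*np.pi*f*t)])
            coef, _, _, _ = np.linalg.lstsq(M, resid, rcond=None)
            power = np.sum((M @ coef) ** 2)
            if best is None or power > best[0]:
                best = (power, f, M @ coef)
        found.append(best[1])
        resid = resid - best[2]        # subtract the matched component
    return found

rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0, 10, 120))   # irregular sample times ...
t = t[(t < 3) | (t > 5)]               # ... with a large temporal gap
y = (np.sin(2*np.pi*1.7*t) + 0.5*np.sin(2*np.pi*0.4*t)
     + 0.2*rng.standard_normal(len(t)))
print(clean_extract(t, y, freqs=np.linspace(0.05, 3, 600)))  # ~[1.7, 0.4]
```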
NASA Astrophysics Data System (ADS)
Jiang, Kaili; Zhu, Jun; Tang, Bin
2017-12-01
Periodic nonuniform sampling occurs in many applications, and the Nyquist folding receiver (NYFR) is an efficient, low-complexity, broadband spectrum sensing architecture. In this paper, we first show that the radio frequency (RF) sample clock function of the NYFR is periodically nonuniform. The classical results of periodic nonuniform sampling are then applied to the NYFR, and we extend the spectral reconstruction algorithm of the time-series decomposed model to the subsampling case by using the spectrum characteristics of the NYFR. The subsampling case is common in broadband spectrum surveillance. Finally, we take the example of an LFM signal with large bandwidth to verify the proposed algorithm and compare the spectral reconstruction algorithm with the orthogonal matching pursuit (OMP) algorithm.
Inverse Analysis of Irradiated Nuclear Material Gamma Spectra via Nonlinear Optimization
NASA Astrophysics Data System (ADS)
Dean, Garrett James
Nuclear forensics is the collection of technical methods used to identify the provenance of nuclear material interdicted outside of regulatory control. Techniques employed in nuclear forensics include optical microscopy, gas chromatography, mass spectrometry, and alpha, beta, and gamma spectrometry. This dissertation focuses on the application of inverse analysis to gamma spectroscopy to estimate the history of pulse-irradiated nuclear material. Previous work in this area has (1) utilized destructive analysis techniques to supplement the nondestructive gamma measurements, and (2) been applied to samples composed of spent nuclear fuel with long irradiation and cooling times. Previous analyses have employed local nonlinear solvers and simple empirical and detector models of gamma spectral features. The algorithm described in this dissertation uses a forward model of the irradiation and measurement process within a global nonlinear optimizer to estimate the unknown irradiation history of pulse-irradiated nuclear material. The forward model includes a detector response function for photopeaks only. The algorithm uses a novel hybrid global and local search algorithm to quickly estimate the irradiation parameters, including neutron fluence, cooling time, and original composition. Sequential, time-correlated series of measurements are used to reduce the uncertainty in the estimated irradiation parameters. This algorithm allows for in situ measurements of interdicted irradiated material. The increase in analysis speed comes with a decrease in the information that can be determined, but the sample fluence, cooling time, and composition can be determined within minutes of a measurement. Furthermore, pulse-irradiated nuclear material has the characteristic feature that irradiation time and flux cannot be independently estimated. The algorithm has been tested against pulse-irradiated samples of pure special nuclear material with cooling times of four minutes to seven hours. The algorithm described is capable of determining the cooling time and fluence the sample was exposed to within 10%, as well as roughly estimating the relative concentrations of nuclides present in the original composition.
Highlights of TOMS Version 9 Total Ozone Algorithm
NASA Technical Reports Server (NTRS)
Bhartia, Pawan; Haffner, David
2012-01-01
The fundamental basis of the TOMS total ozone algorithm was developed some 45 years ago by Dave and Mateer. It was designed to estimate total ozone from satellite measurements of the backscattered UV radiances at a few discrete wavelengths in the Huggins ozone absorption band (310-340 nm). Over the years, as the need for higher accuracy in measuring total ozone from space has increased, several improvements to the basic algorithms have been made. They include: better correction for the effects of aerosols and clouds, an improved method to account for the variation in shape of ozone profiles with season, latitude, and total ozone, and a multi-wavelength correction for remaining profile shape errors. These improvements have made it possible to retrieve total ozone with just 3 spectral channels of moderate spectral resolution (approx. 1 nm) with accuracy comparable to state-of-the-art spectral fitting algorithms like DOAS that require high spectral resolution measurements at a large number of wavelengths. One of the deficiencies of the TOMS algorithm has been that it doesn't provide an error estimate. This is a particular problem at high latitudes, where the profile shape errors become significant and vary with latitude, season, total ozone, and instrument viewing geometry. The primary objective of the TOMS V9 algorithm is to account for these effects in estimating the error bars. This is done by a straightforward implementation of the Rodgers optimal estimation method using a priori ozone profiles and their error covariance matrices constructed using Aura MLS and ozonesonde data. The algorithm produces a vertical ozone profile that contains 1-2.5 pieces of information (degrees of freedom of signal) depending upon solar zenith angle (SZA). The profile is integrated to obtain the total column. We provide information that shows the altitude range in which the profile is best determined by the measurements; one can use this information in data assimilation and analysis. A side benefit of this algorithm is that it is considerably simpler than the present algorithm, which uses a database of 1512 profiles to retrieve total ozone; these profiles are tedious to construct and modify. Though conceptually similar to the SBUV V8 algorithm that was developed about a decade ago, the SBUV and TOMS V9 algorithms differ in detail. The TOMS algorithm uses 3 wavelengths to retrieve the profile while the SBUV algorithm uses 6-9 wavelengths, so TOMS provides less profile information. However, both algorithms have comparable total ozone information, and TOMS V9 can be easily adapted to use additional wavelengths from instruments like GOME, OMI and OMPS to provide better profile information at smaller SZAs. The other significant difference between the two algorithms is that while the SBUV algorithm has been optimized for deriving monthly zonal means by making an appropriate choice of the a priori error covariance matrix, the TOMS algorithm has been optimized for tracking short-term variability using month- and latitude-dependent covariance matrices.
NASA Astrophysics Data System (ADS)
Plaza, Antonio; Chang, Chein-I.; Plaza, Javier; Valencia, David
2006-05-01
The incorporation of hyperspectral sensors aboard airborne/satellite platforms is currently producing a nearly continual stream of multidimensional image data, and this high data volume has quickly introduced new processing challenges. The price paid for the wealth of spatial and spectral information available from hyperspectral sensors is the enormous amount of data that they generate. Several applications exist, however, where having the desired information calculated quickly enough for practical use is highly desirable. High computing performance of algorithm analysis is particularly important in homeland defense and security applications, in which swift decisions often involve detection of (sub-pixel) military targets (including hostile weaponry, camouflage, concealment, and decoys) or chemical/biological agents. In order to speed up the computational performance of hyperspectral imaging algorithms, this paper develops several fast parallel data processing techniques covering four classes of algorithms: (1) unsupervised classification, (2) spectral unmixing, (3) automatic target recognition, and (4) onboard data compression. A massively parallel Beowulf cluster (Thunderhead) at NASA's Goddard Space Flight Center in Maryland is used to measure the parallel performance of the proposed algorithms. In order to explore the viability of developing onboard, real-time hyperspectral data compression algorithms, a Xilinx Virtex-II field programmable gate array (FPGA) is also used in experiments. Our quantitative and comparative assessment of parallel techniques and strategies may help image analysts in the selection of parallel hyperspectral algorithms for specific applications.
NASA Astrophysics Data System (ADS)
Hong, Liu; Qu, Yongzhi; Dhupia, Jaspreet Singh; Sheng, Shuangwen; Tan, Yuegang; Zhou, Zude
2017-09-01
The localized failures of gears introduce cyclic-transient impulses in the measured gearbox vibration signals. These impulses are usually identified from the sidebands around gear-mesh harmonics through the spectral analysis of cyclo-stationary signals. However, in practice, several high-powered applications of gearboxes like wind turbines are intrinsically characterized by nonstationary processes that blur the measured vibration spectra of a gearbox and deteriorate the efficacy of spectral diagnostic methods. Although order-tracking techniques have been proposed to improve the performance of spectral diagnosis for nonstationary signals measured in such applications, the required hardware for the measurement of rotational speed of these machines is often unavailable in industrial settings. Moreover, existing tacho-less order-tracking approaches are usually limited by the high time-frequency resolution requirement, which is a prerequisite for the precise estimation of the instantaneous frequency. To address such issues, a novel fault-signature enhancement algorithm is proposed that can alleviate the spectral smearing without the need of rotational speed measurement. This proposed tacho-less diagnostic technique resamples the measured acceleration signal of the gearbox based on the optimal warping path evaluated from the fast dynamic time-warping algorithm, which aligns a filtered shaft rotational harmonic signal with respect to a reference signal assuming a constant shaft rotational speed estimated from the approximation of operational speed. The effectiveness of this method is validated using both simulated signals from a fixed-axis gear pair under nonstationary conditions and experimental measurements from a 750-kW planetary wind turbine gearbox on a dynamometer test rig. The results demonstrate that the proposed algorithm can identify fault information from typical gearbox vibration measurements carried out in a resource-constrained industrial environment.
Zaugg, Serge; van der Schaar, Mike; Houégnigan, Ludwig; André, Michel
2013-02-01
The analysis of acoustic data from the ocean is a valuable tool to study free-ranging cetaceans and anthropogenic noise. Due to the typically large volume of acquired data, there is a demand for automated analysis techniques. Many cetaceans produce acoustic pulses (echolocation clicks) with a pulse repetition interval (PRI) remaining nearly constant over several pulses. Analyzing these pulse trains is challenging because they are often interleaved. This article presents an algorithm that estimates a pulse's PRI with respect to neighboring pulses. It includes a deinterleaving step that operates via a spectral dissimilarity metric. The sperm whale (SW) produces trains with PRIs between 0.5 and 2 s. As a validation, the algorithm was used for the PRI-based identification of SW click trains with data from the NEMO-ONDE observatory that contained other pulsed sounds, mainly from ship propellers. Separation of files containing SW clicks with a medium and high signal-to-noise ratio from files containing other pulsed sounds gave an area under the receiver operating characteristic curve of 0.96. This study demonstrates that PRI can be used for the automated identification of SW clicks and that deinterleaving via spectral dissimilarity contributes to algorithm performance.
Exploiting spectral content for image segmentation in GPR data
NASA Astrophysics Data System (ADS)
Wang, Patrick K.; Morton, Kenneth D., Jr.; Collins, Leslie M.; Torrione, Peter A.
2011-06-01
Ground-penetrating radar (GPR) sensors provide an effective means for detecting changes in the sub-surface electrical properties of soils, such as changes indicative of landmines or other buried threats. However, most GPR-based pre-screening algorithms only localize target responses along the surface of the earth, and do not provide information regarding an object's position in depth. As a result, feature extraction algorithms are forced to process data from entire cubes of data around pre-screener alarms, which can reduce feature fidelity and hamper performance. In this work, spectral analysis is investigated as a method for locating subsurface anomalies in GPR data. In particular, a 2-D spatial/frequency decomposition is applied to pre-screener flagged GPR B-scans. Analysis of these spatial/frequency regions suggests that aspects (e.g. moments, maxima, mode) of the frequency distribution of GPR energy can be indicative of the presence of target responses. After translating a GPR image to a function of the spatial/frequency distributions at each pixel, several image segmentation approaches can be applied to perform segmentation in this new transformed feature space. To illustrate the efficacy of the approach, a performance comparison between feature processing with and without the image segmentation algorithm is provided.
Comparative Analysis of Haar and Daubechies Wavelet for Hyper Spectral Image Classification
NASA Astrophysics Data System (ADS)
Sharif, I.; Khare, S.
2014-11-01
With the number of channels in the hundreds instead of in the tens, hyperspectral imagery possesses much richer spectral information than multispectral imagery. The increased dimensionality of such hyperspectral data poses a challenge to current techniques for analyzing it: conventional classification methods may not be useful without dimension-reduction pre-processing, so dimension reduction has become a significant part of hyperspectral image processing. This paper presents a comparative analysis of the efficacy of Haar and Daubechies wavelets for dimensionality reduction in image classification. Spectral data reduction using wavelet decomposition is useful because it preserves the distinctions among spectral signatures. Daubechies wavelets optimally capture polynomial trends, while the Haar wavelet is discontinuous and resembles a step function. The performance of these wavelets is compared in terms of classification accuracy and time complexity. This paper shows that wavelet reduction yields better-separated classes and better or comparable classification accuracy. In the context of the dimensionality-reduction algorithm, the classification performance of Daubechies wavelets is found to be better than that of the Haar wavelet, although Daubechies takes more time than Haar. The experimental results demonstrate that the classification system consistently provides over 84% accuracy.
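The comparison is straightforward to reproduce in outline with PyWavelets (assumed here): decompose each pixel spectrum with 'haar' or 'db4' and keep only the coarse approximation coefficients as the reduced feature vector. A minimal sketch with stand-in data; the decomposition level and wavelet orders are illustrative.

```python
import numpy as np
import pywt

def wavelet_reduce(spectra, wavelet="haar", level=3):
    """Reduce each pixel spectrum by multilevel wavelet decomposition,
    keeping only the coarse approximation coefficients."""
    return np.array([pywt.wavedec(s, wavelet, level=level)[0] for s in spectra])

# Toy hyperspectral pixels: 4 pixels x 128 bands.
rng = np.random.default_rng(1)
spectra = rng.normal(size=(4, 128)) + np.linspace(0, 1, 128)

haar = wavelet_reduce(spectra, "haar")   # step-like basis, fastest
db4 = wavelet_reduce(spectra, "db4")     # smoother basis, captures trends
print(spectra.shape, "->", haar.shape, db4.shape)
```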
Grégori, Gérald; Rajwa, Bartek; Patsekin, Valery; Jones, James; Furuki, Motohiro; Yamamoto, Masanobu; Paul Robinson, J
2014-01-01
Hyperspectral cytometry is an emerging technology for single-cell analysis that combines ultrafast optical spectroscopy and flow cytometry. Spectral cytometry systems utilize diffraction gratings or prism-based monochromators to disperse fluorescence signals from multiple labels (organic dyes, nanoparticles, or fluorescent proteins) present in each analyzed bioparticle onto linear detector arrays such as multianode photomultipliers or charge-coupled device sensors. The resultant data, consisting of a series of spectra characterizing every analyzed cell, are not compensated by employing the traditional cytometry approach, but rather are spectrally unmixed utilizing algorithms such as constrained Poisson regression or non-negative matrix factorization. Although implementations of spectral cytometry were envisioned as early as the 1980s, only recently has the development of highly sensitive photomultiplier tube arrays led to the design and construction of functional prototypes and subsequently to the introduction of commercially available systems. This chapter summarizes the historical efforts and work in the field of spectral cytometry performed at Purdue University Cytometry Laboratories and describes the technology developed by Sony Corporation that resulted in the release of the first commercial spectral cytometry system: the Sony SP6800. A brief introduction to spectral data analysis is also provided, with emphasis on the differences between traditional polychromatic and spectral cytometry approaches.
A spectral, quasi-cylindrical and dispersion-free Particle-In-Cell algorithm
Lehe, Remi; Kirchen, Manuel; Andriyash, Igor A.; ...
2016-02-17
We propose a spectral Particle-In-Cell (PIC) algorithm that is based on the combination of a Hankel transform and a Fourier transform. For physical problems that have close-to-cylindrical symmetry, this algorithm can be much faster than full 3D PIC algorithms. In addition, unlike standard finite-difference PIC codes, the proposed algorithm is free of spurious numerical dispersion in vacuum. This algorithm is benchmarked in several situations that are of interest for laser-plasma interactions. These benchmarks show that it avoids a number of numerical artifacts that would otherwise affect the physics in a standard PIC algorithm, including the zero-order numerical Cherenkov effect.
Compact full-motion video hyperspectral cameras: development, image processing, and applications
NASA Astrophysics Data System (ADS)
Kanaev, A. V.
2015-10-01
Emergence of spectral pixel-level color filters has enabled development of hyperspectral Full Motion Video (FMV) sensors operating at visible (EO) and infrared (IR) wavelengths. This new class of hyperspectral cameras opens broad possibilities for military and industrial use. Indeed, such cameras are able to classify materials as well as detect and track spectral signatures continuously in real time, while simultaneously providing an operator with the benefit of enhanced-discrimination color video. Supporting these extensive capabilities requires significant computational processing of the collected spectral data. In general, two processing streams are envisioned for mosaic array cameras. The first is spectral computation, which provides essential spectral content analysis, e.g. detection or classification. The second is presentation of the video to an operator, offering the best display of the content for the task at hand, e.g. spatial resolution enhancement or color coding of the spectral analysis. These processing streams can be executed in parallel, or they can utilize each other's results. Spectral analysis algorithms have been developed extensively; however, demosaicking of more than three equally sampled spectral bands has scarcely been explored. We present a unique approach to demosaicking based on multi-band super-resolution and show the trade-off between spatial resolution and spectral content. Using imagery collected with the developed 9-band SWIR camera, we demonstrate several of its concepts of operation, including detection and tracking. We also compare the demosaicking results to the results of multi-frame super-resolution as well as to combined multi-frame and multi-band processing.
A Subsystem Test Bed for Chinese Spectral Radioheliograph
NASA Astrophysics Data System (ADS)
Zhao, An; Yan, Yihua; Wang, Wei
2014-11-01
The Chinese Spectral Radioheliograph (CSRH) is a solar-dedicated radio interferometric array that will produce high spatial resolution, high temporal resolution, and high spectral resolution images of the Sun simultaneously in the decimetre and centimetre wave range. Digital processing of the intermediate frequency (IF) signal is an important part of a radio telescope. This paper describes a flexible, high-speed digital down conversion (DDC) system for the CSRH that applies complex mixing, parallel filtering, and extraction (decimation) algorithms to process the IF signal, and that incorporates canonic-signed-digit coding and a bit-plane method to improve program efficiency. The DDC system is intended to be a subsystem test bed for simulation and testing for CSRH. Software algorithms for simulation and FPGA hardware-language implementations are written which use few hardware resources while achieving high performance, such as processing a high-speed (1 GHz) data flow with 10 MHz spectral resolution. An experiment with the test bed is illustrated using geostationary satellite data observed on March 20, 2014. Because the algorithms on the FPGA are easily altered, the data can be recomputed with different digital signal processing algorithms to select the optimum algorithm.
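The core DDC chain (complex mixing, low-pass filtering, decimation) can be sketched in a few lines, here in floating-point Python with scipy rather than the paper's fixed-point FPGA implementation; the tone frequencies, tap count, and decimation factor are illustrative.

```python
import numpy as np
from scipy import signal

def digital_down_convert(x, fs, f_center, decim, numtaps=129):
    """Complex-mix an IF record to baseband, low-pass filter, and decimate."""
    n = np.arange(len(x))
    mixed = x * np.exp(-2j * np.pi * f_center / fs * n)   # complex mixing
    lp = signal.firwin(numtaps, cutoff=fs / (2 * decim), fs=fs)
    filtered = signal.lfilter(lp, 1.0, mixed)             # anti-alias low-pass
    return filtered[::decim], fs / decim                  # decimation

fs = 1e9                                  # 1 GHz input data flow, as in the test bed
t = np.arange(20000) / fs
x = np.cos(2 * np.pi * 123e6 * t)         # a tone at 123 MHz
base, fs_out = digital_down_convert(x, fs, f_center=120e6, decim=50)
# The tone now appears at 3 MHz in the 20 MHz-wide baseband output.
```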
A real-time spectral mapper as an emerging diagnostic technology in biomedical sciences.
Epitropou, George; Kavvadias, Vassilis; Iliou, Dimitris; Stathopoulos, Efstathios; Balas, Costas
2013-01-01
Real-time spectral imaging and mapping at video rates can have a tremendous impact not only on diagnostic sciences but also on fundamental physiological problems. We report the first real-time spectral mapper based on the combination of snapshot spectral imaging and spectral estimation algorithms. Performance evaluation revealed that six-band imaging combined with the Wiener algorithm provided high estimation accuracy, with error levels lying within the experimental noise. This high accuracy is accompanied by spectral mapping that is three orders of magnitude faster than scanning spectral systems. This new technology is intended to enable spectral mapping at nearly video rates in all kinds of dynamic bio-optical effects, as well as in applications where the target-probe relative position changes rapidly and randomly.
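A minimal sketch of Wiener spectral estimation of the kind the abstract credits: given the spectral responses of the six channels and a prior covariance learned from training spectra, the estimation matrix follows in closed form. All data here are stand-ins; the band responses and noise level are assumptions, not the authors' calibration.

```python
import numpy as np

rng = np.random.default_rng(2)

n_bands, n_wl = 6, 61                      # six filters, spectra sampled at 61 wavelengths
A = rng.random((n_bands, n_wl))            # hypothetical spectral responses of the 6 channels
train = rng.random((500, n_wl))            # training reflectance spectra (stand-in data)

Cs = np.cov(train, rowvar=False)           # prior spectral covariance
Cn = 1e-4 * np.eye(n_bands)                # sensor noise covariance (assumed)

# Wiener estimation matrix: W = Cs A^T (A Cs A^T + Cn)^-1
W = Cs @ A.T @ np.linalg.inv(A @ Cs @ A.T + Cn)

true_spec = train[0]
measurement = A @ true_spec                # what the six-band camera records
estimate = W @ measurement                 # reconstructed full spectrum
```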
NASA Astrophysics Data System (ADS)
Yadav, Deepti; Arora, M. K.; Tiwari, K. C.; Ghosh, J. K.
2016-04-01
Hyperspectral imaging is a powerful tool in the field of remote sensing and has been used for many applications such as mineral detection, detection of landmines, and target detection. Major issues in target detection using HSI are spectral variability, noise, small target size, huge data dimensionality, high computation cost, and complex backgrounds. Many popular detection algorithms do not work for difficult targets, such as small or camouflaged objects, and may produce high false-alarm rates. Thus, target/background discrimination is a key issue, and analyzing a target's behaviour in realistic environments is crucial for the accurate interpretation of hyperspectral imagery. Using standard spectral libraries to study a target's spectral behaviour has the limitation that the library targets are measured under environmental conditions different from those of the application. This study instead uses spectral data of the same targets collected during acquisition of the HSI image. The paper analyzes target spectra so that each target can be spectrally distinguished from a mixture of spectral data. An artificial neural network (ANN) is used to identify the spectral ranges for reducing the data, and its efficacy for improving target detection is then verified. The ANN results propose discriminating band ranges for the targets; these ranges were then used to perform target detection with four popular spectral-matching detection algorithms. The results of the algorithms were analyzed using ROC curves to evaluate the effectiveness of the ANN-suggested ranges over the full spectrum for detecting the desired targets. In addition, a comparative assessment of the algorithms was performed using ROC curves.
NASA Astrophysics Data System (ADS)
Wang, Jinnian; Zheng, Lanfen; Tong, Qingxi
1998-08-01
In this paper, we report research results on applying hyperspectral remote sensing data to the identification and classification of wetland plant species and associations. Hyperspectral data were acquired by the Modular Airborne Imaging Spectrometer (MAIS) over the Poyang Lake wetland, China. A derivative spectral matching algorithm was used in the hyperspectral vegetation analysis, with field-measured spectra serving as references for the derivative spectral matching. In the study area, seven wetland plant associations were identified and classified with an overall average accuracy of 84.03%.
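Derivative spectral matching can be illustrated compactly: differentiate both the pixel spectrum and the field-reference spectra, then match by correlation, which suppresses smooth offsets such as illumination differences. A sketch with synthetic spectra, assuming numpy, not the MAIS processing chain.

```python
import numpy as np

def derivative_match(pixel, references):
    """Match a pixel spectrum to reference spectra by correlating first
    derivatives, which suppresses smooth additive offsets."""
    d_pix = np.gradient(pixel)
    scores = [np.corrcoef(d_pix, np.gradient(ref))[0, 1] for ref in references]
    return int(np.argmax(scores)), scores

wl = np.linspace(400, 900, 100)
ref_a = np.exp(-((wl - 550) / 40) ** 2)       # stand-in vegetation-like spectrum
ref_b = np.exp(-((wl - 700) / 60) ** 2)
# Pixel = reference A plus a constant offset and a little noise.
pixel = 0.2 + ref_a + 0.01 * np.random.default_rng(3).normal(size=wl.size)
best, scores = derivative_match(pixel, [ref_a, ref_b])   # best -> 0
```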
Hao, Yong; Sun, Xu-Dong; Yang, Qiang
2012-12-01
A variable selection strategy combined with locally linear embedding (LLE) was introduced for the analysis of complex samples by near infrared spectroscopy (NIRS). Three methods, Monte Carlo uninformative variable elimination (MCUVE), the successive projections algorithm (SPA), and MCUVE combined with SPA, were used for eliminating redundant spectral variables. Partial least squares regression (PLSR) and LLE-PLSR were used for modeling the complex samples. The results show that MCUVE can both extract effective informative variables and improve the precision of the models. Compared with PLSR models, LLE-PLSR models achieve more accurate analysis results. MCUVE combined with LLE-PLSR is an effective modeling method for NIRS quantitative analysis.
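A compact sketch of an MCUVE-style stability ranking feeding a PLSR model, assuming scikit-learn: PLS models are fit on many random sample subsets, and variables are ranked by the mean-to-standard-deviation ratio of their regression coefficients. The run count, subset fraction, and number of kept variables are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def mcuve_select(X, y, n_runs=100, frac=0.8, n_keep=20, n_comp=5, seed=0):
    """MCUVE-style ranking: fit PLS on random sample subsets and rank
    variables by |mean/std| of their regression coefficients."""
    rng = np.random.default_rng(seed)
    coefs = []
    for _ in range(n_runs):
        idx = rng.choice(X.shape[0], size=int(frac * X.shape[0]), replace=False)
        pls = PLSRegression(n_components=n_comp).fit(X[idx], y[idx])
        coefs.append(pls.coef_.ravel())
    coefs = np.array(coefs)
    stability = np.abs(coefs.mean(axis=0) / (coefs.std(axis=0) + 1e-12))
    return np.argsort(stability)[::-1][:n_keep]

rng = np.random.default_rng(4)
X = rng.normal(size=(120, 200))               # 120 NIR spectra x 200 variables
y = X[:, 50] * 2 + X[:, 120] - X[:, 10] + 0.1 * rng.normal(size=120)
selected = mcuve_select(X, y)                 # informative variable indices
final_model = PLSRegression(n_components=5).fit(X[:, selected], y)
```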
The Raman spectrum character of skin tumor induced by UVB
NASA Astrophysics Data System (ADS)
Wu, Shulian; Hu, Liangjun; Wang, Yunxia; Li, Yongzeng
2016-03-01
In our study, the process of skin canceration induced by UVB was analyzed from the perspective of the tissue spectrum. A home-made Raman spectral system with a millimeter-order excitation laser spot size, combined with multivariate statistical analysis, was used to monitor the skin changes induced by UVB irradiation, and its discrimination ability was evaluated. Raman scattering signals of SCC and normal skin were acquired, and differences in the Raman spectra were revealed. Linear discriminant analysis (LDA) based on principal component analysis (PCA) was employed to generate diagnostic algorithms for the classification of skin SCC and normal tissue. The results indicated that Raman spectroscopy combined with PCA-LDA demonstrates good potential for improving the diagnosis of skin cancers.
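The PCA-LDA pipeline is standard and reads almost directly as scikit-learn code; the sketch below trains it on synthetic stand-in "Raman spectra" in which the two classes differ by a small band shift. The component count and data are illustrative, not the authors' settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
# Stand-in spectra: 40 "normal" and 40 "SCC" spectra over 600 wavenumber bins,
# with a small class-dependent band shift as the discriminating feature.
bins = np.arange(600)
normal = np.exp(-((bins - 300) / 20) ** 2) + 0.05 * rng.normal(size=(40, 600))
scc = np.exp(-((bins - 310) / 20) ** 2) + 0.05 * rng.normal(size=(40, 600))
X = np.vstack([normal, scc])
y = np.array([0] * 40 + [1] * 40)

# PCA compresses the spectra; LDA builds the diagnostic axis on the scores.
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
print(cross_val_score(model, X, y, cv=5).mean())
```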
A complex guided spectral transform Lanczos method for studying quantum resonance states
Yu, Hua-Gen
2014-12-28
A complex guided spectral transform Lanczos (cGSTL) algorithm is proposed to compute both bound and resonance states, including energies, widths, and wavefunctions. The algorithm comprises two layers of complex-symmetric Lanczos iterations. A short inner-layer iteration produces a set of complex formally orthogonal Lanczos (cFOL) polynomials, which are used to span the guided spectral transform function determined by a retarded Green operator. An outer-layer iteration is then carried out with the transform function to compute the eigen-pairs of the system. The guided spectral transform function is designed to have the same wavefunctions as the eigenstates of the original Hamiltonian in the spectral range of interest. Therefore, the energies and/or widths of bound or resonance states can easily be computed with their wavefunctions, or by using a root-searching method from the guided spectral transform surface. The new cGSTL algorithm is applied to bound and resonance states of HO₂ and compared to previous calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, G A
2004-06-08
In general, the Phase Retrieval from Modulus problem is very difficult. In this report, we solve the difficult, but somewhat more tractable, case in which we constrain the solution to a minimum-phase reconstruction. We exploit the real- and imaginary-part sufficiency properties of the Fourier and Hilbert Transforms of causal sequences to develop an algorithm for reconstructing spectral phase given only spectral modulus. The algorithm uses homomorphic signal processing methods with the complex cepstrum. The formal problem of interest is: given measurements of only the modulus |H(k)| (no phase) of the Discrete Fourier Transform (DFT) of a real, finite-length, stable, causal time-domain signal h(n), compute a minimum-phase reconstruction ĥ(n) of the signal. Then compute the phase of ĥ(n) using a DFT, and exploit the result as an estimate of the phase of h(n). The development of the algorithm is quite involved, but the final algorithm and its implementation are very simple. This work was motivated by a Phase Retrieval from Modulus problem that arose in LLNL Defense Sciences Engineering Division (DSED) projects in lightning protection for buildings. The measurements are limited to modulus-only spectra from a spectrum analyzer. However, it is desired to perform system identification on the building to compute impulse responses and transfer functions that describe the amount of lightning energy transferred from the outside of the building to the inside. This calculation requires knowledge of the entire signals (both modulus and phase). The algorithm and software described in this report are proposed as an approach to phase retrieval that can be used for programmatic needs. This report presents a brief tutorial description of the mathematical problem and the derivation of the phase retrieval algorithm. The efficacy of the theory is demonstrated using simulated signals that meet the assumptions of the algorithm. For the noiseless case, the reconstructions are extremely accurate. When moderate to heavy simulated white Gaussian noise was added, the algorithm performance remained reasonably robust, especially in the low-frequency part of the spectrum, which is the part of most interest for lightning protection. Limitations of the algorithm include the following: (1) It does not account for noise in the given spectral modulus. Fortunately, the lightning protection signals of interest generally have a reasonably high signal-to-noise ratio (SNR). (2) The DFT length N must be even and larger than the length of the nonzero part of the measured signals. These constraints are simple to meet in practice. (3) Regardless of the properties of the actual signal h(n), the phase retrieval results are constrained to have the minimum-phase property. In most problems of practical interest, these assumptions are very reasonable and probably valid, and they are reasonable assumptions for lightning protection applications.
Proposed future work includes (a) evaluating the efficacy of the algorithm with real lightning protection signals from programmatic applications, (b) performing a more rigorous analysis of noise effects, (c) using the algorithm along with advanced system identification algorithms to estimate impulse responses and transfer functions, (d) developing algorithms to deal with measured partial (truncated) spectral moduli, and (e) R&D of phase retrieval algorithms that specifically deal with general (not necessarily minimum-phase) signals and noisy spectral moduli.
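The minimum-phase reconstruction itself is compact: take the real cepstrum of log|H(k)|, fold it onto the causal part, and exponentiate back. Below is a sketch of that textbook homomorphic procedure in numpy, not the report's code; the even-N requirement from the report is asserted, and the test signal is an illustrative minimum-phase sequence.

```python
import numpy as np

def minimum_phase_from_modulus(mod):
    """Reconstruct a minimum-phase signal from DFT modulus samples alone,
    via the real cepstrum of log|H(k)| (homomorphic processing). Assumes
    N even and |H| taken from a real signal (conjugate-symmetric DFT)."""
    N = len(mod)
    assert N % 2 == 0, "DFT length must be even"
    cep = np.fft.ifft(np.log(mod)).real          # real cepstrum of log-modulus
    fold = np.zeros(N)
    fold[0] = cep[0]
    fold[1:N // 2] = 2.0 * cep[1:N // 2]         # fold onto the causal part
    fold[N // 2] = cep[N // 2]
    H_min = np.exp(np.fft.fft(fold))             # minimum-phase spectrum
    return np.fft.ifft(H_min).real               # minimum-phase time signal

# Sanity check with a known minimum-phase sequence (zeros inside unit circle).
h = np.array([1.0, 0.5, 0.25] + [0.0] * 61)     # N = 64
mod = np.abs(np.fft.fft(h))
h_rec = minimum_phase_from_modulus(mod)
print(np.max(np.abs(h - h_rec)))                 # tiny (limited by cepstral aliasing)
```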
NASA Astrophysics Data System (ADS)
Lin, Z.; Kim-Hak, D.; Popp, B. N.; Wallsgrove, N.; Kagawa-Viviani, A.; Johnson, J.
2017-12-01
Cavity ring-down spectroscopy (CRDS) is a technology based on the spectral absorption of gas molecules of interest at specific spectral regions. The CRDS technique enables the analysis of hydrogen and oxygen stable isotope ratios of water by directly measuring individual isotopologue absorption peaks such as H16OH, H18OH, and D16OH. Early work demonstrated that the accuracy of isotope analysis by CRDS and other laser-based absorption techniques could be compromised by spectral interference from organic compounds, in particular methanol and ethanol, which can be prevalent in ecologically-derived waters. There have been several methods developed by various research groups including Picarro to address the organic interference challenge. Here, we describe an organic fitter and a post-processing algorithm designed to improve the accuracy of the isotopic analysis of the "organic contaminated" water specifically for Picarro models L2130-i and L2140-i. To create the organic fitter, the absorption features of methanol around 7200 cm-1 were characterized and incorporated into spectral analysis. Since there was residual interference remaining after applying the organic fitter, a statistical model was also developed for post-processing correction. To evaluate the performance of the organic fitter and the postprocessing correction, we conducted controlled experiments on the L2130-i for two water samples with different isotope ratios blended with varying amounts of methanol (0-0.5%) and ethanol (0-5%). When the original fitter was not used for spectral analysis, the addition of 0.5% methanol changed the apparent isotopic composition of the water samples by +62‰ for δ18O values and +97‰ for δ2H values, and the addition of 5% ethanol changed the apparent isotopic composition by -0.5‰ for δ18O values and -3‰ for δ2H values. When the organic fitter was used for spectral analysis, the maximum methanol-induced errors were reduced to +4‰ for δ18O values and +5‰ for δ2H values, and the maximum ethanol-induced errors were unchanged. When the organic fitter was combined with the post-processing correction, up to 99.8% of the total methanol-induced errors and 96% of the total ethanol-induced errors could be corrected. The applicability of the algorithm to natural samples such as plant and soil waters will be investigated.
Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang
2014-06-01
We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distributions to deal with co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models, including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect peaks with lower false discovery rates than the existing algorithms, and that a less complicated peak-picking model is a promising alternative to the more complicated and widely used EMG mixture models.
Application of Methods of Numerical Analysis to Physical and Engineering Data.
1980-10-15
directed algorithm would seem to be called for. However, I(0) is itself a random process, making its gradient too unreliable for such a sensitive algorithm...radiation energy on the detector. Active laser systems, on the other hand, have created now the possibility for extremely narrow pass-band systems...emitted by the earth and its atmosphere. The broad spectral range was selected so that the field of view of the detector could be narrowed to obtain
Qualitative analysis of pure and adulterated canola oil via SIMCA
NASA Astrophysics Data System (ADS)
Basri, Katrul Nadia; Khir, Mohd Fared Abdul; Rani, Rozina Abdul; Sharif, Zaiton; Rusop, M.; Zoolfakar, Ahmad Sabirin
2018-05-01
This paper demonstrates the use of near infrared (NIR) spectroscopy to classify pure and adulterated samples of canola oil. The Soft Independent Modeling of Class Analogy (SIMCA) algorithm was implemented to discriminate the samples into their classes. The spectral data were divided into training and validation datasets by a fixed ratio of 7:3 using the Kennard-Stone algorithm. The accuracy of the resulting model is 0.99, whereas the sensitivity and precision are 0.92 and 1.00, respectively. The results show that the classification model is robust enough to perform qualitative analysis of canola oil in future applications.
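The Kennard-Stone split mentioned above is easy to state in code: seed with the two most mutually distant spectra, then repeatedly add the sample farthest from the already-chosen set. A minimal numpy sketch with stand-in spectra; the 7:3 ratio matches the paper, everything else is illustrative.

```python
import numpy as np

def kennard_stone(X, n_train):
    """Kennard-Stone sample selection: start from the two most distant
    samples, then repeatedly add the sample farthest from the chosen set."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    chosen = list(np.unravel_index(np.argmax(d), d.shape))
    while len(chosen) < n_train:
        rest = [i for i in range(len(X)) if i not in chosen]
        # Distance of each remaining sample to its nearest chosen sample.
        min_d = d[np.ix_(rest, chosen)].min(axis=1)
        chosen.append(rest[int(np.argmax(min_d))])
    rest = [i for i in range(len(X)) if i not in chosen]
    return np.array(chosen), np.array(rest)

rng = np.random.default_rng(6)
spectra = rng.random((50, 100))                 # 50 NIR spectra x 100 variables
train_idx, valid_idx = kennard_stone(spectra, n_train=35)   # a 7:3 split
```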
NASA Astrophysics Data System (ADS)
Shecter, Liat; Oiknine, Yaniv; August, Isaac; Stern, Adrian
2017-09-01
Recently, we presented a Compressive Sensing Miniature Ultra-spectral Imaging System (CS-MUSI). This system consists of a single liquid crystal (LC) phase retarder as a spectral modulator and a grayscale sensor array that captures a multiplexed signal of the imaged scene. By designing the LC spectral modulator in compliance with compressive sensing (CS) guidelines and applying appropriate algorithms, we demonstrated reconstruction of spectral (hyper/ultra) datacubes from an order of magnitude fewer samples than taken by conventional sensors. The LC modulator is designed to have an effective width of a few tens of micrometers; it is therefore prone to imperfections and spatial nonuniformity. In this work, we study this nonuniformity and present a mathematical algorithm that allows the inference of the spectral transmission over the entire cell area from only a few calibration measurements.
Super-Nyquist shaping and processing technologies for high-spectral-efficiency optical systems
NASA Astrophysics Data System (ADS)
Jia, Zhensheng; Chien, Hung-Chang; Zhang, Junwen; Dong, Ze; Cai, Yi; Yu, Jianjun
2013-12-01
Implementations of super-Nyquist pulse generation, either digitally using a digital-to-analog converter (DAC) or with an optical filter at the transmitter side, are introduced. Three corresponding signal processing algorithms at the receiver are presented and compared for high-spectral-efficiency (SE) optical systems employing spectral prefiltering. These algorithms are designed to mitigate the inter-symbol interference (ISI) and inter-channel interference (ICI) impairments caused by the bandwidth constraint, and include a 1-tap constant modulus algorithm (CMA) with 3-tap maximum likelihood sequence estimation (MLSE), a regular CMA and digital filter with 2-tap MLSE, and a constant multi-modulus algorithm (CMMA) with 2-tap MLSE. The principles and prefiltering tolerance are given through numerical and experimental results.
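A minimal sketch of the constant modulus algorithm at the heart of these receivers: an FIR equalizer is adapted by stochastic gradient descent so that the output modulus approaches a constant. This is a generic CMA on a toy ISI channel, assuming numpy; it omits the MLSE stages and the specific 1-tap/3-tap structures of the paper, and the tap count, step size, and channel are illustrative.

```python
import numpy as np

def cma_equalize(x, n_taps=11, mu=1e-3, n_iter=20000):
    """Constant modulus algorithm: adapt FIR taps so the equalized symbols
    approach a constant modulus R (here 1, e.g. for QPSK)."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                     # center-spike initialization
    R = 1.0
    y_out = []
    for n in range(n_taps, min(len(x), n_iter)):
        xin = x[n - n_taps:n][::-1]
        y = w @ xin
        e = y * (np.abs(y) ** 2 - R)         # CMA error term
        w -= mu * e * np.conj(xin)           # stochastic-gradient tap update
        y_out.append(y)
    return np.array(y_out), w

rng = np.random.default_rng(7)
syms = (rng.choice([-1, 1], 4000) + 1j * rng.choice([-1, 1], 4000)) / np.sqrt(2)
channel = np.array([1.0, 0.3 + 0.2j])        # mild ISI channel (stand-in)
rx = np.convolve(syms, channel)[:4000] + 0.01 * rng.normal(size=4000)
y, w = cma_equalize(rx)                      # |y| converges toward 1
```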
Hyperspectral Imaging Sensors and the Marine Coastal Zone
NASA Technical Reports Server (NTRS)
Richardson, Laurie L.
2000-01-01
Hyperspectral imaging sensors greatly expand the potential of remote sensing to assess, map, and monitor marine coastal zones. Each pixel in a hyperspectral image contains an entire spectrum of information. As a result, hyperspectral image data can be processed in two very different ways: by image classification techniques, to produce mapped outputs of features in the image on a regional scale; and by spectral analysis of the data embedded within each pixel of the image. The latter is particularly useful in marine coastal zones because of the spectral complexity of suspended as well as benthic features found in these environments. Spectral-based analysis of hyperspectral (AVIRIS) imagery was carried out to investigate a marine coastal zone of South Florida, USA. Florida Bay is a phytoplankton-rich estuary characterized by taxonomically distinct phytoplankton assemblages and extensive seagrass beds. End-member spectra were extracted from AVIRIS image data corresponding to ground-truth sample stations and well-known field sites. Spectral libraries were constructed from the AVIRIS end-member spectra and used to classify images using the Spectral Angle Mapper (SAM) algorithm, a spectral-based approach that compares the spectrum in each pixel of an image with each spectrum in a spectral library. Using this approach, different phytoplankton assemblages containing diatoms, cyanobacteria, and green microalgae, as well as benthic communities (seagrasses), were mapped.
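The SAM rule is simple enough to sketch directly: classify each pixel by the smallest angle between its spectrum and the library spectra, leaving large-angle pixels unclassified. A numpy sketch with synthetic data; the angle threshold and cube construction are illustrative. Note that SAM is insensitive to overall brightness scaling, which the toy example exercises.

```python
import numpy as np

def spectral_angle_mapper(cube, library, max_angle=0.15):
    """Classify each pixel by the smallest spectral angle (radians) to the
    library spectra; pixels above max_angle stay unclassified (-1)."""
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b)
    pn = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    ln = library / np.linalg.norm(library, axis=1, keepdims=True)
    angles = np.arccos(np.clip(pn @ ln.T, -1.0, 1.0))    # (n_pixels, n_classes)
    labels = angles.argmin(axis=1)
    labels[angles.min(axis=1) > max_angle] = -1
    return labels.reshape(h, w)

rng = np.random.default_rng(8)
library = rng.random((3, 224))                 # 3 end-member spectra, 224 bands
# Pixels drawn from the library with random brightness scaling (SAM ignores it).
cube = library[rng.integers(0, 3, size=(64, 64))] * rng.uniform(0.8, 1.2, (64, 64, 1))
classes = spectral_angle_mapper(cube, library)
```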
NASA Astrophysics Data System (ADS)
Shi, Cheng; Liu, Fang; Li, Ling-Ling; Hao, Hong-Xia
2014-01-01
The goal of pan-sharpening is to obtain an image with higher spatial resolution and better spectral information. However, the resolution of the pan-sharpened image is seriously affected by thin clouds. For a single image, filtering algorithms are widely used to remove clouds; such methods remove clouds effectively, but the loss of detail in the cloud-removed image is also serious. To solve this problem, a pan-sharpening algorithm that removes thin clouds via mask dodging and the nonsubsampled shift-invariant shearlet transform (NSST) is proposed. For low-resolution multispectral (LR MS) and high-resolution panchromatic images with thin clouds, a mask dodging method is used to remove the clouds. For the cloud-removed LR MS image, an adaptive principal component analysis transform is proposed to balance the spectral information and spatial resolution in the pan-sharpened image. Since the cloud-removal process causes a loss of detail, a weight matrix is designed to enhance the details of the cloud regions in the pan-sharpening process while leaving non-cloud regions unchanged; the details of the image are obtained by the NSST. Experimental results, assessed visually and with evaluation metrics, demonstrate that the proposed method preserves spectral information and spatial resolution better, especially for images with thin clouds.
NASA Technical Reports Server (NTRS)
Tarabalka, Y.; Tilton, J. C.; Benediktsson, J. A.; Chanussot, J.
2012-01-01
The Hierarchical SEGmentation (HSEG) algorithm, which combines region object finding with region object clustering, has given good performance for multi- and hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations, and the automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. Two classification-based approaches for automatic marker selection are adapted and compared for this purpose. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. Three different implementations of the M-HSEG method are proposed and their performances in terms of classification accuracy are compared. The experimental results, presented for three hyperspectral airborne images, demonstrate that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for remote sensing image analysis.
NASA Technical Reports Server (NTRS)
Eren, K.
1980-01-01
The mathematical background of spectral analysis as applied to geodetic applications is summarized. The resolution (cut-off frequency) of the GEOS-3 altimeter data is examined by determining the shortest recoverable wavelength (corresponding to the cut-off frequency), using data from some 18 profiles. The total power (variance) in the sea surface topography is computed with respect to the reference ellipsoid as well as with respect to the GEM-9 surface. A fast inversion algorithm for simple and block Toeplitz matrices and its application to least squares collocation is explained; this algorithm yields a considerable gain in computer time and storage in comparison with conventional least squares collocation. Frequency-domain least squares collocation techniques are also introduced and applied to estimating gravity anomalies from GEOS-3 altimeter data. These techniques substantially reduce the computer time and storage requirements associated with conventional least squares collocation. Numerical examples demonstrate the efficiency and speed of these techniques.
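The Toeplitz advantage alluded to here can be demonstrated with scipy's Levinson-based solver, which needs only the first column of the matrix. A sketch, assuming scipy, with a stand-in stationary covariance sequence rather than real altimeter covariances.

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

# Covariance matrices from evenly spaced along-track samples are Toeplitz:
# C[i, j] depends only on |i - j|. Levinson-type solvers exploit this for
# O(n^2) work and O(n) storage instead of O(n^3) work and O(n^2) storage.
n = 500
lags = np.arange(n)
c = np.exp(-lags / 50.0)                       # stand-in stationary covariance sequence
b = np.random.default_rng(9).normal(size=n)   # observation vector

x_fast = solve_toeplitz(c, b)                  # Levinson recursion, uses only c
x_ref = np.linalg.solve(toeplitz(c), b)        # dense reference solution
print(np.allclose(x_fast, x_ref))              # True
```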
Spectral Regression Discriminant Analysis for Hyperspectral Image Classification
NASA Astrophysics Data System (ADS)
Pan, Y.; Wu, J.; Huang, H.; Liu, J.
2012-08-01
Dimensionality reduction algorithms, which aim to select a small set of efficient and discriminant features, have attracted great attention for hyperspectral image classification. Manifold learning methods are popular for dimensionality reduction, such as Locally Linear Embedding, Isomap, and Laplacian Eigenmap. However, a disadvantage of many manifold learning methods is that their computations usually involve eigen-decomposition of dense matrices, which is expensive in both time and memory. In this paper, we introduce a new dimensionality reduction method, called Spectral Regression Discriminant Analysis (SRDA). SRDA casts the problem of learning an embedding function into a regression framework, which avoids eigen-decomposition of dense matrices. Also, with the regression-based framework, different kinds of regularizers can be naturally incorporated into the algorithm, which makes it more flexible. It can make efficient use of data points to discover the intrinsic discriminant structure in the data. Experimental results on the Washington DC Mall and AVIRIS Indian Pines hyperspectral data sets demonstrate the effectiveness of the proposed method.
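A sketch of the SRDA recipe, assuming numpy and scikit-learn: for labeled data the embedding responses are spanned by class-indicator vectors, so they can be obtained by orthogonalization (QR) instead of an eigen-decomposition, and each embedding direction then comes from one regularized regression. The regularization strength and stand-in data are illustrative.

```python
import numpy as np
from sklearn.linear_model import Ridge

def srda_fit(X, y, alpha=1.0):
    """SRDA-style sketch: orthogonalize class-indicator responses, then
    solve one ridge regression per response instead of a dense eigenproblem."""
    classes = np.unique(y)
    n = len(y)
    # Class indicators preceded by the all-ones vector, then orthogonalized.
    Y = np.column_stack([np.ones(n)] + [(y == c).astype(float) for c in classes])
    Q, _ = np.linalg.qr(Y)
    responses = Q[:, 1:classes.size]        # drop the trivial constant response
    # One ridge regression per embedding dimension (c - 1 of them).
    W = np.column_stack([
        Ridge(alpha=alpha, fit_intercept=True).fit(X, r).coef_
        for r in responses.T
    ])
    return W                                 # projection: X @ W gives the embedding

rng = np.random.default_rng(10)
X = rng.normal(size=(300, 180))              # 300 pixels x 180 bands (stand-in)
y = rng.integers(0, 3, size=300)
W = srda_fit(X, y)                           # 180 x 2 projection matrix
embedded = X @ W
```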
Spec Tool; an online education and research resource
NASA Astrophysics Data System (ADS)
Maman, S.; Shenfeld, A.; Isaacson, S.; Blumberg, D. G.
2016-06-01
Education and public outreach (EPO) activities related to remote sensing, space, planetary, and geophysical sciences have been widely developed in the Earth and Planetary Image Facility (EPIF) at Ben-Gurion University of the Negev, Israel. These programs aim to motivate the learning of geoscientific and technological disciplines. For over a decade, the facility has hosted research and outreach activities for researchers, the local community, school pupils, students, and educators. Because suitable software and data are often neither available nor affordable, the EPIF Spec tool was created as a web-based resource to assist researchers and students with initial spectral analysis. The tool is used both in academic courses and in outreach education programs, and it enables a better understanding of the theory of spectroscopy and imaging spectroscopy in a 'hands-on' activity. This tool is available online and provides spectra visualization tools and basic analysis algorithms, including spectral plotting, spectral angle mapping, and linear unmixing. The tool can visualize spectral signatures from the USGS spectral library as well as additional spectra collected at the EPIF, such as those of dunes in southern Israel and Turkmenistan. For researchers and educators, the tool allows loading locally collected samples for further analysis.
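Of the three analysis tools named, linear unmixing is the least obvious to write down, so here is a hedged sketch using scipy's non-negative least squares, with the sum-to-one constraint enforced softly by an appended row; the end-member spectra are stand-ins, and this is not necessarily how Spec Tool implements it.

```python
import numpy as np
from scipy.optimize import nnls

def linear_unmix(pixel, endmembers):
    """Constrained linear unmixing: non-negative abundances that best
    reproduce the pixel as a mix of end-member spectra (sum-to-one is
    enforced softly via an appended constraint row)."""
    E = np.vstack([endmembers.T, np.ones((1, endmembers.shape[0]))])
    p = np.append(pixel, 1.0)
    abundances, residual = nnls(E, p)
    return abundances, residual

rng = np.random.default_rng(11)
endmembers = rng.random((3, 200))            # 3 library spectra, 200 bands
truth = np.array([0.6, 0.3, 0.1])
pixel = truth @ endmembers + 0.005 * rng.normal(size=200)
est, _ = linear_unmix(pixel, endmembers)
print(est)                                    # close to [0.6, 0.3, 0.1]
```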
Futamura, Koji; Sekino, Masashi; Hata, Akihiro; Ikebuchi, Ryoyo; Nakanishi, Yasutaka; Egawa, Gyohei; Kabashima, Kenji; Watanabe, Takeshi; Furuki, Motohiro; Tomura, Michio
2015-09-01
Flow cytometric analysis with multicolor fluoroprobes is an essential method for detecting biological signatures of cells. Here, we present a new full-spectrum flow cytometer (spectral-FCM). Unlike a conventional flow cytometer, this spectral-FCM acquires the emitted fluorescence of all probes across the full spectrum from each cell with a 32-channel sequential PMT unit after dispersion by a prism, and extracts the signal of each fluoroprobe based on its spectral shape using a unique algorithm that is fast, highly sensitive, accurate, automatic, and real-time. The spectral-FCM detects the continuous changes in emission spectra from green to red of the photoconvertible protein KikGR with high spectral resolution, and separates spectrally adjacent fluoroprobes such as FITC (emission peak (Em) 519 nm) and EGFP (Em 507 nm). Moreover, the spectral-FCM can measure and subtract the autofluorescence of each cell, providing increased signal-to-noise ratios and improved resolution of dim samples, which makes it a transformative technology for the investigation of single-cell state and function. These advances make it possible to perform 11-color fluorescence analysis to visualize the movement of multilineage immune cells using KikGR-expressing mice. Thus, the novel spectral flow cytometry improves the combined use of spectrally adjacent fluorescent proteins and multicolor fluorochromes in metabolically active cells for the investigation of not only the immune system but also other research and clinical fields. © 2015 International Society for Advancement of Cytometry.
Dysli, Chantal; Enzmann, Volker; Sznitman, Raphael; Zinkernagel, Martin S.
2015-01-01
Purpose: Quantification of retinal layers using automated segmentation of optical coherence tomography (OCT) images allows for longitudinal studies of retinal and neurological disorders in mice. The purpose of this study was to compare the performance of automated retinal layer segmentation algorithms with manual segmentation in mice using the Spectralis OCT. Methods: Spectral domain OCT images from 55 mice from three different strains were analyzed in total. The OCT scans from 22 C57Bl/6, 22 BALBc, and 11 C3A.Cg-Pde6b+Prph2Rd2/J mice were automatically segmented using three commercially available automated retinal segmentation algorithms and compared to manual segmentation. Results: Fully automated segmentation performed well in mice and showed coefficients of variation (CV) below 5% for the total retinal volume. However, all three automated segmentation algorithms yielded much thicker total retinal thickness values compared to manual segmentation data (P < 0.0001) due to segmentation errors in the basement membrane. Conclusions: Whereas the automated retinal segmentation algorithms performed well for the inner layers, the retinal pigment epithelium (RPE) was delineated within the sclera, leading to consistently thicker measurements of the photoreceptor layer and the total retina. Translational Relevance: The introduction of spectral domain OCT allows for accurate imaging of the mouse retina. Exact quantification of retinal layer thicknesses in mice is important to study layers of interest under various pathological conditions. PMID:26336634
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soner Yorgun, M.; Rood, Richard B.
An object-based evaluation method using a pattern recognition algorithm (i.e., classification trees) is applied to the simulated orographic precipitation for idealized experimental setups using the National Center of Atmospheric Research (NCAR) Community Atmosphere Model (CAM) with the finite volume (FV) and the Eulerian spectral transform dynamical cores with varying resolutions. Daily simulations were analyzed and three different types of precipitation features were identified by the classification tree algorithm. The statistical characteristics of these features (i.e., maximum value, mean value, and variance) were calculated to quantify the difference between the dynamical cores and changing resolutions. Even with the simple and smooth topography in the idealized setups, complexity in the precipitation fields simulated by the models develops quickly. The classification tree algorithm using objective thresholding successfully detected different types of precipitation features even as the complexity of the precipitation field increased. The results show that the complexity and the bias introduced in small-scale phenomena due to the spectral transform method of CAM Eulerian spectral dynamical core is prominent, and is an important reason for its dissimilarity from the FV dynamical core. The resolvable scales, both in horizontal and vertical dimensions, have significant effect on the simulation of precipitation. The results of this study also suggest that an efficient and informative study about the biases produced by GCMs should involve daily (or even hourly) output (rather than monthly mean) analysis over local scales.
Li, Zhao-Liang
2018-01-01
Few studies have examined hyperspectral remote-sensing image classification with type-II fuzzy sets. This paper addresses image classification based on a hyperspectral remote-sensing technique using an improved interval type-II fuzzy c-means (IT2FCM*) approach. In contrast to other traditional fuzzy c-means-based approaches, the IT2FCM* algorithm considers the ranking of interval numbers and the spectral uncertainty. The classification results for a hyperspectral dataset using the FCM, IT2FCM, and the proposed improved IT2FCM* algorithms show that the IT2FCM* method gives the best performance in terms of clustering accuracy. To validate and demonstrate the separability of the IT2FCM*, four type-I fuzzy validity indexes are employed, and a comparative analysis of these indexes as applied to the FCM and IT2FCM methods is made. These four indexes are also applied to datasets of different spatial and spectral resolutions to analyze the effects of spectral and spatial scaling factors on the separability of the FCM, IT2FCM, and IT2FCM* methods. The results of these validity indexes on the hyperspectral datasets show that the improved IT2FCM* algorithm has the best values among the three algorithms in general. The results demonstrate that the IT2FCM* exhibits good performance in hyperspectral remote-sensing image classification because of its ability to handle hyperspectral uncertainty. PMID:29373548
Open-path FTIR data reduction algorithm with atmospheric absorption corrections: the NONLIN code
NASA Astrophysics Data System (ADS)
Phillips, William; Russwurm, George M.
1999-02-01
This paper describes the progress made to date in developing, testing, and refining a data reduction computer code, NONLIN, that alleviates many of the difficulties experienced in the analysis of open-path FTIR data. Among the problems that currently affect FTIR open-path data quality are the inability to obtain a true I0 (background) spectrum, spectral interference from atmospheric gases such as water vapor and carbon dioxide, and matching the spectral resolution and shift of the reference spectra to a particular field instrument. The algorithm is based on a non-linear fitting scheme and is therefore not constrained by many of the assumptions required for the application of linear methods such as classical least squares (CLS). As a result, a more realistic mathematical model of the spectral absorption measurement process can be employed in the curve fitting. Applications of the algorithm have proven successful in circumventing open-path data reduction problems. However, recent studies by one of the authors of the temperature and pressure effects on atmospheric absorption indicate that there exist temperature and water partial-pressure effects that should be incorporated into the NONLIN algorithm for accurate quantification of gas concentrations. This paper investigates the sources of these phenomena. As a result of this study, a partial-pressure correction has been employed in the NONLIN computer code. Two typical field spectra are examined to determine what effect the partial-pressure correction has on gas quantification.
A Spectral Algorithm for Solving the Relativistic Vlasov-Maxwell Equations
NASA Technical Reports Server (NTRS)
Shebalin, John V.
2001-01-01
A spectral method algorithm is developed for the numerical solution of the full six-dimensional Vlasov-Maxwell system of equations. Here, the focus is on the electron distribution function, with positive ions providing a constant background. The algorithm consists of a Jacobi polynomial-spherical harmonic formulation in velocity space and a trigonometric formulation in position space. A transform procedure is used to evaluate nonlinear terms. The algorithm is suitable for performing moderate resolution simulations on currently available supercomputers for both scientific and engineering applications.
Cloud classification from satellite data using a fuzzy sets algorithm: A polar example
NASA Technical Reports Server (NTRS)
Key, J. R.; Maslanik, J. A.; Barry, R. G.
1988-01-01
Where spatial boundaries between phenomena are diffuse, classification methods which construct mutually exclusive clusters seem inappropriate. The Fuzzy c-means (FCM) algorithm assigns each observation to all clusters, with membership values as a function of distance to the cluster center. The FCM algorithm is applied to AVHRR data for the purpose of classifying polar clouds and surfaces. Careful analysis of the fuzzy sets can provide information on which spectral channels are best suited to the classification of particular features, and can help determine likely areas of misclassification. General agreement in the resulting classes and cloud fraction was found between the FCM algorithm, a manual classification, and an unsupervised maximum likelihood classifier.
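The FCM update equations are short enough to sketch in full, assuming numpy: alternate between computing fuzzy cluster centers from membership-weighted means and recomputing memberships from inverse distances. The fuzzifier m = 2 and the toy two-channel data are illustrative, not the AVHRR configuration.

```python
import numpy as np

def fuzzy_cmeans(X, c=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Fuzzy c-means: every observation gets a membership in all c clusters,
    inversely related to its distance to each cluster center."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) + 1e-12
        # u_ik = d_ik^(-2/(m-1)) / sum_j d_ij^(-2/(m-1))
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

rng = np.random.default_rng(12)
# Stand-in two-channel observations from three diffuse classes.
X = np.vstack([rng.normal(loc, 0.4, size=(200, 2))
               for loc in ([0, 0], [3, 0], [1.5, 2.5])])
centers, memberships = fuzzy_cmeans(X, c=3)
```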
An efficient randomized algorithm for contact-based NMR backbone resonance assignment.
Kamisetty, Hetunandan; Bailey-Kellogg, Chris; Pandurangan, Gopal
2006-01-15
Backbone resonance assignment is a critical bottleneck in studies of protein structure, dynamics and interactions by nuclear magnetic resonance (NMR) spectroscopy. A minimalist approach to assignment, which we call 'contact-based', seeks to dramatically reduce experimental time and expense by replacing the standard suite of through-bond experiments with the through-space (nuclear Overhauser enhancement spectroscopy, NOESY) experiment. In the contact-based approach, spectral data are represented in a graph with vertices for putative residues (of unknown relation to the primary sequence) and edges for hypothesized NOESY interactions, such that observed spectral peaks could be explained if the residues were 'close enough'. Due to experimental ambiguity, several incorrect edges can be hypothesized for each spectral peak. An assignment is derived by identifying consistent patterns of edges (e.g. for alpha-helices and beta-sheets) within a graph and by mapping the vertices to the primary sequence. The key algorithmic challenge is to be able to uncover these patterns even when they are obscured by significant noise. This paper develops, analyzes and applies a novel algorithm for the identification of polytopes representing consistent patterns of edges in a corrupted NOESY graph. Our randomized algorithm aggregates simplices into polytopes and fixes inconsistencies with simple local modifications, called rotations, that maintain most of the structure already uncovered. In characterizing the effects of experimental noise, we employ an NMR-specific random graph model in proving that our algorithm gives optimal performance in expected polynomial time, even when the input graph is significantly corrupted. We confirm this analysis in simulation studies with graphs corrupted by up to 500% noise. Finally, we demonstrate the practical application of the algorithm on several experimental beta-sheet datasets. Our approach is able to eliminate a large majority of noise edges and to uncover large consistent sets of interactions. Our algorithm has been implemented in platform-independent Python code. The software can be freely obtained for academic use by request from the authors.
Spectral imaging: principles and applications.
Garini, Yuval; Young, Ian T; McNamara, George
2006-08-01
Spectral imaging extends the capabilities of biological and clinical studies to simultaneously study multiple features, such as organelles and proteins, qualitatively and quantitatively. It combines two well-known scientific methodologies, spectroscopy and imaging, to provide a new and advantageous tool. The need to measure the spectrum at each point of the image requires combining dispersive optics with the more common imaging equipment, and introduces constraints as well. The principles of spectral imaging and a few representative applications are described. Spectral image analysis is necessary because the complex data structure cannot be analyzed visually. A few of the algorithms are discussed, with emphasis on their usage in different experimental modes (fluorescence and bright field). Finally, spectral imaging, like any method, should be evaluated in light of its advantages for specific applications, a selection of which is described. Spectral imaging is a relatively new technique and its full potential is yet to be exploited; nevertheless, several applications have already shown its potential. (c) 2006 International Society for Analytical Cytology.
Algorithms for Solvents and Spectral Factors of Matrix Polynomials
1981-01-01
Shieh, Leang S.; Tsay, Yih T.; Coleman, Norman P.
A generalized Newton method, based on the contracted gradient of a matrix polynomial, is derived for solving the right (left) solvents and spectral factors of matrix polynomials. Two methods of selecting initial estimates for rapid convergence of the newly developed numerical method are proposed. Also, new algorithms for solving complete sets of the right...
NASA Astrophysics Data System (ADS)
Zhong, Jiaqi; Zeng, Cheng; Yuan, Yupeng; Zhang, Yuzhe; Zhang, Ye
2018-04-01
The aim of this paper is to present an explicit numerical algorithm based on an improved spectral Galerkin method for solving the unsteady diffusion-convection-reaction equation. The principal characteristic of this approach is that it gives the eigenvalues and eigenvectors explicitly, based on a time-space separation method and an analysis of the boundary conditions. With the help of Fourier series and Galerkin truncation, we obtain the finite-dimensional ordinary differential equations, which facilitate system analysis and controller design. The numerical solutions are demonstrated via two examples and compared with the finite element method. It is shown that the proposed method is effective.
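For the constant-coefficient periodic case, the spectral-Galerkin idea reduces to an explicitly diagonal system, which the following numpy sketch illustrates; the paper's method additionally handles the boundary-condition analysis that this periodic toy problem sidesteps, and the coefficients here are illustrative.

```python
import numpy as np

# Fourier-Galerkin solution of the periodic diffusion-convection-reaction
# equation u_t = D u_xx - c u_x - r u on [0, 2*pi): in the Fourier basis the
# operator is diagonal, so each mode evolves independently with an exact
# exponential -- the eigenvalues are available in closed form.
D, c, r = 0.1, 1.0, 0.05
N = 128
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)             # integer wavenumbers
lam = -D * k**2 - 1j * c * k - r             # eigenvalue of each Fourier mode

u0 = np.exp(np.sin(x))                        # smooth initial condition
u_hat0 = np.fft.fft(u0)

def solve_at(t):
    """Exact (in time) Galerkin-truncated solution at time t."""
    return np.real(np.fft.ifft(u_hat0 * np.exp(lam * t)))

u_t1 = solve_at(1.0)                          # solution at t = 1
```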
Combing Visible and Infrared Spectral Tests for Dust Identification
NASA Technical Reports Server (NTRS)
Zhou, Yaping; Levy, Robert; Kleidman, Richard; Remer, Lorraine; Mattoo, Shana
2016-01-01
The MODIS Dark Target aerosol algorithm over ocean (DT-O) uses spectral reflectance at visible, near-IR, and SWIR wavelengths to determine aerosol optical depth (AOD) and Angstrom exponent (AE). Even though DT-O does have "dust-like" models to choose from, dust is not identified a priori before inversion. The "dust-like" models are not true dust models, as they are spherical and do not absorb enough at short wavelengths, so retrieved AOD and AE for dusty regions tend to be biased. The inference of "dust" is based on post-processing criteria for AOD and AE applied by users. Dust aerosol has known spectral signatures in the near-UV (deep blue), visible, and thermal infrared (TIR) wavelength regions, and multiple dust detection algorithms with varying detection capabilities have been developed over the years. Here, we test a few of these dust detection algorithms to determine whether they can usefully inform the choices made by the DT-O algorithm. We evaluate the following methods: the multichannel imager (MCI) algorithm, which uses spectral threshold tests in the 0.47, 0.64, 0.86, 1.38, 2.26, 3.9, 11.0, and 12.0 micrometer channels together with a spatial uniformity test [Zhao et al., 2010], and the NOAA dust aerosol index (DAI), which uses spectral contrast in the blue channels (412 nm and 440 nm) [Ciren and Kundragunta, 2014]. The MCI tests are already included within the "Wisconsin" (MOD35) cloud mask algorithm.
Detecting and visualizing weak signatures in hyperspectral data
NASA Astrophysics Data System (ADS)
MacPherson, Duncan James
This thesis evaluates existing techniques for detecting weak spectral signatures in remotely sensed hyperspectral data. Algorithms are presented that successfully detect hard-to-find 'mystery' signatures in unknown, cluttered backgrounds; the term 'mystery' describes a scenario where the spectral target and background end-members are unknown. Sub-pixel analysis and background suppression are used to find deeply embedded signatures, which can be less than 10% of the total signal strength. Existing 'mystery target' detection algorithms are derived and compared, and several techniques are shown to be superior both visually and quantitatively. Detection performance is evaluated using confidence metrics that are developed here, and a multiple-algorithm approach is shown to improve detection confidence significantly. Although the research focuses on remote sensing applications, the algorithms presented can be applied to a wide variety of fields such as medicine, law enforcement, manufacturing, earth science, food production, and astrophysics. The algorithms are general and can be applied to both the reflective and emissive parts of the electromagnetic spectrum. The application scope is a broad one, and the final results open new opportunities for many specific applications, including land mine detection, pollution and hazardous waste detection, crop abundance calculations, volcanic activity monitoring, detecting diseases in food, automobile or airplane target recognition, cancer detection, mining operations, and extracting galactic gas emissions.
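One of the standard detectors evaluated in this setting is the background-whitened matched filter, sketched below with numpy on a synthetic cube containing one embedded target; the regularization constant and data are illustrative, and the thesis compares several such detectors rather than this one alone.

```python
import numpy as np

def matched_filter_scores(cube, target):
    """Background-whitened matched filter: score each pixel for the presence
    of a (possibly sub-pixel) target spectrum against the estimated
    background mean and covariance."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    mu = X.mean(axis=0)
    C = np.cov(X, rowvar=False) + 1e-6 * np.eye(b)   # regularized covariance
    Cinv = np.linalg.inv(C)
    d = target - mu
    filt = Cinv @ d / (d @ Cinv @ d)                 # normalized MF vector
    return ((X - mu) @ filt).reshape(h, w)           # ~abundance estimates

rng = np.random.default_rng(13)
cube = 0.25 * rng.normal(size=(50, 50, 30))          # background clutter
target = rng.random(30)
cube[25, 25] += 0.6 * target                         # one embedded target pixel
scores = matched_filter_scores(cube, target)
print(np.unravel_index(scores.argmax(), scores.shape))   # -> (25, 25)
```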
NASA Technical Reports Server (NTRS)
Li, Can; Krotkov, Nickolay A.; Carn, Simon; Zhang, Yan; Spurr, Robert J. D.; Joiner, Joanna
2017-01-01
Since the fall of 2004, the Ozone Monitoring Instrument (OMI) has been providing global monitoring of volcanic SO2 emissions, helping to understand their climate impacts and to mitigate aviation hazards. Here we introduce a new-generation OMI volcanic SO2 dataset based on a principal component analysis (PCA) retrieval technique. To reduce retrieval noise and artifacts as seen in the current operational linear fit (LF) algorithm, the new algorithm, OMSO2VOLCANO, uses characteristic features extracted directly from OMI radiances in the spectral fitting, thereby helping to minimize interferences from various geophysical processes (e.g., O3 absorption) and measurement details (e.g., wavelength shift). To solve the problem of low bias for large SO2 total columns in the LF product, the OMSO2VOLCANO algorithm employs a table lookup approach to estimate SO2 Jacobians (i.e., the instrument sensitivity to a perturbation in the SO2 column amount) and iteratively adjusts the spectral fitting window to exclude shorter wavelengths where the SO2 absorption signals are saturated. To first order, the effects of clouds and aerosols are accounted for using a simple Lambertian equivalent reflectivity approach. As with the LF algorithm, OMSO2VOLCANO provides total column retrievals based on a set of predefined SO2 profiles from the lower troposphere to the lower stratosphere, including a new profile peaked at 13 km for plumes in the upper troposphere. Examples given in this study indicate that the new dataset shows significant improvement over the LF product, with at least 50% reduction in retrieval noise over the remote Pacific. For large eruptions such as Kasatochi in 2008 (approximately 1700 kt total SO2) and Sierra Negra in 2005 (greater than 1100 DU maximum SO2), OMSO2VOLCANO generally agrees well with other algorithms that also utilize the full spectral content of satellite measurements, while the LF algorithm tends to underestimate SO2. We also demonstrate that, despite the coarser spatial and spectral resolution of the Suomi National Polar-orbiting Partnership (Suomi-NPP) Ozone Mapping and Profiler Suite (OMPS) instrument, application of the new PCA algorithm to OMPS data produces highly consistent retrievals between OMI and OMPS. The new PCA algorithm is therefore capable of continuing the volcanic SO2 data record well into the future using current and future hyperspectral UV satellite instruments.
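A heavily simplified sketch of the PCA fitting idea (not the operational OMSO2VOLCANO code): characteristic spectral features extracted from assumed SO2-free spectra absorb background variability, and the measurement is then fit with those features plus an SO2 Jacobian. The stand-in spectra, Jacobian shape, and component count are all assumptions.

```python
import numpy as np

def pca_so2_fit(radiances_bg, measured, so2_jacobian, n_pc=8):
    """PCA-style retrieval sketch: fit the measured spectrum as a linear
    combination of leading background features plus the SO2 Jacobian."""
    mean = radiances_bg.mean(axis=0)
    _, _, Vt = np.linalg.svd(radiances_bg - mean, full_matrices=False)
    pcs = Vt[:n_pc]                                    # leading spectral features
    basis = np.vstack([pcs, so2_jacobian]).T           # (n_wavelengths, n_pc + 1)
    coeffs, *_ = np.linalg.lstsq(basis, measured - mean, rcond=None)
    return coeffs[-1]                                  # SO2 column coefficient

rng = np.random.default_rng(14)
n_wl = 80
bg = rng.normal(size=(1000, n_wl)).cumsum(axis=1) * 0.01   # smooth stand-in spectra
jac = np.exp(-0.5 * ((np.arange(n_wl) - 30) / 5.0) ** 2)   # hypothetical SO2 Jacobian
truth = 2.5
meas = bg[0] + truth * jac
print(pca_so2_fit(bg[1:], meas, jac))                      # approximately 2.5
```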
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Dean J.; Harding, Lee T.
Isotope identification algorithms contained in the Gamma Detector Response and Analysis Software (GADRAS) can be used for real-time stationary measurement and search applications on platforms running Linux or Android operating systems. Since the background radiation can vary considerably due to variations in naturally occurring radioactive material (NORM), spectral algorithms can be substantially more sensitive to threat materials than search algorithms based strictly on count rate. Specific isotopes of interest can be designated for the search algorithm, which permits suppression of alarms for non-threatening sources, such as medical radionuclides. The same isotope identification algorithms that are used for search applications can also be used to process static measurements. The isotope identification algorithms follow the same protocols as those used by the Windows version of GADRAS, so files created under the Windows interface can be copied directly to processors on fielded sensors. The analysis algorithms contain provisions for gain adjustment and energy linearization, which enables direct processing of spectra as they are recorded by multichannel analyzers. Gain compensation is performed by utilizing photopeaks in background spectra. Incorporating this energy calibration task into the analysis algorithm also eliminates one of the more difficult challenges associated with the development of radiation detection equipment.
NASA Astrophysics Data System (ADS)
Chen, Gang; Metz, Margaret R.; Rizzo, David M.; Dillon, Whalen W.; Meentemeyer, Ross K.
2015-04-01
Forest ecosystems are subject to a variety of disturbances with increasing intensities and frequencies, which may permanently change the trajectories of forest recovery and disrupt the ecosystem services provided by trees. Fire and invasive species, especially exotic disease-causing pathogens and insects, are examples of disturbances that together could pose major threats to forest health. This study examines the impacts of fire and exotic disease (sudden oak death) on forests, with an emphasis on the assessment of post-fire burn severity in a forest where trees have experienced three stages of disease progression pre-fire: early-stage (trees retaining dried foliage and fine twigs), middle-stage (trees losing fine crown fuels), and late-stage (trees falling down). The research was conducted by applying Geographic Object-Based Image Analysis (GEOBIA) to MASTER airborne images that were acquired immediately following the fire for rapid assessment and contained both high-spatial (4 m) and high-spectral (50 bands) resolutions. Although GEOBIA has gradually become a standard tool for analyzing high-spatial resolution imagery, high-spectral resolution data (dozens to hundreds of bands) can dramatically reduce computation efficiency in the process of segmentation and object-based variable extraction, leading to complicated variable selection for succeeding modeling. Hence, we also assessed two widely used band reduction algorithms, PCA (principal component analysis) and MNF (minimum noise fraction), for the delineation of image objects and the subsequent performance of burn severity models using either PCA or MNF derived variables. To increase computation efficiency, only the top 5 PCA and MNF and top 10 PCA and MNF components were evaluated, which accounted for 10% and 20% of the total number of the original 50 spectral bands, respectively. Results show that, if no band reduction was applied, the models developed for the three stages of disease progression had relatively similar performance, where both spectral responses and texture contributed to burn assessments. However, the application of PCA and MNF introduced much greater variation among models across the three stages. For the early-stage disease progression, neither band reduction algorithm improved nor retained the accuracy of burn severity modeling (except for the use of 10 MNF components). Compared to the no-band-reduction scenario, band reduction led to a greater level of overestimation of low-degree burns and underestimation of medium-degree burns, suggesting that the spectral variation removed by PCA and MNF was vital for distinguishing between the spectral reflectance from disease-induced dried crowns (still retaining high structural complexity) and fire ash. For the middle-stage, both algorithms improved the model R2 values by 2-37%, while the late-stage models performed comparably to or better than those using the original 50 spectral bands. This could be explained by the loss of tree crowns enabling better signal penetration, thus leading to reduced spectral variation from canopies. Hence, spectral bands containing a high degree of random noise were correctly removed by the band reduction algorithms. Compared to the middle-stage, the late-stage forest stands were covered by large piles of fallen trees and branches, resulting in higher variability of MASTER imagery.
The ability of band reduction to improve the model performance for these late-stage forest stands was reduced, because the valuable spectral variation representing the actual late-stage forest status was partially removed by both algorithms as noise. Our results indicate that PCA and MNF are promising for balancing computation efficiency and the performance of burn severity models in forest stands subject to the middle and late stages of sudden oak death disease progression. Compared to PCA, MNF dramatically reduced image spectral variation, generating larger image objects with less complexity of object shapes. However, PCA-based models delivered superior performance in most of the evaluated cases, suggesting that some key spectral variability contributing to the accuracy of burn severity models in diseased forests may have been removed together with true spectral noise through MNF transformations.
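A minimal sketch of the PCA band-reduction step discussed above, keeping the top components of a 50-band cube; the function name and the random stand-in cube are illustrative.

```python
import numpy as np

def pca_band_reduction(image, n_components=5):
    """Reduce a (rows, cols, bands) image to its leading principal components."""
    rows, cols, bands = image.shape
    X = image.reshape(-1, bands).astype(np.float64)
    X -= X.mean(axis=0)
    # Eigendecomposition of the band covariance matrix
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]   # top components by variance
    return (X @ eigvecs[:, order]).reshape(rows, cols, n_components)

# e.g., reduce a simulated 50-band MASTER-like scene to 5 components
cube = np.random.rand(64, 64, 50)
reduced = pca_band_reduction(cube, n_components=5)
```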
Hyperspectral Image Denoising Using a Nonlocal Spectral Spatial Principal Component Analysis
NASA Astrophysics Data System (ADS)
Li, D.; Xu, L.; Peng, J.; Ma, J.
2018-04-01
Hyperspectral image (HSI) denoising is a critical research area in image processing due to its importance in improving the quality of HSIs; noise has a negative impact on object detection, classification, and other downstream tasks. In this paper, we develop a noise reduction method based on principal component analysis (PCA) for hyperspectral imagery, which relies on the assumption that the noise can be removed by selecting the leading principal components. The main contribution of this paper is to introduce the spectral-spatial structure and nonlocal similarity of HSIs into the PCA denoising model. PCA with spectral-spatial structure can exploit the spectral and spatial correlation of an HSI by using 3D blocks instead of 2D patches. Nonlocal similarity refers to the similarity between the reference pixel and other pixels in a nonlocal area, where the Mahalanobis distance is used to estimate the spatial-spectral similarity by calculating distances between 3D blocks. The proposed method is tested on both simulated and real hyperspectral images; the results demonstrate that it is superior to several other popular methods in HSI denoising.
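A bare-bones sketch of PCA denoising under the paper's leading-components assumption; the nonlocal 3D-block grouping and Mahalanobis-similarity steps of the proposed method are omitted here for brevity.

```python
import numpy as np

def pca_denoise(hsi, n_keep=10):
    """Denoise a (rows, cols, bands) cube by keeping only leading PCs."""
    rows, cols, bands = hsi.shape
    X = hsi.reshape(-1, bands).astype(np.float64)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    s[n_keep:] = 0.0                      # assumption: noise lives in trailing PCs
    return (U @ np.diag(s) @ Vt + mean).reshape(rows, cols, bands)
```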
Detection of illicit substances in fingerprints by infrared spectral imaging.
Ng, Ping Hei Ronnie; Walker, Sarah; Tahtouh, Mark; Reedy, Brian
2009-08-01
FTIR and Raman spectral imaging can be used to simultaneously image a latent fingerprint and detect exogenous substances deposited within it. These substances might include drugs of abuse or traces of explosives or gunshot residue. In this work, spectral searching algorithms were tested for their efficacy in finding targeted substances deposited within fingerprints. "Reverse" library searching, where a large number of possibly poor-quality spectra from a spectral image are searched against a small number of high-quality reference spectra, poses problems for common search algorithms as they are usually implemented. Out of a range of algorithms which included conventional Euclidean distance searching, the spectral angle mapper (SAM) and correlation algorithms gave the best results when used with second-derivative image and reference spectra. All methods tested gave poorer performances with first derivative and undifferentiated spectra. In a search against a caffeine reference, the SAM and correlation methods were able to correctly rank a set of 40 confirmed but poor-quality caffeine spectra at the top of a dataset which also contained 4,096 spectra from an image of an uncontaminated latent fingerprint. These methods also successfully and individually detected aspirin, diazepam and caffeine that had been deposited together in another fingerprint, and they did not indicate any of these substances as a match in a search for another substance which was known not to be present. The SAM was used to successfully locate explosive components in fingerprints deposited on silicon windows. The potential of other spectral searching algorithms used in the field of remote sensing is considered, and the applicability of the methods tested in this work to other modes of spectral imaging is discussed.
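The spectral angle mapper used above is straightforward to sketch; applying it to second-derivative spectra, as the study found most effective, amounts to differentiating both the image and reference spectra first. This is a minimal illustration, not the authors' code.

```python
import numpy as np

def spectral_angle(test, reference):
    """Spectral angle mapper (SAM): angle (radians) between two spectra."""
    t, r = np.asarray(test, float), np.asarray(reference, float)
    cos = np.dot(t, r) / (np.linalg.norm(t) * np.linalg.norm(r))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def rank_pixels_by_sam(image_spectra, reference, deriv_order=2):
    """Rank image spectra against a reference using second-derivative SAM."""
    d_ref = np.diff(reference, n=deriv_order)
    angles = [spectral_angle(np.diff(s, n=deriv_order), d_ref) for s in image_spectra]
    return np.argsort(angles)            # best (smallest-angle) matches first
```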
Anatomy-Based Algorithms for Detecting Oral Cancer Using Reflectance and Fluorescence Spectroscopy
McGee, Sasha; Mardirossian, Vartan; Elackattu, Alphi; Mirkovic, Jelena; Pistey, Robert; Gallagher, George; Kabani, Sadru; Yu, Chung-Chieh; Wang, Zimmern; Badizadegan, Kamran; Grillone, Gregory; Feld, Michael S.
2010-01-01
Objectives We used reflectance and fluorescence spectroscopy to noninvasively and quantitatively distinguish benign from dysplastic/malignant oral lesions. We designed diagnostic algorithms to account for differences in the spectral properties among anatomic sites (gingiva, buccal mucosa, etc). Methods In vivo reflectance and fluorescence spectra were collected from 71 patients with oral lesions. The tissue was then biopsied and the specimen evaluated by histopathology. Quantitative parameters related to tissue morphology and biochemistry were extracted from the spectra. Diagnostic algorithms specific for combinations of sites with similar spectral properties were developed. Results Discrimination of benign from dysplastic/malignant lesions was most successful when algorithms were designed for individual sites (area under the receiver operator characteristic curve [ROC-AUC], 0.75 for the lateral surface of the tongue) and was least accurate when all sites were combined (ROC-AUC, 0.60). The combination of sites with similar spectral properties (floor of mouth and lateral surface of the tongue) yielded an ROC-AUC of 0.71. Conclusions Accurate spectroscopic detection of oral disease must account for spectral variations among anatomic sites. Anatomy-based algorithms for single sites or combinations of sites demonstrated good diagnostic performance in distinguishing benign lesions from dysplastic/malignant lesions and consistently performed better than algorithms developed for all sites combined. PMID:19999369
NASA Astrophysics Data System (ADS)
Piretzidis, Dimitrios; Sideris, Michael G.
2017-09-01
Filtering and signal processing techniques have been widely used in the processing of satellite gravity observations to reduce measurement noise and correlation errors. The parameters and types of filters used depend on the statistical and spectral properties of the signal under investigation. Filtering is usually applied in a non-real-time environment. The present work focuses on the implementation of an adaptive filtering technique to process satellite gravity gradiometry data for gravity field modeling. Adaptive filtering algorithms are commonly used in communication systems, noise and echo cancellation, and biomedical applications. Two independent studies have been performed to introduce adaptive signal processing techniques and test the performance of the least mean-squared (LMS) adaptive algorithm for filtering satellite measurements obtained by the gravity field and steady-state ocean circulation explorer (GOCE) mission. In the first study, a Monte Carlo simulation is performed in order to gain insights about the implementation of the LMS algorithm on data with spectral behavior close to that of real GOCE data. In the second study, the LMS algorithm is implemented on real GOCE data. Experiments are also performed to determine suitable filtering parameters. Only the four accurate components of the full GOCE gravity gradient tensor of the disturbing potential are used. The characteristics of the filtered gravity gradients are examined in the time and spectral domain. The obtained filtered GOCE gravity gradients show an agreement of 63-84 mEötvös (depending on the gravity gradient component), in terms of RMS error, when compared to the gravity gradients derived from the EGM2008 geopotential model. Spectral-domain analysis of the filtered gradients shows that the adaptive filters slightly suppress frequencies in the bandwidth of approximately 10-30 mHz. The limitations of the adaptive LMS algorithm are also discussed. The tested filtering algorithm can be connected to and employed in the first computational steps of the space-wise approach, where a time-wise Wiener filter is applied at the first stage of GOCE gravity gradient filtering. The results of this work can be extended to using other adaptive filtering algorithms, such as the recursive least-squares and recursive least-squares lattice filters.
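A minimal LMS sketch in the spirit of the filtering described above; the delayed-copy reference input (a common adaptive-line-enhancement choice) and the step size are illustrative assumptions, and stability requires scaling mu to the input power.

```python
import numpy as np

def lms_filter(desired, reference, n_taps=32, mu=1e-3):
    """Least mean-squares (LMS) adaptive filter, a schematic implementation.

    desired:   the observed series to be cleaned (e.g., a gravity gradient).
    reference: a correlated input; often a delayed copy of the observation.
    Returns the filter output and the residual error signal.
    """
    w = np.zeros(n_taps)
    out = np.zeros_like(desired, dtype=float)
    for n in range(n_taps, len(desired)):
        x = reference[n - n_taps:n][::-1]     # current tap-input vector
        out[n] = w @ x
        e = desired[n] - out[n]
        w += 2.0 * mu * e * x                 # LMS weight update
    return out, desired - out
```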
Mirapeix, J; Cobo, A; González, D A; López-Higuera, J M
2007-02-19
A new plasma spectroscopy analysis technique based on the generation of synthetic spectra by means of optimization processes is presented in this paper. The technique has been developed for its application in arc-welding quality assurance. The new approach has been checked through several experimental tests, yielding results in reasonably good agreement with the ones offered by the traditional spectroscopic analysis technique.
Broadband Gerchberg-Saxton algorithm for freeform diffractive spectral filter design.
Vorndran, Shelby; Russo, Juan M; Wu, Yuechen; Pelaez, Silvana Ayala; Kostuk, Raymond K
2015-11-30
A multi-wavelength expansion of the Gerchberg-Saxton (GS) algorithm is developed to design and optimize a surface relief Diffractive Optical Element (DOE). The DOE simultaneously diffracts distinct wavelength bands into separate target regions. A description of the algorithm is provided, and parameters that affect filter performance are examined. Performance is based on the spectral power collected within specified regions on a receiver plane. The modified GS algorithm is used to design spectrum splitting optics for CdSe and Si photovoltaic (PV) cells. The DOE has an average optical efficiency of 87.5% over the spectral bands of interest (400-710 nm and 710-1100 nm). Simulated PV conversion efficiency is 37.7%, which is 29.3% higher than the efficiency of the better performing PV cell without spectrum splitting optics.
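One plausible reading of a multi-wavelength GS iteration for a shared surface relief is sketched below; the height-averaging update, refractive index, and initialization are assumptions, not the authors' exact procedure.

```python
import numpy as np

def multiband_gs(targets, n_iters=200, n_refr=1.5):
    """Broadband Gerchberg-Saxton sketch for a surface-relief DOE.

    targets: dict mapping wavelength -> desired far-field amplitude (2D array);
             each band's energy is steered to its own target region.
    Returns a surface height map shared by all wavelengths.
    """
    wavelengths = list(targets)
    shape = next(iter(targets.values())).shape
    height = np.random.rand(*shape) * min(wavelengths)   # initial relief
    for _ in range(n_iters):
        h_updates = []
        for lam in wavelengths:
            phase = 2 * np.pi * (n_refr - 1) * height / lam
            far = np.fft.fft2(np.exp(1j * phase))
            # Impose this band's target amplitude, keep the computed phase
            far = targets[lam] * np.exp(1j * np.angle(far))
            back = np.fft.ifft2(far)
            # Convert the recovered phase back into an equivalent height
            h_updates.append(np.angle(back) * lam / (2 * np.pi * (n_refr - 1)))
        height = np.mean(h_updates, axis=0)              # consensus across bands
    return np.mod(height, max(wavelengths) / (n_refr - 1))  # wrap to fabricable depth
```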
NASA Astrophysics Data System (ADS)
Toadere, Florin
2017-12-01
A spectral image processing algorithm is presented that allows the scene to be illuminated with different illuminants and the scene's reflectance to be reconstructed. A color checker spectral image and the CIE A (warm light, 2700 K), D65 (cold light, 6500 K) and Cree TW Series LED T8 (4000 K) illuminants are employed for scene illumination. The illuminants used in the simulations have different spectra and, as a result of their illumination, the colors of the scene change. The influence of the illuminants on the reconstruction of the scene's reflectance is estimated. Demonstrative images and reflectance curves illustrating the operation of the algorithm are presented.
A SPECTRAL GRAPH APPROACH TO DISCOVERING GENETIC ANCESTRY
Lee, Ann B.; Luca, Diana; Roeder, Kathryn
2010-01-01
Mapping human genetic variation is fundamentally interesting in fields such as anthropology and forensic inference. At the same time, patterns of genetic diversity confound efforts to determine the genetic basis of complex disease. Due to technological advances, it is now possible to measure hundreds of thousands of genetic variants per individual across the genome. Principal component analysis (PCA) is routinely used to summarize the genetic similarity between subjects. The eigenvectors are interpreted as dimensions of ancestry. We build on this idea using a spectral graph approach. In the process we draw on connections between multidimensional scaling and spectral kernel methods. Our approach, based on a spectral embedding derived from the normalized Laplacian of a graph, can produce more meaningful delineation of ancestry than by using PCA. The method is stable to outliers and can more easily incorporate different similarity measures of genetic data than PCA. We illustrate a new algorithm for genetic clustering and association analysis on a large, genetically heterogeneous sample. PMID:20689656
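The core embedding step can be sketched in a few lines; this assumes a precomputed symmetric similarity kernel and is only a schematic of the approach, not the authors' full clustering and association algorithm.

```python
import numpy as np

def laplacian_embedding(similarity, n_dims=2):
    """Spectral embedding from the normalized graph Laplacian (sketch).

    similarity: symmetric (n, n) kernel of genetic similarity between subjects.
    Returns an (n, n_dims) embedding analogous to 'dimensions of ancestry'.
    """
    d = similarity.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    L = np.eye(len(d)) - d_inv_sqrt[:, None] * similarity * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(L)
    # Skip the trivial constant eigenvector; keep the next n_dims smallest
    return eigvecs[:, 1:n_dims + 1]
```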
NASA Astrophysics Data System (ADS)
Yang, Tao; Peng, Jing-xiao; Ho, Ho-pui; Song, Chun-yuan; Huang, Xiao-li; Zhu, Yong-yuan; Li, Xing-ao; Huang, Wei
2018-01-01
By using a preaggregated silver nanoparticle monolayer film and an infrared sensor card, we demonstrate a miniature spectrometer design that covers a broad wavelength range from visible to infrared with high spectral resolution. The spectral contents of an incident probe beam are reconstructed by solving a matrix equation with a smoothing simulated annealing algorithm. The proposed spectrometer offers significant advantages over current instruments that are based on Fourier transform and grating dispersion, in terms of size, resolution, spectral range, cost and reliability. The spectrometer contains three components, which are used for dispersion, frequency conversion and detection. Disordered silver nanoparticles in the dispersion component reduce the fabrication complexity. An infrared sensor card in the conversion component broadens the operational spectral range of the system into the visible and infrared bands. Since the CCD used in the detection component provides a very large number of intensity measurements, one can reconstruct the final spectrum with high resolution. As an additional feature of our algorithm for solving the matrix equation, which makes it suitable for reconstructing both broadband and narrowband signals, we have adopted a smoothing step based on simulated annealing. This step improves the accuracy of the spectral reconstruction.
NASA Astrophysics Data System (ADS)
Yi, Cancan; Lv, Yong; Xiao, Han; Ke, Ke; Yu, Xun
2017-12-01
For the laser-induced breakdown spectroscopy (LIBS) quantitative analysis technique, baseline correction is an essential part of LIBS data preprocessing. As a widely occurring phenomenon, baseline drift is generated by fluctuations in laser energy, inhomogeneity of sample surfaces and background noise, and it has aroused the interest of many researchers. Most of the prevalent algorithms need to preset some key parameters, such as a suitable spline function and the fitting order, and thus lack adaptability. Based on the characteristics of LIBS, such as the sparsity of spectral peaks and the low-pass filtered feature of the baseline, a novel baseline correction and spectral data denoising method is studied in this paper. The improved technique utilizes a convex optimization scheme to form a non-parametric baseline correction model. Meanwhile, an asymmetric penalty function is used to enhance the signal-to-noise ratio (SNR) of the LIBS signal and improve reconstruction precision. Furthermore, an efficient iterative algorithm is applied to the optimization process, so as to ensure the convergence of the algorithm. To validate the proposed method, the concentration analysis of Chromium (Cr), Manganese (Mn) and Nickel (Ni) contained in 23 certified high alloy steel samples is assessed by using quantitative models with Partial Least Squares (PLS) and Support Vector Machine (SVM). Because it requires no prior knowledge of sample composition and no mathematical hypothesis, the method proposed in this paper has better accuracy in quantitative analysis than other methods and fully reflects its adaptive ability.
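A closely related, well-known formulation is Eilers-style asymmetric least squares, sketched below; it is not the paper's exact convex model, but it shows how an asymmetric penalty separates a smooth baseline from sparse peaks.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def asymmetric_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric least-squares baseline estimate for a 1D spectrum y."""
    n = len(y)
    D = sparse.diags([1, -2, 1], [0, 1, 2], shape=(n - 2, n))  # 2nd difference
    w = np.ones(n)
    for _ in range(n_iter):
        W = sparse.diags(w)
        z = spsolve((W + lam * D.T @ D).tocsc(), w * y)
        # Asymmetric weights: points above the baseline (peaks) are down-weighted
        w = p * (y > z) + (1 - p) * (y <= z)
    return z
```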
Tensor Rank Preserving Discriminant Analysis for Facial Recognition.
Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo
2017-10-12
Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, in traditional facial recognition algorithms, the facial images are reshaped into a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed; the proposed method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared with traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank order information of the intra-class input samples. Experiments on three facial databases are performed here to determine the effectiveness of the proposed TRPDA algorithm.
Q estimation of seismic data using the generalized S-transform
NASA Astrophysics Data System (ADS)
Hao, Yaju; Wen, Xiaotao; Zhang, Bo; He, Zhenhua; Zhang, Rui; Zhang, Jinming
2016-12-01
Quality factor, Q, is a parameter that characterizes the energy dissipation during seismic wave propagation. The reservoir pore is one of the main factors that affect the value of Q. In particular, when the pore space is filled with oil or gas, the rock usually exhibits a relatively low Q value. Such a low Q value has been used as a direct hydrocarbon indicator by many researchers. The conventional Q estimation method based on spectral ratios suffers from the problem of waveform tuning; hence, many researchers have introduced time-frequency analysis techniques to tackle this problem. Unfortunately, the window functions adopted in time-frequency analysis algorithms such as the continuous wavelet transform (CWT) and S-transform (ST) contaminate the amplitude spectra because the seismic signal is multiplied by the window functions during time-frequency decomposition. The basic assumption of the spectral ratio method is that there is a linear relationship between the natural logarithmic spectral ratio and frequency. However, this assumption does not hold if we take the influence of window functions into consideration. In this paper, we first employ a recently developed two-parameter generalized S-transform (GST) to obtain the time-frequency spectra of seismic traces. We then deduce the non-linear relationship between the natural logarithmic spectral ratio and frequency. Finally, we obtain a linear relationship between the natural logarithmic spectral ratio and a newly defined parameter γ by ignoring the negligible second-order term. The gradient of this linear relationship is 1/Q. Here, the parameter γ is a function of frequency and the source wavelet. Numerical examples for VSP and post-stack reflection data confirm that our algorithm is capable of yielding accurate results. The Q-value results estimated from field data acquired in western China show reasonable agreement with the oil-producing well locations.
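For reference, the classical spectral-ratio relation that this work builds on can be written as follows; this is the textbook form, not the paper's exact derivation, with the constant C absorbing frequency-independent factors such as geometrical spreading:

```latex
\ln\!\left[\frac{A(f,\,t_2)}{A(f,\,t_1)}\right]
  = -\,\frac{\pi f\,(t_2 - t_1)}{Q} + C
```

Window functions add frequency-dependent terms to the measured spectra, breaking this linearity in f; the paper's substitute variable γ restores a linear relation whose gradient is 1/Q.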
Speech Enhancement, Gain, and Noise Spectrum Adaptation Using Approximate Bayesian Estimation
Hao, Jiucang; Attias, Hagai; Nagarajan, Srikantan; Lee, Te-Won; Sejnowski, Terrence J.
2010-01-01
This paper presents a new approximate Bayesian estimator for enhancing a noisy speech signal. The speech model is assumed to be a Gaussian mixture model (GMM) in the log-spectral domain. This is in contrast to most current models, which work in the frequency domain. Exact signal estimation is a computationally intractable problem. We derive three approximations to enhance the efficiency of signal estimation. The Gaussian approximation transforms the log-spectral domain GMM into the frequency domain using a minimal Kullback-Leibler (KL) divergence criterion. The frequency-domain Laplace method computes the maximum a posteriori (MAP) estimator for the spectral amplitude. Correspondingly, the log-spectral domain Laplace method computes the MAP estimator for the log-spectral amplitude. Further, the gain and noise spectrum adaptation are implemented using the expectation-maximization (EM) algorithm within the GMM under the Gaussian approximation. The proposed algorithms are evaluated by applying them to enhance speech corrupted by speech-shaped noise (SSN). The experimental results demonstrate that the proposed algorithms offer improved signal-to-noise ratio, lower word recognition error rate, and less spectral distortion. PMID:20428253
Improved Cloud and Snow Screening in MAIAC Aerosol Retrievals Using Spectral and Spatial Analysis
NASA Technical Reports Server (NTRS)
Lyapustin, A.; Wang, Y.; Laszlo, I.; Kokrkin, S.
2012-01-01
An improved cloud/snow screening technique in the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm is described. It is implemented as part of MAIAC aerosol retrievals based on analysis of spectral residuals and spatial variability. Comparisons with AERONET aerosol observations and a large-scale MODIS data analysis show strong suppression of aerosol optical thickness outliers due to unresolved clouds and snow. At the same time, the developed filter does not reduce the aerosol retrieval capability at high 1 km resolution in strongly inhomogeneous environments, such as near the centers of active fires. Despite the significant improvement, optical depth outliers in high spatial resolution data remain a problem to be addressed by application-dependent, specialized filtering techniques.
Algorithm 971: An Implementation of a Randomized Algorithm for Principal Component Analysis
LI, HUAMIN; LINDERMAN, GEORGE C.; SZLAM, ARTHUR; STANTON, KELLY P.; KLUGER, YUVAL; TYGERT, MARK
2017-01-01
Recent years have witnessed intense development of randomized methods for low-rank approximation. These methods target principal component analysis and the calculation of truncated singular value decompositions. The present article presents an essentially black-box, foolproof implementation for Mathworks’ MATLAB, a popular software platform for numerical computation. As illustrated via several tests, the randomized algorithms for low-rank approximation outperform or at least match the classical deterministic techniques (such as Lanczos iterations run to convergence) in basically all respects: accuracy, computational efficiency (both speed and memory usage), ease-of-use, parallelizability, and reliability. However, the classical procedures remain the methods of choice for estimating spectral norms and are far superior for calculating the least singular values and corresponding singular vectors (or singular subspaces). PMID:28983138
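The core of such randomized methods fits in a few lines of Python (a schematic, not the article's MATLAB implementation): random projection, optional power iterations, orthogonalization, and an exact SVD of the small projected matrix.

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, n_power_iter=2):
    """Basic randomized low-rank SVD in the spirit the article describes."""
    m, n = A.shape
    rng = np.random.default_rng(0)
    Y = A @ rng.standard_normal((n, rank + n_oversample))
    for _ in range(n_power_iter):          # power iterations sharpen the spectrum
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                 # orthonormal range approximation
    B = Q.T @ A                            # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]
```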
Towards operational multisensor registration
NASA Technical Reports Server (NTRS)
Rignot, Eric J. M.; Kwok, Ronald; Curlander, John C.
1991-01-01
To use data from a number of different remote sensors in a synergistic manner, a multidimensional analysis of the data is necessary. However, prior to this analysis, processing to correct for the systematic geometric distortion characteristic of each sensor is required. Furthermore, the registration process must be fully automated to handle a large volume of data and high data rates. A conceptual approach towards an operational multisensor registration algorithm is presented. The performance requirements of the algorithm are first formulated given the spatially, temporally, and spectrally varying factors that influence the image characteristics and the science requirements of various applications. Several registration techniques that fit within the structure of this algorithm are also presented. Their performance was evaluated using a multisensor test data set assembled from LANDSAT TM, SEASAT, SIR-B, Thermal Infrared Multispectral Scanner (TIMS), and SPOT sensors.
NASA Astrophysics Data System (ADS)
Padma, S.; Sanjeevi, S.
2014-12-01
This paper proposes a novel hyperspectral matching algorithm by integrating the stochastic Jeffries-Matusita measure (JM) and the deterministic Spectral Angle Mapper (SAM), to accurately map the species and the associated landcover types of the mangroves of the east coast of India using hyperspectral satellite images. The JM-SAM algorithm signifies the combination of a qualitative distance measure (JM) and a quantitative angle measure (SAM). The spectral capabilities of both measures are orthogonally projected using the tangent and sine functions to yield the combined algorithm. The developed JM-SAM algorithm is implemented to discriminate the mangrove species and the landcover classes of the Pichavaram (Tamil Nadu), Muthupet (Tamil Nadu) and Bhitarkanika (Odisha) mangrove forests along the eastern Indian coast using Hyperion image datasets that contain 242 bands. The developed algorithm is extended in a supervised framework for accurate classification of the Hyperion image. The pixel-level matching performance of the developed algorithm is assessed by the Relative Spectral Discriminatory Probability (RSDPB) and Relative Spectral Discriminatory Entropy (RSDE) measures. From the values of RSDPB and RSDE, it is inferred that the hybrid JM-SAM matching measure results in improved discriminability of the mangrove species and the associated landcover types compared with the individual SAM and JM algorithms. This performance is reflected in the classification accuracies of the species and landcover maps of the Pichavaram mangrove ecosystem. Thus, the JM-SAM (TAN) matching algorithm yielded an accuracy better than the SAM and JM measures by an average difference of 13.49% and 7.21%, respectively, followed by JM-SAM (SIN) at 12.06% and 5.78%, respectively. Similarly, in the case of Muthupet, JM-SAM (TAN) yielded an increased accuracy over the SAM and JM measures by an average difference of 12.5% and 9.72%, respectively, followed by JM-SAM (SIN) at 8.34% and 5.55%, respectively. For Bhitarkanika, the combined JM-SAM (TAN) and (SIN) measures improved the performance of individual SAM by (16.1%, 15%) and of JM by (10.3%, 9.2%), respectively.
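A hedged sketch of the hybrid measure follows: the JM term here treats L1-normalized, non-negative spectra as discrete distributions, and the tangent/sine scaling is one plausible reading of the paper's orthogonal projection; the exact published formulation may differ.

```python
import numpy as np

def sam_angle(s1, s2):
    """Spectral angle (radians) between two spectra."""
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def jm_distance(s1, s2):
    """Jeffries-Matusita measure, treating normalized spectra as distributions."""
    p, q = s1 / np.sum(s1), s2 / np.sum(s2)    # requires non-negative spectra
    b = -np.log(np.sum(np.sqrt(p * q)))        # Bhattacharyya distance
    return 2.0 * (1.0 - np.exp(-b))

def jm_sam(s1, s2, projection="tan"):
    """Hybrid measure: stochastic JM scaled by the trigonometric SAM term."""
    angle = sam_angle(s1, s2)
    factor = np.tan(angle) if projection == "tan" else np.sin(angle)
    return jm_distance(s1, s2) * factor
```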
An Introduction to Computational Physics
NASA Astrophysics Data System (ADS)
Pang, Tao
2010-07-01
Preface to first edition; Preface; Acknowledgements; 1. Introduction; 2. Approximation of a function; 3. Numerical calculus; 4. Ordinary differential equations; 5. Numerical methods for matrices; 6. Spectral analysis; 7. Partial differential equations; 8. Molecular dynamics simulations; 9. Modeling continuous systems; 10. Monte Carlo simulations; 11. Genetic algorithm and programming; 12. Numerical renormalization; References; Index.
Focus: a robust workflow for one-dimensional NMR spectral analysis.
Alonso, Arnald; Rodríguez, Miguel A; Vinaixa, Maria; Tortosa, Raül; Correig, Xavier; Julià, Antonio; Marsal, Sara
2014-01-21
One-dimensional 1H NMR represents one of the most commonly used analytical techniques in metabolomic studies. The increase in the number of samples analyzed as well as the technical improvements involving instrumentation and spectral acquisition demand increasingly accurate and efficient high-throughput data processing workflows. We present FOCUS, an integrated and innovative methodology that provides a complete data analysis workflow for one-dimensional NMR-based metabolomics. This tool allows users to easily obtain an NMR peak feature matrix ready for chemometric analysis, as well as metabolite identification scores for each peak that greatly simplify the biological interpretation of the results. The algorithm development has been focused on solving the critical difficulties that appear at each data processing step and that can dramatically affect the quality of the results. As well as method integration, simplicity has been one of the main objectives in FOCUS development, requiring very little user input to perform accurate peak alignment, peak picking, and metabolite identification. The new spectral alignment algorithm, RUNAS, allows peak alignment with no need of a reference spectrum, and therefore reduces the bias introduced by other alignment approaches. Spectral alignment has been tested against previous methodologies, obtaining substantial improvements in the case of moderately or highly unaligned spectra. Metabolite identification has also been significantly improved by using positional and correlation peak patterns matched against a reference metabolite panel. Furthermore, the complete workflow has been tested using NMR data sets from 60 human urine samples and 120 aqueous liver extracts, reaching a successful identification of 42 metabolites from the two data sets. The open-source software implementation of this methodology is available at http://www.urr.cat/FOCUS.
Characterization of a digital camera as an absolute tristimulus colorimeter
NASA Astrophysics Data System (ADS)
Martinez-Verdu, Francisco; Pujol, Jaume; Vilaseca, Meritxell; Capilla, Pascual
2003-01-01
An algorithm is proposed for the spectral and colorimetric characterization of digital still cameras (DSC) which allows them to be used as tele-colorimeters with CIE-XYZ color output, in cd/m2. The spectral characterization consists of the calculation of the color-matching functions from the previously measured spectral sensitivities. The colorimetric characterization consists of transforming the RGB digital data into absolute tristimulus values CIE-XYZ (in cd/m2) under variable and unknown spectroradiometric conditions. Thus, at the first stage, a gray balance has been applied to the RGB digital data to convert them into RGB relative colorimetric values. At a second stage, an algorithm of luminance adaptation vs. lens aperture has been inserted in the basic colorimetric profile. Capturing the ColorChecker chart under different light sources, the DSC color analysis accuracy indexes, both in a raw state and with the corrections from a linear model of color correction, have been evaluated using the Pointer'86 color reproduction index with the unrelated Hunt'91 color appearance model. The results indicate that our digital image capture device, in raw performance, lightens and desaturates the colors.
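The mapping from gray-balanced RGB to CIE-XYZ is commonly a least-squares 3x3 matrix, as in the minimal sketch below; this illustrates the idea rather than the authors' full profile with luminance adaptation.

```python
import numpy as np

def fit_colorimetric_matrix(rgb, xyz):
    """Least-squares 3x3 matrix mapping gray-balanced RGB to CIE XYZ.

    rgb: (n_patches, 3) camera responses (e.g., from a ColorChecker chart);
    xyz: (n_patches, 3) measured tristimulus values in cd/m2.
    """
    M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
    return M.T                      # so that xyz_est = M @ rgb_vector

# usage: xyz_hat = (fit_colorimetric_matrix(rgb, xyz) @ rgb.T).T
```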
Zhang, Yong; Li, Yuan; Rong, Zhi-Guo
2010-06-01
A remote sensor's channel spectral response function (SRF) is one of the key factors influencing quantitative products' inversion algorithms, their accuracy, and the derived geophysical characteristics. To assess the adjustments of FY-2E's split-window channels' SRFs, detailed comparisons between the corresponding FY-2E and FY-2C channels' SRFs were carried out based on three data collections: the NOAA AVHRR corresponding channels' calibration look-up tables, field-measured water surface radiance and atmospheric profiles at Lake Qinghai, and radiance calculated from the Planck function over the full dynamic range of FY-2E/C. The results showed that the adjustments of FY-2E's split-window channels' SRFs shift the spectral ranges and influence the inversion algorithms of some quantitative ground products. On the other hand, these adjustments of the FY-2E SRFs increase the brightness temperature differences between FY-2E's two split-window channels over the full dynamic range relative to FY-2C's. This would improve the inversion ability of FY-2E's split-window channels.
Wide-band array signal processing via spectral smoothing
NASA Technical Reports Server (NTRS)
Xu, Guanghan; Kailath, Thomas
1989-01-01
A novel algorithm for the estimation of direction-of-arrivals (DOA) of multiple wide-band sources via spectral smoothing is presented. The proposed algorithm does not require an initial DOA estimate or a specific signal model. The advantages of replacing the MUSIC search with an ESPRIT search are discussed.
Selecting algorithms, sensors, and linear bases for optimum spectral recovery of skylight.
López-Alvarez, Miguel A; Hernández-Andrés, Javier; Valero, Eva M; Romero, Javier
2007-04-01
In a previous work [Appl. Opt. 44, 5688 (2005)] we found the optimum sensors for a planned multispectral system for measuring skylight in the presence of noise by adapting a linear spectral recovery algorithm proposed by Maloney and Wandell [J. Opt. Soc. Am. A 3, 29 (1986)]. Here we continue along these lines by simulating the responses of three to five Gaussian sensors and recovering spectral information from noise-affected sensor data by trying out four different estimation algorithms, three different sizes for the training set of spectra, and various linear bases. We attempt to find the optimum combination of sensors, recovery method, linear basis, and matrix size to recover the best skylight spectral power distributions from colorimetric and spectral (in the visible range) points of view. We show how all these parameters play an important role in the practical design of a real multispectral system and how to obtain several relevant conclusions from simulating the behavior of sensors in the presence of noise.
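The underlying linear recovery can be sketched as below: sensor responses are modeled as S B w, and the basis weights w are estimated by least squares. This is a simplification of the noise-aware estimators compared in the paper.

```python
import numpy as np

def recover_spectrum(responses, sensors, basis):
    """Linear spectral recovery in the Maloney-Wandell spirit.

    responses: (m,) sensor outputs; sensors: (m, n_wl) responsivities;
    basis: (n_wl, k) linear basis (e.g., PCA of training skylight spectra).
    The estimate is constrained to the span of the basis.
    """
    A = sensors @ basis                     # maps basis weights to responses
    weights, *_ = np.linalg.lstsq(A, responses, rcond=None)
    return basis @ weights                  # reconstructed spectral power
```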
Evaluation of algorithm methods for fluorescence spectra of cancerous and normal human tissues
NASA Astrophysics Data System (ADS)
Pu, Yang; Wang, Wubao; Alfano, Robert R.
2016-03-01
This paper focuses on algorithms that unravel fluorescence spectra by unmixing methods to identify cancerous and normal human tissues from measured fluorescence spectroscopy. The biochemical or morphologic changes that cause fluorescence spectra variations appear earlier than histological changes; therefore, fluorescence spectroscopy holds great promise as a clinical tool for diagnosing early-stage carcinomas and other diseases in vivo. The method can further identify tissue biomarkers by decomposing the spectral contributions of different fluorescent molecules of interest. In this work, we investigate the performance of blind source unmixing methods (backward model) and spectral fitting approaches (forward model) in decomposing the contributions of key fluorescent molecules from the tissue mixture background when a selected excitation wavelength is applied. Pairs of adenocarcinoma and normal tissues confirmed by a pathologist were excited at a selected wavelength of 340 nm. The emission spectra of resected fresh tissue were used to evaluate the relative changes of collagen, reduced nicotinamide adenine dinucleotide (NADH), and flavin by various spectral unmixing methods. Two categories of algorithms, forward methods and blind source separation methods [such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Nonnegative Matrix Factorization (NMF)], are introduced and evaluated. The purpose of the spectral analysis is to discard the redundant information that conceals the difference between these two types of tissues while keeping the diagnostically significant information. The predictions of the different methods were compared to the gold standard of histopathology. The results indicate that key fluorophores within tissue, e.g., tryptophan, collagen, NADH, and flavin, show differences in relative content among different types of human cancer and normal tissues. Sensitivity, specificity, and the receiver operating characteristic (ROC) are finally employed as the criteria to evaluate the efficacy of these methods in cancer detection. The underlying physical and biological basis for these optical approaches is discussed with examples. This ex vivo preliminary trial demonstrates that these criteria can distinguish carcinoma from normal tissues with good sensitivity and specificity, and among the methods ICA appears to be superior in prediction accuracy.
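Of the blind source separation methods listed, NMF is the most direct to sketch for non-negative emission spectra; the three-component assumption (collagen, NADH, flavin) and the random stand-in data below are illustrative.

```python
import numpy as np
from sklearn.decomposition import NMF

# spectra: (n_samples, n_wavelengths) non-negative emission spectra
spectra = np.abs(np.random.rand(20, 300))          # stand-in for measured data

# Three sources assumed: collagen, NADH, flavin
model = NMF(n_components=3, init="nndsvd", max_iter=500)
abundances = model.fit_transform(spectra)          # per-sample contributions
components = model.components_                     # recovered basis spectra

# Relative fluorophore content per tissue sample
relative = abundances / abundances.sum(axis=1, keepdims=True)
```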
Exorcising the Ghost in the Machine: Synthetic Spectral Data Cubes for Assessing Big Data Algorithms
NASA Astrophysics Data System (ADS)
Araya, M.; Solar, M.; Mardones, D.; Hochfärber, T.
2015-09-01
The size and quantity of the data being generated by large astronomical projects like ALMA require a paradigm change in astronomical data analysis. Complex data, such as highly sensitive spectroscopic data in the form of large data cubes, are not only difficult to manage, transfer and visualize, but they make traditional data analysis techniques unfeasible. Consequently, attention has turned to machine learning and artificial intelligence techniques to develop approximate and adaptive methods for astronomical data analysis within a reasonable computational time. Unfortunately, these techniques are usually suboptimal, stochastic and strongly dependent on their parameters, which could easily turn into “a ghost in the machine” for astronomers and practitioners. Therefore, a proper assessment of these methods is not only desirable but mandatory before trusting them in large-scale usage. The problem is that positively verifiable results are scarce in astronomy, and moreover, science using bleeding-edge instrumentation naturally lacks reference values. We propose the Astronomical SYnthetic Data Observations (ASYDO) service, a virtual service that generates synthetic spectroscopic data in the form of data cubes. The objective of the tool is not to produce accurate astrophysical simulations, but to generate a large number of labelled synthetic data sets, to assess advanced computing algorithms for astronomy and to develop novel Big Data algorithms. The synthetic data are generated using a set of spectral lines, template functions for spatial and spectral distributions, and simple models that produce reasonable synthetic observations. Emission lines are obtained automatically using IVOA's SLAP protocol (or from a relational database), and their spectral profiles correspond to distributions in the exponential family. The spatial distributions correspond to simple functions (e.g., 2D Gaussians) or to scalable template objects. The intensity, broadening and radial velocity of each line are given by very simple and naive physical models, yet ASYDO's generic implementation supports new user-made models, which potentially allows adding more realistic simulations. The resulting data cube is saved as a FITS file, together with all the tables and images used for generating the cube. We expect to implement ASYDO as a virtual observatory service in the near future.
Multiscale 3D Shape Analysis using Spherical Wavelets
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen
2013-01-01
Shape priors attempt to represent biological variations within a population. When variations are global, Principal Component Analysis (PCA) can be used to learn major modes of variation, even from a limited training set. However, when significant local variations exist, PCA typically cannot represent such variations from a small training set. To address this issue, we present a novel algorithm that learns shape variations from data at multiple scales and locations using spherical wavelets and spectral graph partitioning. Our results show that when the training set is small, our algorithm significantly improves the approximation of shapes in a testing set over PCA, which tends to oversmooth data. PMID:16685992
SPAM- SPECTRAL ANALYSIS MANAGER (DEC VAX/VMS VERSION)
NASA Technical Reports Server (NTRS)
Solomon, J. E.
1994-01-01
The Spectral Analysis Manager (SPAM) was developed to allow easy qualitative analysis of multi-dimensional imaging spectrometer data. Imaging spectrometers provide sufficient spectral sampling to define unique spectral signatures on a per pixel basis. Thus direct material identification becomes possible for geologic studies. SPAM provides a variety of capabilities for carrying out interactive analysis of the massive and complex datasets associated with multispectral remote sensing observations. In addition to normal image processing functions, SPAM provides multiple levels of on-line help, a flexible command interpretation, graceful error recovery, and a program structure which can be implemented in a variety of environments. SPAM was designed to be visually oriented and user friendly with the liberal employment of graphics for rapid and efficient exploratory analysis of imaging spectrometry data. SPAM provides functions to enable arithmetic manipulations of the data, such as normalization, linear mixing, band ratio discrimination, and low-pass filtering. SPAM can be used to examine the spectra of an individual pixel or the average spectra over a number of pixels. SPAM also supports image segmentation, fast spectral signature matching, spectral library usage, mixture analysis, and feature extraction. High speed spectral signature matching is performed by using a binary spectral encoding algorithm to separate and identify mineral components present in the scene. The same binary encoding allows automatic spectral clustering. Spectral data may be entered from a digitizing tablet, stored in a user library, compared to the master library containing mineral standards, and then displayed as a time-sequence spectral movie. The output plots, histograms, and stretched histograms produced by SPAM can be sent to a lineprinter, stored as separate RGB disk files, or sent to a Quick Color Recorder. SPAM is written in C for interactive execution and is available for two different machine environments. There is a DEC VAX/VMS version with a central memory requirement of approximately 242K of 8 bit bytes and a machine independent UNIX 4.2 version. The display device currently supported is the Raster Technologies display processor. Other 512 x 512 resolution color display devices, such as De Anza, may be added with minor code modifications. This program was developed in 1986.
Spectral signature verification using statistical analysis and text mining
NASA Astrophysics Data System (ADS)
DeCoster, Mallory E.; Firpi, Alexe H.; Jacobs, Samantha K.; Cone, Shelli R.; Tzeng, Nigel H.; Rodriguez, Benjamin M.
2016-05-01
In the spectral science community, numerous spectral signatures are stored in databases representative of many sample materials collected from a variety of spectrometers and spectroscopists. Due to the variety and variability of the spectra that comprise many spectral databases, it is necessary to establish a metric for validating the quality of spectral signatures. This has been an area of great discussion and debate in the spectral science community. This paper discusses a method that independently validates two different aspects of a spectral signature to arrive at a final qualitative assessment: the textual meta-data and the numerical spectral data. Results associated with the spectral data stored in the Signature Database1 (SigDB) are presented. The numerical data comprising a sample material's spectrum are validated based on statistical properties derived from an ideal population set. The quality of the test spectrum is ranked based on a spectral angle mapper (SAM) comparison to the mean spectrum derived from the population set. Additionally, the contextual data of a test spectrum are qualitatively analyzed using lexical-analysis text mining. This technique analyzes the syntax of the meta-data to provide local learning patterns and trends within the spectral data, indicative of the test spectrum's quality. Text mining applications have successfully been implemented for security2 (text encryption/decryption), biomedical3, and marketing4 applications. The text mining lexical analysis algorithm is trained on the meta-data patterns of a subset of high- and low-quality spectra, in order to have a model to apply to the entire SigDB data set. The statistical and textual methods combine to assess the quality of a test spectrum existing in a database without the need for an expert user. This method has been compared to other validation methods accepted by the spectral science community, and has provided promising results when a baseline spectral signature is present for comparison. The proposed spectral validation method is described from both practical and analytical perspectives.
Spectral Indices of Faint Radio Sources
NASA Astrophysics Data System (ADS)
Gim, Hansung B.; Hales, Christopher A.; Momjian, Emmanuel; Yun, Min Su
2015-01-01
The significant improvement in bandwidth and the resultant sensitivity offered by the Karl G. Jansky Very Large Array (VLA) allows us to explore the faint radio source population. Through the study of the radio continuum we can explore the spectral indices of these radio sources. Robust radio spectral indices are needed for accurate k-corrections, for example in the study of the radio-far-infrared (FIR) correlation. We present an analysis of measuring spectral indices using two different approaches. In the first, we use the standard wideband imaging algorithm in the data reduction package CASA. In the second, we use a traditional approach of imaging narrower bandwidths to derive the spectral indices. For these, we simulated data to match the observing parameter space of the CHILES Con Pol survey (Hales et al. 2014). We investigate the accuracy and precision of spectral index measurements as a function of signal-to-noise ratio, and explore the requirements to reliably probe possible evolution of the radio-FIR correlation in CHILES Con Pol.
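For reference, the two-point spectral index used in such studies follows from the power-law convention shown below; note that sign conventions vary between authors:

```latex
S_\nu \propto \nu^{\alpha}
\quad\Longrightarrow\quad
\alpha = \frac{\ln(S_2/S_1)}{\ln(\nu_2/\nu_1)}
```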
Spectral decompositions of multiple time series: a Bayesian non-parametric approach.
Macaro, Christian; Prado, Raquel
2014-01-01
We consider spectral decompositions of multiple time series that arise in studies where the interest lies in assessing the influence of two or more factors. We write the spectral density of each time series as a sum of the spectral densities associated to the different levels of the factors. We then use Whittle's approximation to the likelihood function and follow a Bayesian non-parametric approach to obtain posterior inference on the spectral densities based on Bernstein-Dirichlet prior distributions. The prior is strategically important as it carries identifiability conditions for the models and allows us to quantify our degree of confidence in such conditions. A Markov chain Monte Carlo (MCMC) algorithm for posterior inference within this class of frequency-domain models is presented. We illustrate the approach by analyzing simulated and real data via spectral one-way and two-way models. In particular, we present an analysis of functional magnetic resonance imaging (fMRI) brain responses measured in individuals who participated in a designed experiment to study pain perception in humans.
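For reference, the standard form of Whittle's approximation to the log-likelihood invoked above sums over the Fourier frequencies ω_k = 2πk/n, with I_n the periodogram and f_θ the parameterized spectral density (constants omitted):

```latex
\log L(\theta) \;\approx\; -\sum_{k=1}^{\lfloor (n-1)/2 \rfloor}
  \left[ \log f_\theta(\omega_k) + \frac{I_n(\omega_k)}{f_\theta(\omega_k)} \right]
```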
Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang
2014-01-01
We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distributions to deal with the co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect the peaks with lower false discovery rates than the existing algorithms, and a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models. PMID:25264474
Improving Automated Endmember Identification for Linear Unmixing of HyspIRI Spectral Data.
NASA Astrophysics Data System (ADS)
Gader, P.
2016-12-01
The size of data sets produced by imaging spectrometers is increasing rapidly. There is already a processing bottleneck. Part of the reason for this bottleneck is the need for expert input using interactive software tools. This process can be very time consuming and laborious but is currently crucial to ensuring the quality of the analysis. Automated algorithms can mitigate this problem. Although it is unlikely that processing systems can become completely automated, there is an urgent need to increase the level of automation. Spectral unmixing is a key component of processing HyspIRI data. Algorithms such as MESMA have been demonstrated to achieve results but require careful, expert construction of endmember libraries. Unfortunately, many endmembers found by automated algorithms are deemed unsuitable by experts because they are not physically reasonable. At the same time, endmembers that are not physically reasonable can achieve very low errors between the linear mixing model with those endmembers and the original data, so this error is not a reasonable way to resolve the problem of "non-physical" endmembers. There are many potential approaches for resolving these issues, including using Bayesian priors, but very little attention has been given to this problem. The study reported on here considers a modification of the Sparsity Promoting Iterated Constrained Endmember (SPICE) algorithm. SPICE finds endmembers and abundances and estimates the number of endmembers. The SPICE algorithm seeks to minimize a quadratic objective function with respect to endmembers E and fractions P. The modified SPICE algorithm, which we refer to as SPICED, is obtained by adding the term D to the objective function. The term D pressures the algorithm to minimize the sum of the squared differences between each endmember and a weighted sum of the data. By appropriately adjusting this term, the endmembers are pushed towards a subset of the data, with the potential for becoming exactly equal to data points. The algorithm has been applied to spectral data, and the differences between the endmembers produced by the two algorithms were recorded. The results so far are that the endmembers found by SPICED are approximately 25% closer to the data, with indistinguishable reconstruction error compared to those found using SPICE.
Motion Cueing Algorithm Development: Piloted Performance Testing of the Cueing Algorithms
NASA Technical Reports Server (NTRS)
Houck, Jacob A. (Technical Monitor); Telban, Robert J.; Cardullo, Frank M.; Kelly, Lon C.
2005-01-01
The relative effectiveness of both current and newly developed motion cueing algorithms in simulating aircraft maneuvers was assessed with an eleven-subject piloted performance evaluation conducted on the NASA Langley Visual Motion Simulator (VMS). In addition to the current NASA adaptive algorithm, two new cueing algorithms were evaluated: the optimal algorithm and the nonlinear algorithm. The test maneuvers included a straight-in approach with a rotating wind vector, an offset approach with severe turbulence and an on/off lateral gust that occurs as the aircraft approaches the runway threshold, and a takeoff both with and without engine failure after liftoff. The maneuvers were executed with each cueing algorithm under added visual display delay conditions ranging from zero to 200 msec. Two methods, the quasi-objective NASA Task Load Index (TLX) and power spectral density analysis of pilot control, were used to assess pilot workload. Piloted performance parameters for the approach maneuvers, the vertical velocity upon touchdown and the runway touchdown position, were also analyzed but did not show any noticeable difference among the cueing algorithms. TLX analysis reveals, in most cases, less workload and less variation among pilots with the nonlinear algorithm. Control input analysis shows that pilot-induced oscillations on the straight-in approach were less prevalent with the nonlinear algorithm than with the optimal algorithm. On the offset approach, the augmented turbulence cues increased workload but were deemed more realistic by the pilots than the cues of the NASA adaptive algorithm. The takeoff with engine failure showed the least roll activity for the nonlinear algorithm and the least rudder pedal activity for the optimal algorithm.
Elgendi, Mohamed; Kumar, Shine; Guo, Long; Rutledge, Jennifer; Coe, James Y.; Zemp, Roger; Schuurmans, Dale; Adatia, Ian
2015-01-01
Background Automatic detection of the 1st (S1) and 2nd (S2) heart sounds is difficult, and existing algorithms are imprecise. We sought to develop a wavelet-based algorithm for the detection of S1 and S2 in children with and without pulmonary arterial hypertension (PAH). Method Heart sounds were recorded at the second left intercostal space and the cardiac apex with a digital stethoscope simultaneously with pulmonary arterial pressure (PAP). We developed a Daubechies wavelet algorithm for the automatic detection of S1 and S2 using the wavelet coefficient ‘D 6’ based on power spectral analysis. We compared our algorithm with four other Daubechies wavelet-based algorithms published by Liang, Kumar, Wang, and Zhong. We annotated S1 and S2 from an audiovisual examination of the phonocardiographic tracing by two trained cardiologists and the observation that in all subjects systole was shorter than diastole. Results We studied 22 subjects (9 males and 13 females, median age 6 years, range 0.25–19). Eleven subjects had a mean PAP < 25 mmHg. Eleven subjects had PAH with a mean PAP ≥ 25 mmHg. All subjects had a pulmonary artery wedge pressure ≤ 15 mmHg. The sensitivity (SE) and positive predictivity (+P) of our algorithm were 70% and 68%, respectively. In comparison, the SE and +P of Liang were 59% and 42%, Kumar 19% and 12%, Wang 50% and 45%, and Zhong 43% and 53%, respectively. Our algorithm demonstrated robustness and outperformed the other methods up to a signal-to-noise ratio (SNR) of 10 dB. For all algorithms, detection errors arose from low-amplitude peaks, fast heart rates, low signal-to-noise ratio, and fixed thresholds. Conclusion Our algorithm for the detection of S1 and S2 improves the performance of existing Daubechies-based algorithms and justifies the use of the wavelet coefficient ‘D 6’ through power spectral analysis. Also, the robustness despite ambient noise may improve real world clinical performance. PMID:26629704
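As a rough illustration of a D6-based detector, the sketch below reconstructs the level-6 detail band with the PyWavelets package and picks envelope peaks. The wavelet order ('db4'), smoothing window, refractory distance, and threshold are assumptions, and the paper's physiological rules (e.g., systole shorter than diastole) are not implemented.

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

def detect_heart_sounds(pcg, fs, wavelet="db4"):
    """Sketch: locate S1/S2 candidates from the level-6 detail coefficients."""
    coeffs = pywt.wavedec(pcg, wavelet, level=6)   # [cA6, cD6, cD5, ..., cD1]
    kept = [np.zeros_like(c) for c in coeffs]
    kept[1] = coeffs[1]                            # keep only the D6 band
    d6 = pywt.waverec(kept, wavelet)[: len(pcg)]
    # 50 ms moving-average energy envelope
    win = int(0.05 * fs)
    envelope = np.convolve(d6 ** 2, np.ones(win) / win, "same")
    # Peaks at least 200 ms apart, above an adaptive threshold
    peaks, _ = find_peaks(envelope, distance=int(0.2 * fs),
                          height=0.2 * envelope.max())
    return peaks
```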
A comparative intelligibility study of single-microphone noise reduction algorithms.
Hu, Yi; Loizou, Philipos C
2007-09-01
An evaluation of the intelligibility of speech processed by noise reduction algorithms is reported. IEEE sentences and consonants were corrupted by four types of noise, including babble, car, street and train, at two signal-to-noise ratio levels (0 and 5 dB), and then processed by eight speech enhancement methods encompassing four classes of algorithms: spectral subtractive, subspace, statistical model based and Wiener-type algorithms. The enhanced speech was presented to normal-hearing listeners for identification. With the exception of a single noise condition, no algorithm produced significant improvements in speech intelligibility. Information transmission analysis of the consonant confusion matrices indicated that no algorithm significantly improved the place feature score, which is critically important for speech recognition. The algorithms found in previous studies to perform best in terms of overall quality were not the same algorithms that performed best in terms of speech intelligibility. The subspace algorithm, for instance, was previously found to perform the worst in terms of overall quality, but performed well in the present study in terms of preserving speech intelligibility. Overall, the analysis of consonant confusion matrices suggests that in order for noise reduction algorithms to improve speech intelligibility, they need to improve the place and manner feature scores.
NASA Astrophysics Data System (ADS)
Ming, Mei-Jun; Xu, Long-Kun; Wang, Fan; Bi, Ting-Jun; Li, Xiang-Yuan
2017-07-01
In this work, a matrix-form numerical algorithm for the spectral shift is presented, based on a novel nonequilibrium solvation model established by introducing a constrained equilibrium manipulation. This form is convenient for developing codes for numerical solution. By means of the integral equation formulation polarizable continuum model (IEF-PCM), a subroutine has been implemented to compute the spectral shift numerically. Here, the spectral shifts of the absorption spectra of several popular chromophores, N,N-diethyl-p-nitroaniline (DEPNA), methylenecyclopropene (MCP), acrolein (ACL) and p-nitroaniline (PNA), were investigated in solvents of various polarities. The computed spectral shifts can reasonably explain the available experimental findings. The contributions of solute geometry distortion, electrostatic polarization and other non-electrostatic interactions to the spectral shift are discussed.
Directional Unfolded Source Term (DUST) for Compton Cameras.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Dean J.; Horne, Steven M.; O'Brien, Sean
2018-03-01
A Directional Unfolded Source Term (DUST) algorithm was developed to enable improved spectral analysis capabilities using data collected by Compton cameras. Achieving this objective required modification of the detector response function in the Gamma Detector Response and Analysis Software (GADRAS). Experimental data that were collected in support of this work include measurements of calibration sources at a range of separation distances and cylindrical depleted uranium castings.
Deng, Ning; Li, Zhenye; Pan, Chao; Duan, Huilong
2015-01-01
The study of complex proteomes places greater demands on mass spectrometry-based quantification methods. In this paper, we present a mass spectrometry label-free quantification tool for complex proteomes, called freeQuant, which effectively integrates quantification with functional analysis. freeQuant consists of two well-integrated modules: label-free quantification and functional analysis with biomedical knowledge. freeQuant supports label-free quantitative analysis that makes full use of tandem mass spectrometry (MS/MS) spectral counts, protein sequence length, shared peptides, and ion intensity. It adopts spectral counting for quantitative analysis and introduces a new method for shared peptides to accurately evaluate the abundance of isoforms. For proteins with low abundance, MS/MS total ion count coupled with spectral count is included to ensure accurate protein quantification. Furthermore, freeQuant supports large-scale functional annotation for complex proteomes. Mitochondrial proteomes from the mouse heart, the mouse liver, and the human heart were used to evaluate the usability and performance of freeQuant. The evaluation showed that the quantitative algorithms implemented in freeQuant improve the accuracy of quantification with better dynamic range.
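The combination of spectral counts and protein sequence length described above is in the spirit of the standard normalized spectral abundance factor (NSAF). The sketch below computes NSAF and is not freeQuant's exact algorithm, which additionally handles shared peptides and ion intensities.

```python
import numpy as np

def nsaf(spectral_counts, lengths):
    """Normalized spectral abundance factor: (SpC/L) / sum_i (SpC_i/L_i)."""
    saf = np.asarray(spectral_counts, float) / np.asarray(lengths, float)
    return saf / saf.sum()

# e.g. three proteins with counts 120, 30, 5 and lengths 450, 220, 90 residues
abundance = nsaf([120, 30, 5], [450, 220, 90])
```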
Grewal, Dilraj S; Tanna, Angelo P
2013-03-01
With the rapid adoption of spectral domain optical coherence tomography (SDOCT) in clinical practice and the recent advances in software technology, there is a need for a review of the literature on glaucoma detection and progression analysis algorithms designed for the commercially available instruments. Peripapillary retinal nerve fiber layer (RNFL) thickness and macular thickness, including segmental macular thickness calculation algorithms, have been demonstrated to be repeatable and reproducible, and have a high degree of diagnostic sensitivity and specificity in discriminating between healthy and glaucomatous eyes across the glaucoma continuum. Newer software capabilities such as glaucoma progression detection algorithms provide an objective analysis of longitudinally obtained structural data that enhances our ability to detect glaucomatous progression. RNFL measurements obtained with SDOCT appear more sensitive than time domain OCT (TDOCT) for glaucoma progression detection; however, agreement with the assessments of visual field progression is poor. Over the last few years, several studies have been performed to assess the diagnostic performance of SDOCT structural imaging and its validity in assessing glaucoma progression. Most evidence suggests that SDOCT performs similarly to TDOCT for glaucoma diagnosis; however, SDOCT may be superior for the detection of early stage disease. With respect to progression detection, SDOCT represents an important technological advance because of its improved resolution and repeatability. Advancements in RNFL thickness quantification, segmental macular thickness calculation and progression detection algorithms, when used correctly, may help to improve our ability to diagnose and manage glaucoma.
NASA Technical Reports Server (NTRS)
Rudasill-Neigh, Christopher S.; Bolton, Douglas K.; Diabate, Mouhamad; Williams, Jennifer J.; Carvalhais, Nuno
2014-01-01
Forests contain a majority of the aboveground carbon (C) found in ecosystems, and understanding biomass lost from disturbance is essential to improve our C-cycle knowledge. Our study region in the Wisconsin and Minnesota Laurentian Forest had a strong decline in Normalized Difference Vegetation Index (NDVI) from 1982 to 2007, observed with the National Oceanic and Atmospheric Administration's (NOAA) series of Advanced Very High Resolution Radiometer (AVHRR). To understand the potential role of disturbances in the terrestrial C-cycle, we developed an algorithm to map forest disturbances from either harvest or insect outbreak in Landsat time-series stacks. We merged two image analysis approaches into one algorithm to monitor forest change: (1) multiple disturbance index thresholds to capture clear-cut harvest; and (2) a spectral trajectory-based image analysis with multiple confidence interval thresholds to map insect outbreak. We produced 20 maps and evaluated classification accuracy with air photos and insect air-survey data to understand the performance of our algorithm. We achieved overall accuracies ranging from 65% to 75%, with an average accuracy of 72%. The producer's and user's accuracies ranged from 32% to 70% for insect disturbance, 60% to 76% for insect mortality, and 82% to 88% for harvested forest, which was the dominant disturbance agent. Forest disturbances accounted for 22% of the total forested area (7349 km2). Our algorithm provides a basic approach to mapping disturbance history where large impacts to forest stands have occurred, and highlights the limited spectral sensitivity of Landsat time-series to outbreaks of defoliating insects. We found that only harvest and insect mortality events can be mapped with adequate accuracy with a non-annual Landsat time-series, which limited our understanding of the land cover drivers of the NDVI decline. We demonstrate that to capture more subtle disturbances with spectral trajectories, future observations must be temporally dense to distinguish between disturbance type and frequency in heterogeneous landscapes.
USGS Digital Spectral Library splib06a
Clark, Roger N.; Swayze, Gregg A.; Wise, Richard A.; Livo, K. Eric; Hoefen, Todd M.; Kokaly, Raymond F.; Sutley, Stephen J.
2007-01-01
Introduction We have assembled a digital reflectance spectral library that covers the wavelength range from the ultraviolet to far infrared along with sample documentation. The library includes samples of minerals, rocks, soils, physically constructed as well as mathematically computed mixtures, plants, vegetation communities, microorganisms, and man-made materials. The samples and spectra collected were assembled for the purpose of using spectral features for the remote detection of these and similar materials. Analysis of spectroscopic data from laboratory, aircraft, and spacecraft instrumentation requires a knowledge base. The spectral library discussed here forms a knowledge base for the spectroscopy of minerals and related materials of importance to a variety of research programs being conducted at the U.S. Geological Survey. Much of this library grew out of the need for spectra to support imaging spectroscopy studies of the Earth and planets. Imaging spectrometers, such as the National Aeronautics and Space Administration (NASA) Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) or the NASA Cassini Visual and Infrared Mapping Spectrometer (VIMS), which is currently orbiting Saturn, have narrow bandwidths in many contiguous spectral channels that permit accurate definition of absorption features in spectra from a variety of materials. Identification of materials from such data requires a comprehensive spectral library of minerals, vegetation, man-made materials, and other subjects in the scene. Our research involves the use of the spectral library to identify the components in a spectrum of an unknown. Therefore, the quality of the library must be very good. However, the quality required in a spectral library to successfully perform an investigation depends on the scientific questions to be answered and the type of algorithms to be used. For example, to map a mineral using imaging spectroscopy and the mapping algorithm of Clark and others (1990a, 2003b), one simply needs a diagnostic absorption band. The mapping system uses continuum-removed reference spectral features fitted to features in observed spectra. Spectral features for such algorithms can be obtained from a spectrum of a sample containing large amounts of contaminants, including those that add other spectral features, as long as the shape of the diagnostic feature of interest is not modified. If, however, the data are needed for radiative transfer models to derive mineral abundances from reflectance spectra, then completely uncontaminated spectra are required. This library contains spectra that span a range of quality, with purity indicators to flag spectra for (or against) particular uses. Acquiring spectral measurements and performing sample characterizations for this library has taken about 15 person-years of effort. Software to manage the library and provide scientific analysis capability is provided (Clark, 1980, 1993). A personal computer (PC) reader for the library is also available (Livo and others, 1993). The program reads specpr binary files (Clark, 1980, 1993) and plots spectra. Another program that reads the specpr format is written in IDL (Kokaly, 2005). In our view, an ideal spectral library consists of samples covering a very wide range of materials, has a large wavelength range with very high precision, and has enough sample analyses and documentation to establish the quality of the spectra. Time and available resources limit what can be achieved.
Ideally, for each mineral, the sample analysis would include X-ray diffraction (XRD), electron microprobe (EM) or X-ray fluorescence (XRF), and petrographic microscopic analyses. For some minerals, such as iron oxides, additional analyses such as Mossbauer would be helpful. We have found that to make the basic spectral measurements, provide XRD, EM or XRF analyses, and microscopic analyses, document the results, and complete an entry of one spectral library sample, all takes about
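The continuum-removed feature matching mentioned above relies on dividing each spectrum by its upper convex hull. As a hedged illustration, the following NumPy sketch computes a continuum-removed spectrum; the monotone-chain hull construction is a common implementation choice, not necessarily the one used by the USGS software.

```python
import numpy as np

def continuum_removed(wavelengths, reflectance):
    """Divide a spectrum by its upper convex hull (the 'continuum')."""
    w = np.asarray(wavelengths, float)
    r = np.asarray(reflectance, float)
    hull = [0]                                   # left-to-right upper-hull scan
    for i in range(1, len(w)):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            # Pop the last hull point if it lies below the chord from i0 to i
            if (r[i1] - r[i0]) * (w[i] - w[i0]) <= (r[i] - r[i0]) * (w[i1] - w[i0]):
                hull.pop()
            else:
                break
        hull.append(i)
    continuum = np.interp(w, w[hull], r[hull])   # piecewise-linear hull
    return r / continuum
```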
Fang, Jieming; Zhang, Da; Wilcox, Carol; Heidinger, Benedikt; Raptopoulos, Vassilios; Brook, Alexander; Brook, Olga R
2017-03-01
To assess single energy metal artifact reduction (SEMAR) and spectral energy metal artifact reduction (MARS) algorithms in reducing artifacts generated by different metal implants, a phantom was scanned with and without SEMAR (Aquilion One, Toshiba) and MARS (Discovery CT750 HD, GE), with various metal implants. Images were evaluated objectively by measuring the standard deviation in regions of interest, and subjectively by two independent reviewers grading on a scale of 0 (no artifact) to 4 (severe artifact). Reviewers also graded new artifacts introduced by the metal artifact reduction algorithms. SEMAR and MARS significantly decreased the variability of the density measurement adjacent to the metal implant, with a median SD (standard deviation of the density measurement) of 52.1 HU without SEMAR vs. 12.3 HU with SEMAR, p < 0.001. The median SD without MARS of 63.1 HU decreased to 25.9 HU with MARS, p < 0.001. The median SD with SEMAR is significantly lower than the median SD with MARS (p = 0.0011). SEMAR improved subjective image quality, with a reduction in overall artifact grading from 3.2 ± 0.7 to 1.4 ± 0.9, p < 0.001. The improvement of overall image quality by MARS did not reach statistical significance (3.2 ± 0.6 to 2.6 ± 0.8, p = 0.088). New artifacts introduced by the metal artifact reduction algorithm were significant for MARS (2.4 ± 1.0) but minimal for SEMAR (0.4 ± 0.7), p < 0.001. CT iterative reconstruction algorithms with single and spectral energy are both effective in reducing metal artifacts. The single energy-based algorithm provides better overall image quality than the spectral CT-based algorithm. The spectral metal artifact reduction algorithm introduces mild to moderate artifacts in the far field.
Trace gas detection in hyperspectral imagery using the wavelet packet subspace
NASA Astrophysics Data System (ADS)
Salvador, Mark A. Z.
This dissertation describes research into a new remote sensing method to detect trace gases in hyperspectral and ultra-spectral data. The new method is based on the wavelet packet transform and attempts to improve both the computational tractability and the detection of trace gases in airborne and spaceborne spectral imagery. Atmospheric trace gas research supports various Earth science disciplines, including climatology, vulcanology, pollution monitoring, natural disasters, and intelligence and military applications. Hyperspectral and ultra-spectral data significantly increase the data glut of existing Earth science data sets. Spaceborne spectral data in particular significantly increase spectral resolution while performing daily global collections of the Earth. Applying the wavelet packet transform to the spectral dimension of hyperspectral and ultra-spectral imagery data potentially improves remote sensing detection algorithms; it also facilitates the parallelization of these methods for high performance computing. This research pursues two science goals: (1) developing a new spectral imagery detection algorithm, and (2) facilitating the parallelization of trace gas detection in spectral imagery data.
Low complexity feature extraction for classification of harmonic signals
NASA Astrophysics Data System (ADS)
William, Peter E.
In this dissertation, feature extraction algorithms have been developed for extraction of characteristic features from harmonic signals. The common theme for all developed algorithms is the simplicity in generating a significant set of features directly from the time domain harmonic signal. The features are a time domain representation of the composite, yet sparse, harmonic signature in the spectral domain. The algorithms are adequate for low-power unattended sensors which perform sensing, feature extraction, and classification in a standalone scenario. The first algorithm generates the characteristic features using only the duration between successive zero-crossing intervals. The second algorithm estimates the harmonics' amplitudes of the harmonic structure employing a simplified least squares method without the need to estimate the true harmonic parameters of the source signal. The third algorithm, resulting from a collaborative effort with Daniel White at the DSP Lab, University of Nebraska-Lincoln, presents an analog front end approach that utilizes a multichannel analog projection and integration to extract the sparse spectral features from the analog time domain signal. Classification is performed using a multilayer feedforward neural network. Evaluation of the proposed feature extraction algorithms for classification through the processing of several acoustic and vibration data sets (including military vehicles and rotating electric machines) with comparison to spectral features shows that, for harmonic signals, time domain features are simpler to extract and provide equivalent or improved reliability over the spectral features in both the detection probabilities and false alarm rate.
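A minimal sketch of the first algorithm's core idea, durations between successive zero crossings used as a feature vector, follows; the fixed feature length and zero-padding are assumptions, not the dissertation's exact layout.

```python
import numpy as np

def zero_crossing_features(x, n_intervals=16):
    """First n durations (in samples) between successive zero crossings."""
    signs = np.signbit(x)
    crossings = np.flatnonzero(signs[1:] != signs[:-1])  # sign-change indices
    intervals = np.diff(crossings)                       # inter-crossing durations
    out = np.zeros(n_intervals)                          # fixed-length feature vector
    out[: min(n_intervals, len(intervals))] = intervals[:n_intervals]
    return out
```

For a harmonic signal these durations encode the period and its subdivisions, which is why they can stand in for a sparse spectral signature without any transform.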
Research on Remote Sensing Image Classification Based on Feature Level Fusion
NASA Astrophysics Data System (ADS)
Yuan, L.; Zhu, G.
2018-04-01
Remote sensing image classification, as an important direction of remote sensing image processing and application, has been widely studied. However, existing classification algorithms still suffer from misclassification and missed points, which limits the final classification accuracy. In this paper, we selected Sentinel-1A and Landsat8 OLI images as data sources and propose a classification method based on feature-level fusion. We compare three feature-level fusion algorithms (Gram-Schmidt spectral sharpening, Principal Component Analysis transform and Brovey transform) and then select the best fused image for the classification experiments. In the classification process, we choose four image classification algorithms (minimum distance, Mahalanobis distance, Support Vector Machine and ISODATA) for the contrast experiment. We use overall classification precision and the Kappa coefficient as the classification accuracy evaluation criteria, and the four classification results of the fused image are analysed. The experimental results show that the fusion effect of Gram-Schmidt spectral sharpening is better than that of the other methods. Among the four classification algorithms, the fused image is best suited to Support Vector Machine classification, with an overall classification precision of 94.01% and a Kappa coefficient of 0.91. The image fused from Sentinel-1A and Landsat8 OLI not only has more spatial information and spectral texture characteristics but also enhances the distinguishing features of the images. The proposed method is beneficial to improving the accuracy and stability of remote sensing image classification.
Improved Surface Parameter Retrievals using AIRS/AMSU Data
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John
2008-01-01
The AIRS Science Team Version 5.0 retrieval algorithm became operational at the Goddard DAAC in July 2007, generating near real-time products from the analysis of AIRS/AMSU sounding data. This algorithm contains many significant theoretical advances over the AIRS Science Team Version 4.0 retrieval algorithm used previously. Two very significant developments of Version 5 are: 1) the development and implementation of an improved Radiative Transfer Algorithm (RTA) which allows for accurate treatment of non-Local Thermodynamic Equilibrium (non-LTE) effects on shortwave sounding channels; and 2) the development of methodology to obtain very accurate case-by-case product error estimates which are in turn used for quality control. Taken together, these theoretical improvements enabled a new methodology to be developed which further improves soundings in partially cloudy conditions. In this methodology, longwave CO2 channel observations in the spectral region 700 cm^-1 to 750 cm^-1 are used exclusively for cloud clearing purposes, while shortwave CO2 channels in the spectral region 2195 cm^-1 to 2395 cm^-1 are used for temperature sounding purposes. This allows for accurate temperature soundings under more difficult cloud conditions. This paper further improves on the methodology used in Version 5 to derive surface skin temperature and surface spectral emissivity from AIRS/AMSU observations. Now, following the approach used to improve tropospheric temperature profiles, surface skin temperature is also derived using only shortwave window channels. This produces improved surface parameters, both day and night, compared to what was obtained in Version 5. These in turn result in improved boundary layer temperatures and retrieved total O3 burden.
Data Retrieval Algorithm and Uncertainty Analysis for a Miniaturized, Laser Heterodyne Radiometer
NASA Astrophysics Data System (ADS)
Miller, J. H.; Melroy, H.; Wilson, E. L.; Clarke, G. B.
2013-12-01
In a collaboration between NASA Goddard Space Flight Center and George Washington University, a low-cost surface instrument is being developed that can continuously monitor key carbon cycle gases in the atmospheric column: carbon dioxide (CO2) and methane (CH4). The instrument is based on a miniaturized laser heterodyne radiometer (LHR) using near infrared (NIR) telecom lasers. Despite relatively weak absorption line strengths in this spectral region, spectrally-resolved atmospheric column absorptions for these two molecules fall in the range of 60-80%, and thus sensitive and precise measurements of column concentrations are possible. Further, because the LHR technique has the potential for sub-Doppler spectral resolution, the possibility exists for interrogating line shapes to extract altitude profiles of the greenhouse gases. From late 2012 through 2013 the instrument was deployed for a variety of field measurements, including at Park Falls, Wisconsin; Castle Airport near Atwater, California; and at the NOAA Mauna Loa Observatory in Hawaii. With each subsequent campaign, improvement in the figures of merit for the instrument (notably spectral sweep time and absorbance noise) has been observed. For the latter, the absorbance noise is approaching 0.002 optical density (OD) noise on a 1.8 OD signal. This presentation gives an overview of the measurement campaigns in the context of the data retrieval algorithm under development at GW for calculating column concentrations from the measurements. For light transmission through the atmosphere, it is necessary to account for the variation of pressure, temperature, composition, and refractive index through the atmosphere, which are all functions of latitude, longitude, time of day, altitude, etc. In our initial work we began with coding developed under the LOWTRAN and MODTRAN programs by the AFOSR (and others). We also assumed temperature and pressure profiles from the 1976 US Standard Atmosphere and used the US Naval Observatory database for local zenith angle calculations to initialize path trajectory calculations. In our newest version of the retrieval algorithm, the Python programming language module PySolar is used for the path geometry calculations. For temperature, pressure, and humidity profiles with altitude, we use the Modern-Era Retrospective Analysis for Research and Applications (MERRA) data, which are compiled every 6 hours. Spectral simulation is accomplished by integrating short-path segments along the trajectory using the SpecSyn spectral simulation suite developed at GW. Column concentrations are extracted by minimizing residuals between the observed and modeled spectra using the Nelder-Mead simplex algorithm as implemented in the SciPy Python module. We will also present an assessment of uncertainty in the reported concentrations arising from assumptions made in the meteorological data, LHR instrument and tracker noise, and radio frequency bandwidth, and describe additional future goals in instrument development and deployment targets.
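The retrieval step described above (minimizing residuals between observed and modeled spectra with the Nelder-Mead simplex as implemented in SciPy) can be sketched as follows. Here modeled_spectrum is a hypothetical stand-in for the SpecSyn forward model, and the starting point and tolerances are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def fit_columns(nu, observed, modeled_spectrum, x0=(1.0, 1.0)):
    """Fit scaling factors (e.g., CO2 and CH4 column amounts) to an observed spectrum.

    nu               : wavenumber grid
    observed         : measured transmission spectrum on that grid
    modeled_spectrum : callable (params, nu) -> modeled spectrum (forward model)
    """
    def residual(params):
        return np.sum((observed - modeled_spectrum(params, nu)) ** 2)

    result = minimize(residual, x0, method="Nelder-Mead",
                      options={"xatol": 1e-6, "fatol": 1e-9})
    return result.x   # best-fit column parameters
```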
NASA Astrophysics Data System (ADS)
Bruce, L. M.; Ball, J. E.; Evangilista, P.; Stohlgren, T. J.
2006-12-01
Nonnative invasive species adversely impact ecosystems, causing loss of native plant diversity, species extinction, and impairment of wildlife habitats. As a result, over the past decade federal and state agencies and nongovernmental organizations have begun to work more closely together to address the management of invasive species. In 2005, approximately 500M dollars was budgeted by U.S. Federal Agencies for the management of invasive species. Despite extensive expenditures, most of the methods used to detect and quantify the distribution of these invaders are ad hoc, at best. Likewise, decisions on the type of management techniques to be used, or evaluations of the success of these methods, are typically non-systematic. More efficient methods to detect or predict the occurrence of these species, as well as the incorporation of this knowledge into decision support systems, are greatly needed. In this project, rapid prototyping capabilities (RPC) are utilized for an invasive species application. More precisely, our recently developed analysis techniques for hyperspectral imagery are being prototyped for inclusion in the national Invasive Species Forecasting System (ISFS). The current ecological forecasting tools in ISFS will be compared to our hyperspectral-based invasives prediction algorithms to determine if/how the newer algorithms enhance the performance of ISFS. The PIs have researched the use of remotely sensed multispectral and hyperspectral reflectance data for the detection of invasive vegetative species, and have designed, implemented, and benchmarked various target detection systems that utilize remotely sensed data. These systems have been designed to make decisions based on a variety of remotely sensed data, including high spectral/spatial resolution hyperspectral signatures (1000's of spectral bands, such as those measured using ASD handheld devices), moderate spectral/spatial resolution hyperspectral images (100's of spectral bands, such as Hyperion imagery), and low spectral/spatial resolution images (such as MODIS imagery). These algorithms include hyperspectral exploitation methods such as stepwise-LDA band selection, optimized spectral band grouping, and stepwise PCA component selection. The PIs have extensive experience with combining these recently-developed methods with conventional classifiers to form an end-to-end automated target recognition (ATR) system for detecting invasive species. The outputs of these systems can be invasive prediction maps, as well as quantitative accuracy assessments such as confusion matrices, user accuracies, and producer accuracies. For all of these research endeavors, the PIs have developed numerous advanced signal and image processing methodologies, as well as a suite of associated software modules. However, the use of the prototype software modules has been primarily contained to Mississippi State University. The project described in this presentation and paper will enable future systematic inclusion of these software modules into a decision support system (DSS) with national scope.
Ma, Tao; Wang, Fen; Cheng, Jianjun; Yu, Yang; Chen, Xiaoyun
2016-01-01
The development of intrusion detection systems (IDS) that allow routers and network defence systems to detect malicious network traffic disguised as network protocols or normal access is a critical challenge. This paper proposes a novel approach called SCDNN, which combines spectral clustering (SC) and deep neural network (DNN) algorithms. First, the dataset is divided into k subsets based on sample similarity using cluster centres, as in SC. Next, the distance between data points in a testing set and the training set is measured based on similarity features and is fed into the deep neural network algorithm for intrusion detection. Six KDD-Cup99 and NSL-KDD datasets and a sensor network dataset were employed to test the performance of the model. The experimental results indicate that the SCDNN classifier not only performs better than backpropagation neural network (BPNN), support vector machine (SVM), random forest (RF) and Bayes tree models in detection accuracy and in the types of abnormal attacks found, but also provides an effective tool for the study and analysis of intrusion detection in large networks. PMID:27754380
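A hedged sketch of the SCDNN pipeline using scikit-learn: spectral clustering partitions the training set, a separate network is trained per subset, and test points are routed to the nearest cluster centre. The MLPClassifier stands in for the paper's deep neural network, and all hyperparameters are placeholders.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.neural_network import MLPClassifier

def scdnn_sketch(X_train, y_train, X_test, k=4):
    """Cluster the training set spectrally, then train one network per subset."""
    labels = SpectralClustering(n_clusters=k, affinity="rbf").fit_predict(X_train)
    centres = np.stack([X_train[labels == c].mean(axis=0) for c in range(k)])
    nets = [MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
            .fit(X_train[labels == c], y_train[labels == c]) for c in range(k)]
    # Route each test point to the subset whose centre is nearest
    nearest = np.argmin(((X_test[:, None, :] - centres) ** 2).sum(-1), axis=1)
    return np.array([nets[c].predict(x[None, :])[0]
                     for c, x in zip(nearest, X_test)])
```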
Matched-filter algorithm for subpixel spectral detection in hyperspectral image data
NASA Astrophysics Data System (ADS)
Borough, Howard C.
1991-11-01
Hyperspectral imagery, spatial imagery with associated wavelength data for every pixel, offers significant potential for improved detection and identification of certain classes of targets. The ability to make spectral identifications of objects which only partially fill a single pixel (due to range or small size) is of considerable interest. Multiband imagery such as Landsat's 5- and 7-band imagery has demonstrated significant utility in the past. Hyperspectral imaging systems with hundreds of spectral bands offer improved performance. To explore the application of different subpixel spectral detection algorithms, a synthesized set of hyperspectral image data (hypercubes) was generated utilizing NASA earth resources and other spectral data. The data were modified using LOWTRAN 7 to model the illumination, atmospheric contributions, attenuations, and viewing geometry to represent a nadir view from 10,000 ft altitude. The base hypercube (HC) represented 16 by 21 spatial pixels with 101 wavelength samples from 0.5 to 2.5 micrometers for each pixel. Insertions were made into the base data with random location, random pixel percentage, and random material. Fifteen different hypercubes were generated for blind testing of candidate algorithms. An algorithm utilizing a matched filter in the spectral dimension proved surprisingly good, yielding 100% detections for pixels filled greater than 40% with a standard camouflage paint, and a 50% probability of detection for pixels filled 20% with the paint, with no false alarms. The false alarm rate as a function of the number of spectral bands, in the range from 101 down to 12 bands, was measured and found to increase from zero to 50%, illustrating the value of a large number of spectral bands. This test was on imagery without system noise; the next step is to incorporate typical system noise sources.
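The spectral matched filter can be written in a few lines. This sketch uses the standard covariance-whitened form with diagonal loading for numerical stability; the 1991 implementation details are not given in the abstract, so treat this as the generic technique rather than that paper's exact filter.

```python
import numpy as np

def matched_filter_scores(cube, target):
    """Classic spectral matched filter: whiten by the background covariance.

    cube   : (rows, cols, bands) hyperspectral image
    target : (bands,) reference spectrum to detect
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(bands)  # regularized covariance
    w = np.linalg.solve(cov, target - mu)
    w /= (target - mu) @ w            # unit response for a fully filled target pixel
    return ((X - mu) @ w).reshape(rows, cols)
```

With this normalization the score approximates the target fill fraction, which matches the subpixel-abundance framing of the study.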
NASA Astrophysics Data System (ADS)
Barnhart, B. L.; Eichinger, W. E.; Prueger, J. H.
2010-12-01
Hilbert-Huang transform (HHT) is a relatively new data analysis tool which is used to analyze nonstationary and nonlinear time series data. It consists of an algorithm, called empirical mode decomposition (EMD), which extracts the cyclic components embedded within time series data, as well as Hilbert spectral analysis (HSA) which displays the time and frequency dependent energy contributions from each component in the form of a spectrogram. The method can be considered a generalized form of Fourier analysis which can describe the intrinsic cycles of data with basis functions whose amplitudes and phases may vary with time. The HHT will be introduced and compared to current spectral analysis tools such as Fourier analysis, short-time Fourier analysis, wavelet analysis and Wigner-Ville distributions. A number of applications are also presented which demonstrate the strengths and limitations of the tool, including analyzing sunspot number variability and total solar irradiance proxies as well as global averaged temperature and carbon dioxide concentration. Also, near-surface atmospheric quantities such as temperature and wind velocity are analyzed to demonstrate the nonstationarity of the atmosphere.
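A compact HHT sketch: EMD extracts the intrinsic mode functions (here via the third-party PyEMD package, an assumption about tooling), and the Hilbert transform yields each IMF's instantaneous amplitude and frequency for the spectrogram.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD   # third-party "EMD-signal" package; an assumption here

def hilbert_spectrum(x, fs):
    """Decompose x into IMFs, then get instantaneous amplitude/frequency of each."""
    imfs = EMD()(x)                               # (n_imfs, n_samples)
    analytic = hilbert(imfs, axis=1)              # analytic signal per IMF
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic), axis=1)
    inst_freq = np.diff(phase, axis=1) * fs / (2 * np.pi)   # Hz
    return imfs, amplitude, inst_freq
```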
Collewet, Guylaine; Moussaoui, Saïd; Deligny, Cécile; Lucas, Tiphaine; Idier, Jérôme
2018-06-01
Multi-tissue partial volume estimation in MRI images is investigated from a viewpoint related to spectral unmixing as used in hyperspectral imaging. The main contribution of this paper is twofold. It firstly proposes a theoretical analysis of the statistical optimality conditions of the proportion estimation problem, which, in the context of multi-contrast MRI data acquisition, allows the imaging sequence parameters to be set appropriately. Secondly, an efficient proportion quantification algorithm is proposed, based on the minimisation of a penalised least-squares criterion incorporating a regularity constraint on the spatial distribution of the proportions. The resulting developments are discussed using empirical simulations. The practical usefulness of the spectral unmixing approach for partial volume quantification in MRI is illustrated through an application to food analysis on the proving of a Danish pastry. Copyright © 2018 Elsevier Inc. All rights reserved.
Recent advances in the modeling of plasmas with the Particle-In-Cell methods
NASA Astrophysics Data System (ADS)
Vay, Jean-Luc; Lehe, Remi; Vincenti, Henri; Godfrey, Brendan; Lee, Patrick; Haber, Irv
2015-11-01
The Particle-In-Cell (PIC) approach is the method of choice for self-consistent simulations of plasmas from first principles. The fundamentals of the PIC method were established decades ago but improvements or variations are continuously being proposed. We report on several recent advances in PIC related algorithms, including: (a) detailed analysis of the numerical Cherenkov instability and its remediation, (b) analytic pseudo-spectral electromagnetic solvers in Cartesian and cylindrical (with azimuthal modes decomposition) geometries, (c) arbitrary-order finite-difference and generalized pseudo-spectral Maxwell solvers, (d) novel analysis of Maxwell's solvers' stencil variation and truncation, in application to domain decomposition strategies and implementation of Perfectly Matched Layers in high-order and pseudo-spectral solvers. Work supported by US-DOE Contracts DE-AC02-05CH11231 and the US-DOE SciDAC program ComPASS. Used resources of NERSC, supported by US-DOE Contract DE-AC02-05CH11231.
NASA Astrophysics Data System (ADS)
Zhang, Min; Gong, Zhaoning; Zhao, Wenji; Pu, Ruiliang; Liu, Ke
2016-01-01
Mapping vegetation abundance using remote sensing data is an efficient means of detecting changes in an eco-environment. With Landsat-8 Operational Land Imager (OLI) imagery acquired on July 31, 2013, both linear spectral mixture analysis (LSMA) and multinomial logit model (MNLM) methods were applied to estimate and assess the vegetation abundance in the Wild Duck Lake Wetland in Beijing, China. To improve vegetation abundance mapping and increase the number of endmembers in the spectral mixture analysis, the normalized difference vegetation index was extracted from the OLI imagery and used along with the seven reflective bands of the OLI data for estimating the vegetation abundance. Five endmembers were selected: terrestrial plants, aquatic plants, bare soil, high albedo, and low albedo. The vegetation abundance mapping results from the Landsat OLI data were finally evaluated using WorldView-2 multispectral imagery. Similar spatial patterns of vegetation abundance were produced by both the fully constrained LSMA algorithm and the MNLM method: higher vegetation abundance levels were distributed in agricultural and riparian areas, and lower levels in urban/built-up areas. The experimental results also indicate that the MNLM model outperformed the LSMA algorithm, with a smaller root mean square error (0.0152 versus 0.0252) and a higher coefficient of determination (0.7856 versus 0.7214), as the MNLM model could handle the nonlinear reflection phenomenon of mixed pixels better than the LSMA.
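For reference, the "fully constrained" LSMA used above imposes nonnegativity and sum-to-one constraints on the abundances. A common way to solve it is the weighted-row NNLS augmentation shown below; this is the generic technique, not necessarily the authors' implementation, and delta is a tuning assumption.

```python
import numpy as np
from scipy.optimize import nnls

def fcls(E, y, delta=1e3):
    """Fully constrained least squares: abundances >= 0 and sum ~= 1.

    E : (bands, k) endmember matrix;  y : (bands,) pixel spectrum.
    A heavily weighted row of ones appended to E softly enforces sum-to-one,
    while nnls enforces nonnegativity exactly.
    """
    _, k = E.shape
    E_aug = np.vstack([delta * np.ones((1, k)), E])
    y_aug = np.concatenate([[delta], y])
    abundances, _ = nnls(E_aug, y_aug)
    return abundances
```

Larger delta enforces the sum-to-one constraint more strictly at some cost in fit to the spectrum itself.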
Adiabatic Quantum Search in Open Systems.
Wild, Dominik S; Gopalakrishnan, Sarang; Knap, Michael; Yao, Norman Y; Lukin, Mikhail D
2016-10-07
Adiabatic quantum algorithms represent a promising approach to universal quantum computation. In isolated systems, a key limitation to such algorithms is the presence of avoided level crossings, where gaps become extremely small. In open quantum systems, the fundamental robustness of adiabatic algorithms remains unresolved. Here, we study the dynamics near an avoided level crossing associated with the adiabatic quantum search algorithm, when the system is coupled to a generic environment. At zero temperature, we find that the algorithm remains scalable provided the noise spectral density of the environment decays sufficiently fast at low frequencies. By contrast, higher order scattering processes render the algorithm inefficient at any finite temperature regardless of the spectral density, implying that no quantum speedup can be achieved. Extensions and implications for other adiabatic quantum algorithms will be discussed.
Compression of multispectral Landsat imagery using the Embedded Zerotree Wavelet (EZW) algorithm
NASA Technical Reports Server (NTRS)
Shapiro, Jerome M.; Martucci, Stephen A.; Czigler, Martin
1994-01-01
The Embedded Zerotree Wavelet (EZW) algorithm has proven to be an extremely efficient and flexible compression algorithm for low bit rate image coding. The embedding algorithm orders the bits in the bit stream by numerical importance, and thus a given code contains all lower rate encodings of the same algorithm. Therefore, precise bit rate control is achievable and a target rate or distortion metric can be met exactly. Furthermore, the technique is fully image adaptive. An algorithm for multispectral image compression which combines the spectral redundancy removal properties of the image-dependent Karhunen-Loeve Transform (KLT) with the efficiency, controllability, and adaptivity of the embedded zerotree wavelet algorithm is presented. Results are shown which illustrate the advantage of jointly encoding spectral components using the KLT and EZW.
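The KLT stage amounts to an eigendecomposition of the spectral covariance; a minimal NumPy sketch follows. Each decorrelated component image would then be coded with EZW, a step not shown here.

```python
import numpy as np

def klt_spectral(cube):
    """Decorrelate spectral bands with the data-dependent Karhunen-Loeve transform.

    cube : (rows, cols, bands) multispectral image.
    Returns the KLT component images (strongest first) and the basis used.
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    X -= X.mean(axis=0)                             # center each band
    eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(eigvals)[::-1]               # strongest components first
    components = X @ eigvecs[:, order]              # each column is one KLT "band"
    return components.reshape(rows, cols, bands), eigvecs[:, order]
```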
A three-dimensional spectral algorithm for simulations of transition and turbulence
NASA Technical Reports Server (NTRS)
Zang, T. A.; Hussaini, M. Y.
1985-01-01
A spectral algorithm for simulating three dimensional, incompressible, parallel shear flows is described. It applies to the channel, to the parallel boundary layer, and to other shear flows with one wall bounded and two periodic directions. Representative applications to the channel and to the heated boundary layer are presented.
Land, P E; Haigh, J D
1997-12-20
In algorithms for the atmospheric correction of visible and near-IR satellite observations of the Earth's surface, it is generally assumed that the spectral variation of aerosol optical depth is characterized by an Ångström power law or similar dependence. In an iterative fitting algorithm for the atmospheric correction of ocean color imagery over case 2 waters, this assumption leads to an inability to retrieve the aerosol type, and to attributing to aerosol spectral variation spectral effects that are actually caused by the water contents. An improvement to this algorithm is described in which the spectral variation of optical depth is calculated as a function of aerosol type and relative humidity, and an attempt is made to retrieve the relative humidity in addition to the aerosol type. The aerosol is treated as a mixture of aerosol components (e.g., soot) rather than of aerosol types (e.g., urban). We demonstrate the improvement over the previous method by using simulated case 1 and case 2 Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data, although the retrieval of relative humidity was not successful.
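For reference, the power law referred to above is typically written as follows, with λ0 a reference wavelength and α the Ångström exponent:

```latex
\tau_a(\lambda) \;=\; \tau_a(\lambda_0)\left(\frac{\lambda}{\lambda_0}\right)^{-\alpha}
```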
Spectral Anonymization of Data
Lasko, Thomas A.; Vinterbo, Staal A.
2011-01-01
The goal of data anonymization is to allow the release of scientifically useful data in a form that protects the privacy of its subjects. This requires more than simply removing personal identifiers from the data, because an attacker can still use auxiliary information to infer sensitive individual information. Additional perturbation is necessary to prevent these inferences, and the challenge is to perturb the data in a way that preserves its analytic utility. No existing anonymization algorithm provides both perfect privacy protection and perfect analytic utility. We make the new observation that anonymization algorithms are not required to operate in the original vector-space basis of the data, and many algorithms can be improved by operating in a judiciously chosen alternate basis. A spectral basis derived from the data’s eigenvectors is one that can provide substantial improvement. We introduce the term spectral anonymization to refer to an algorithm that uses a spectral basis for anonymization, and we give two illustrative examples. We also propose new measures of privacy protection that are more general and more informative than existing measures, and a principled reference standard with which to define adequate privacy protection. PMID:21373375
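As a toy illustration of operating in a spectral basis (not one of the paper's two examples), the sketch below permutes each principal component coordinate independently across records and maps back; this preserves the per-component marginals while breaking linkage between records and individuals.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_anonymize(X):
    """Toy spectral-basis anonymization: perturb records in the eigenvector basis.

    Projects the data onto the eigenvectors of its covariance (via SVD),
    independently permutes each spectral coordinate across records, and
    maps back to the original basis.
    """
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    S = (X - mu) @ Vt.T                  # coordinates in the spectral basis
    for j in range(S.shape[1]):          # permute each component separately
        S[:, j] = rng.permutation(S[:, j])
    return S @ Vt + mu
```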
Hazardous gas detection for FTIR-based hyperspectral imaging system using DNN and CNN
NASA Astrophysics Data System (ADS)
Kim, Yong Chan; Yu, Hyeong-Geun; Lee, Jae-Hoon; Park, Dong-Jo; Nam, Hyun-Woo
2017-10-01
Recently, hyperspectral imaging systems (HIS) with a Fourier Transform InfraRed (FTIR) spectrometer have been widely used due to their strengths in detecting gaseous fumes. Even though numerous algorithms for detecting gaseous fumes have already been studied, it is still difficult to detect target gases properly because of atmospheric interference substances and the unclear characteristics of low concentration gases. In this paper, we propose detection algorithms for classifying hazardous gases using a deep neural network (DNN) and a convolutional neural network (CNN). In both the DNN and the CNN, spectral signal preprocessing, e.g., offset, noise, and baseline removal, is carried out. In the DNN algorithm, the preprocessed spectral signals are used as feature maps of a five-layer DNN, which is trained with stochastic gradient descent (SGD; batch size 50) and dropout regularization (ratio 0.7). In the CNN algorithm, the preprocessed spectral signals are trained with 1 × 3 convolution layers and 1 × 2 max-pooling layers. As a result, the proposed algorithms improve the classification accuracy rate by 1.5% over the existing support vector machine (SVM) algorithm for detecting and classifying hazardous gases.
Spectral multigrid methods for the solution of homogeneous turbulence problems
NASA Technical Reports Server (NTRS)
Erlebacher, G.; Zang, T. A.; Hussaini, M. Y.
1987-01-01
New three-dimensional spectral multigrid algorithms are analyzed and implemented to solve the variable coefficient Helmholtz equation. Periodicity is assumed in all three directions which leads to a Fourier collocation representation. Convergence rates are theoretically predicted and confirmed through numerical tests. Residual averaging results in a spectral radius of 0.2 for the variable coefficient Poisson equation. In general, non-stationary Richardson must be used for the Helmholtz equation. The algorithms developed are applied to the large-eddy simulation of incompressible isotropic turbulence.
2011-04-01
Sensitive Dual Color In Vivo Bioluminescence Imaging Using a New Red Codon Optimized Firefly Luciferase and a Green Click Beetle Luciferase. Laura...20 nm). Spectral unmixing algorithms were applied to the images, where good separation of signals was observed. Furthermore, HEK293 cells that...spectral emissions using a suitable spectral unmixing algorithm. This new D-luciferin-dependent reporter gene couplet opens up the possibility in the future
NASA Astrophysics Data System (ADS)
Das, Ranabir; Kumar, Anil
2004-10-01
Quantum information processing has been effectively demonstrated on a small number of qubits by nuclear magnetic resonance. An important subroutine in any computation is the readout of the output. "Spectral implementation", originally suggested by Z. L. Madi, R. Bruschweiler, and R. R. Ernst [J. Chem. Phys. 109, 10603 (1999)], provides an elegant method of readout with the use of an extra "observer" qubit. At the end of the computation, detection of the observer qubit provides the output via the multiplet structure of its spectrum. In spectral implementation by a two-dimensional experiment, the observer qubit retains the memory of the input state during computation, thereby providing correlated information on input and output in the same spectrum. Spectral implementations of Grover's search algorithm, approximate quantum counting, a modified version of the Bernstein-Vazirani problem, and Hogg's algorithm are demonstrated here in three- and four-qubit systems.
Zhang, Shang; Dong, Yuhan; Fu, Hongyan; Huang, Shao-Lun; Zhang, Lin
2018-02-22
The miniaturization of spectrometers can broaden the application area of spectrometry, which has huge academic and industrial value. Among various miniaturization approaches, filter-based miniaturization is a promising implementation that utilizes broadband filters with distinct transmission functions. Mathematically, filter-based spectral reconstruction can be modeled as solving a system of linear equations. In this paper, we propose a spectral reconstruction algorithm based on sparse optimization and dictionary learning. To verify the feasibility of the reconstruction algorithm, we design and implement a simple prototype of a filter-based miniature spectrometer. The experimental results demonstrate that sparse optimization is well applicable to spectral reconstruction whether the spectra are directly sparse or not; for spectra that are not directly sparse, their sparsity can be enhanced by dictionary learning. In conclusion, the proposed approach has a bright application prospect for fabricating a practical miniature spectrometer.
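The reconstruction described above amounts to a sparse linear inverse problem; a minimal sketch with scikit-learn's Lasso follows. The optional dictionary D (for spectra that are not directly sparse), the regularization strength, and the positivity constraint are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def reconstruct_spectrum(A, b, D=None, alpha=1e-3):
    """Recover a spectrum s from broadband-filter measurements b = A s.

    A : (m_filters, n_bins) filter transmission matrix
    b : (m_filters,) measured filter outputs
    D : optional (n_bins, n_atoms) learned dictionary in which s = D c is sparse
    """
    design = A if D is None else A @ D
    # Lasso minimizes (1/2m)||b - design @ c||^2 + alpha * ||c||_1
    model = Lasso(alpha=alpha, positive=True, max_iter=50000)
    model.fit(design, b)
    code = model.coef_
    return code if D is None else D @ code
```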
Cao, Mingshu; Fraser, Karl; Rasmussen, Susanne
2013-10-31
Mass spectrometry coupled with chromatography has become the major technical platform in metabolomics. Aided by peak detection algorithms, the detected signals are characterized by mass-over-charge ratio (m/z) and retention time. Chemical identities often remain elusive for the majority of the signals. Multi-stage mass spectrometry based on electrospray ionization (ESI) allows collision-induced dissociation (CID) fragmentation of selected precursor ions. These fragment ions can assist in structural inference for metabolites of low molecular weight. Computational investigation of fragmentation spectra has increasingly received attention in metabolomics, and various public databases house such data. We have developed an R package, "iontree", that can capture, store and analyze MS2 and MS3 mass spectral data from high throughput metabolomics experiments. The package includes functions for ion tree construction, an algorithm (distMS2) for MS2 spectral comparison, and tools for building platform-independent ion tree (MS2/MS3) libraries. We have demonstrated the use of the package for the systematic analysis and annotation of fragmentation spectra collected on various metabolomics platforms, including direct infusion mass spectrometry and liquid chromatography coupled with either low resolution or high resolution mass spectrometry. Assisted by the developed computational tools, we have demonstrated that spectral trees can provide informative evidence, complementary to retention time and accurate mass, to aid in annotating unknown peaks. These experimental spectral trees, once subjected to a quality control process, can be used for querying public MS2 databases or for de novo interpretation. The putatively annotated spectral trees can be readily incorporated into reference libraries for routine identification of metabolites.
Narayanan, Ram M; Pooler, Richard K; Martone, Anthony F; Gallagher, Kyle A; Sherbondy, Kelly D
2018-02-22
This paper describes a multichannel super-heterodyne signal analyzer, called the Spectrum Analysis Solution (SAS), which performs multi-purpose spectrum sensing to support spectrally adaptive and cognitive radar applications. The SAS operates from ultrahigh frequency (UHF) to the S-band and features a wideband channel with eight narrowband channels. The wideband channel acts as a monitoring channel that can be used to tune the instantaneous band of the narrowband channels to areas of interest in the spectrum. The data collected from the SAS has been utilized to develop spectrum sensing algorithms for the budding field of spectrum sharing (SS) radar. Bandwidth (BW), average total power, percent occupancy (PO), signal-to-interference-plus-noise ratio (SINR), and power spectral entropy (PSE) have been examined as metrics for the characterization of the spectrum. These metrics are utilized to determine a contiguous optimal sub-band (OSB) for a SS radar transmission in a given spectrum for different modalities. Three OSB algorithms are presented and evaluated: the spectrum sensing multi objective (SS-MO), the spectrum sensing with brute force PSE (SS-BFE), and the spectrum sensing multi-objective with brute force PSE (SS-MO-BFE).
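Of the metrics listed, power spectral entropy is the least standard, so a minimal sketch under common conventions follows: the Welch PSD is normalized to a probability distribution and its Shannon entropy is taken in bits. The segment length and normalization choices are assumptions, not the SAS implementation.

```python
import numpy as np
from scipy.signal import welch

def power_spectral_entropy(x, fs):
    """Shannon entropy (bits) of the normalized Welch power spectral density."""
    _, psd = welch(x, fs=fs, nperseg=min(1024, len(x)))
    p = psd / psd.sum()        # treat the PSD as a probability distribution
    p = p[p > 0]               # avoid log(0)
    return -np.sum(p * np.log2(p))
```

Low entropy flags sub-bands dominated by a few strong emitters, while high entropy indicates noise-like, evenly spread occupancy, which is why PSE is useful for picking an optimal sub-band.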
Surface emissivity and temperature retrieval for a hyperspectral sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borel, C.C.
1998-12-01
With the growing use of hyper-spectral imagers, e.g., AVIRIS in the visible and short-wave infrared, there is hope of using such instruments in the mid-wave and thermal IR (TIR) some day. The author believes that this will make it possible to move beyond the present temperature-emissivity separation algorithms, using methods which take advantage of the many channels available in hyper-spectral imagers. A simple fact used in devising a novel algorithm is that a typical surface emissivity spectrum is rather smooth compared to the spectral features introduced by the atmosphere. Thus, an iterative solution technique can be devised which retrieves emissivity spectra based on spectral smoothness. To make the emissivities realistic, atmospheric parameters are varied using approximations, look-up tables derived from a radiative transfer code, and spectral libraries. One such iterative algorithm solves the radiative transfer equation for the radiance at the sensor for the unknown emissivity and uses the blackbody temperature computed in an atmospheric window as a guess for the unknown surface temperature. By varying the surface temperature over a small range, a series of emissivity spectra are calculated. The one with the smoothest characteristic is chosen. The algorithm was tested on synthetic data using MODTRAN and the Salisbury emissivity database.
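The smoothness-driven retrieval loop is easy to illustrate. Below is a minimal sketch assuming the atmospheric terms are known and folded into a ground-leaving radiance, a strong simplification of the MODTRAN-based scheme described above; the synthetic numbers are illustrative only.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(wl_um, T):
    """Blackbody spectral radiance, W/(m^2 sr um)."""
    wl = wl_um * 1e-6
    return (2*H*C**2 / wl**5) / np.expm1(H*C / (wl*KB*T)) * 1e-6

def retrieve_emissivity(wl_um, L_ground, L_down, T_guess, dT=2.0, n=41):
    """Smoothness criterion: for each trial surface temperature, invert
    L_ground = eps*B(T) + (1-eps)*L_down for eps, then keep the spectrally
    smoothest candidate (smallest mean squared second difference)."""
    best_eps, best_rough = None, np.inf
    for T in np.linspace(T_guess - dT, T_guess + dT, n):
        eps = (L_ground - L_down) / (planck(wl_um, T) - L_down)
        rough = np.mean(np.diff(eps, n=2)**2)
        if rough < best_rough:
            best_eps, best_rough = eps, rough
    return best_eps

# Synthetic test: smooth emissivity, spectrally structured downwelling radiance
wl = np.linspace(8.0, 13.0, 200)
true_eps = 0.96 - 0.02*np.exp(-(wl - 9.5)**2)
Ld = 0.2*planck(wl, 260.0)*(1 + 0.3*np.sin(40*wl))   # mimics atmospheric lines
Lg = true_eps*planck(wl, 300.0) + (1 - true_eps)*Ld
eps = retrieve_emissivity(wl, Lg, Ld, T_guess=299.0)
print(np.max(np.abs(eps - true_eps)))                # ~0: correct T selected
```

Wrong trial temperatures leave residual atmospheric line structure in the candidate emissivity, which the second-difference roughness penalizes, so the correct temperature and a smooth emissivity are selected together.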
Worm Algorithm simulations of the hole dynamics in the t-J model
NASA Astrophysics Data System (ADS)
Prokof'ev, Nikolai; Ruebenacker, Oliver
2001-03-01
In the limit of small J << t, relevant for HTSC materials and Mott-Hubbard systems, computer simulations have to be performed for large systems and at low temperatures. Despite convincing evidence against spin-charge separation obtained by various methods for J > 0.4t, there is an ongoing argument that at smaller J spin-charge separation is still possible. Worm algorithm Monte Carlo simulations of the hole Green function for 0.1 < J/t < 0.4 were performed on lattices with up to 32x32 sites, and at temperature J/T = 40 (for the largest size). Spectral analysis reveals a single, delta-function-sharp quasiparticle peak at the lowest edge of the spectrum and two distinct peaks above it at all studied J. We rule out the possibility of spin-charge separation in this parameter range, and present what is apparently the hole spectral function in the thermodynamic limit.
NASA Astrophysics Data System (ADS)
Zakharchenko, V. D.; Kovalenko, I. G.
2014-05-01
A new method for the line-of-sight velocity estimation of a high-speed near-Earth object (asteroid, meteorite) is suggested. The method is based on the use of the fractional, one-half order derivative of the Doppler signal. The suggested algorithm is much simpler and more economical than the classical one, and it appears preferable for use in orbital weapon systems for threat response. Application of fractional differentiation to quick evaluation of the mean frequency location of the reflected Doppler signal is justified. The method allows an assessment of the mean frequency in the time domain without spectral analysis. An algorithm structure for real-time estimation is presented. Velocity resolution estimates are made for typical asteroids in the X-band. It is shown that the wait time can be shortened by orders of magnitude compared with that of standard spectral processing.
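The key identity is that, for an analytic signal, the energy of its half-order derivative divided by the signal energy estimates the mean angular frequency, entirely in the time domain. A Grünwald-Letnikov sketch follows; the sampling rate, tone frequency and transient length are illustrative, not the paper's values.

```python
import numpy as np

def gl_half_derivative(x, dt):
    """Grünwald-Letnikov fractional derivative of order 1/2 (time domain)."""
    n = len(x)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):              # w_k = w_{k-1} * (k - 1 - 0.5) / k
        w[k] = w[k-1] * (k - 1.5) / k
    # causal convolution of the signal with the GL weights
    return np.convolve(x, w)[:n] / np.sqrt(dt)

fs = 1e4
t = np.arange(4096) / fs
s = np.exp(2j*np.pi*750.0*t)           # analytic test tone at 750 Hz
d = gl_half_derivative(s, 1/fs)[256:]  # drop the GL start-up transient
omega = np.sum(np.abs(d)**2) / np.sum(np.abs(s[256:])**2)
print(omega / (2*np.pi))               # ~750 Hz (small discrete-operator bias), no FFT
```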
Multi-species Identification of Polymorphic Peptide Variants via Propagation in Spectral Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Na, Seungjin; Payne, Samuel H.; Bandeira, Nuno
The spectral networks approach enables the detection of pairs of spectra from related peptides and thus allows for the propagation of annotations from identified peptides to unidentified spectra. Beyond allowing for unbiased discovery of unexpected post-translational modifications, spectral networks are also applicable to multi-species comparative proteomics or metaproteomics to identify numerous orthologous versions of a protein. We present algorithmic and statistical advances in spectral networks that have made it possible to rigorously assess the statistical significance of spectral pairs and accurately estimate the error rate of identifications via propagation. In the analysis of three related Cyanothece species, a model organism for biohydrogen production, spectral networks identified peptides with highly divergent sequences with up to dozens of variants per peptide, including many novel peptides in species that lack a sequenced genome. Furthermore, spectral networks strongly suggested the presence of novel peptides even in genomically characterized species (i.e. missing from databases) in that a significant portion of unidentified multi-species networks included at least two polymorphic peptide variants.
NASA Technical Reports Server (NTRS)
Gao, Bo-Cai; Montes, Marcos J.; Davis, Curtiss O.
2003-01-01
This SIMBIOS contract supports several activities over its three-year time-span. These include certain computational aspects of atmospheric correction, such as the modification of our hyperspectral atmospheric correction algorithm Tafkaa for various multi-spectral instruments, including SeaWiFS, MODIS, and GLI. Additionally, since absorbing aerosols are becoming common in many coastal areas, we are performing model calculations to incorporate various absorbing aerosol models into the tables used by our Tafkaa atmospheric correction algorithm. Finally, we have developed algorithms to use MODIS data to characterize thin cirrus effects on aerosol retrieval.
Spectral methods to detect surface mines
NASA Astrophysics Data System (ADS)
Winter, Edwin M.; Schatten Silvious, Miranda
2008-04-01
Over the past five years, advances have been made in the spectral detection of surface mines under minefield detection programs at the U. S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD). The problem of detecting surface land mines ranges from the relatively simple, the detection of large anti-vehicle mines on bare soil, to the very difficult, the detection of anti-personnel mines in thick vegetation. While spatial and spectral approaches can be applied to the detection of surface mines, spatial-only detection requires many pixels-on-target such that the mine is actually imaged and shape-based features can be exploited. This method is unreliable in vegetated areas because only part of the mine may be exposed, while spectral detection is possible without the mine being resolved. At NVESD, hyperspectral and multi-spectral sensors throughout the reflection and thermal spectral regimes have been applied to the mine detection problem. Data has been collected on mines in forest and desert regions and algorithms have been developed both to detect the mines as anomalies and to detect the mines based on their spectral signature. In addition to the detection of individual mines, algorithms have been developed to exploit the similarities of mines in a minefield to improve their detection probability. In this paper, the types of spectral data collected over the past five years will be summarized along with the advances in algorithm development.
An Introduction to Computational Physics - 2nd Edition
NASA Astrophysics Data System (ADS)
Pang, Tao
2006-01-01
Preface to first edition; Preface; Acknowledgements; 1. Introduction; 2. Approximation of a function; 3. Numerical calculus; 4. Ordinary differential equations; 5. Numerical methods for matrices; 6. Spectral analysis; 7. Partial differential equations; 8. Molecular dynamics simulations; 9. Modeling continuous systems; 10. Monte Carlo simulations; 11. Genetic algorithm and programming; 12. Numerical renormalization; References; Index.
Techniques for the Analysis of Spectral and Orbital Congestion in Space Systems.
1984-03-01
Appendix 29 gives the appropriate equations for the two cases, and provides algorithms for polarization isolation, topocentric and geocentric ... The PDP form is maintained by MITRE Dept. D97, which provides services to run the program when staffing permits. NASA Lewis has used the results in a ...
Demosaicking for full motion video 9-band SWIR sensor
NASA Astrophysics Data System (ADS)
Kanaev, Andrey V.; Rawhouser, Marjorie; Kutteruf, Mary R.; Yetzbacher, Michael K.; DePrenger, Michael J.; Novak, Kyle M.; Miller, Corey A.; Miller, Christopher W.
2014-05-01
Short wave infrared (SWIR) spectral imaging systems are vital for Intelligence, Surveillance, and Reconnaissance (ISR) applications because of their abilities to autonomously detect targets and classify materials. Typically, spectral imagers are incapable of providing Full Motion Video (FMV) because of their reliance on line scanning. We enable FMV capability for a SWIR multi-spectral camera by creating a repeating pattern of 3x3 spectral filters on a staring focal plane array (FPA). In this paper we present the imagery from an FMV SWIR camera with nine discrete bands and discuss the image processing algorithms necessary for its operation. The main task of image processing in this case is demosaicking of the spectral bands, i.e., reconstructing full spectral images at the original FPA resolution from the spatially subsampled and incomplete spectral data acquired with the chosen filter array pattern. To the best of the authors' knowledge, demosaicking algorithms for nine or more equally sampled bands have not been reported before. Moreover, all existing algorithms developed for demosaicking visible color filter arrays with fewer than nine colors assume either a certain relationship between the visible colors, which is not valid for SWIR imaging, or the presence of one color band with a higher sampling rate than the rest of the bands, which does not conform to our spectral filter pattern. We will discuss and present results for two novel approaches to demosaicking: interpolation using multi-band edge information and application of multi-frame super-resolution to single-frame resolution enhancement of multi-spectral, spatially multiplexed images.
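As a baseline against which edge-aware methods like the paper's can be judged, naive per-band interpolation of a 3x3 filter array looks as follows; this is a sketch, not the authors' algorithm.

```python
import numpy as np
from scipy.interpolate import griddata

def demosaic_3x3(raw):
    """Naive demosaicking sketch for a 3x3 spectral filter array: each band is
    sampled on a sparse lattice and interpolated back to full resolution."""
    h, w = raw.shape
    yy, xx = np.mgrid[0:h, 0:w]
    bands = np.empty((9, h, w))
    for b in range(9):
        mask = ((yy % 3) * 3 + (xx % 3)) == b        # pixels carrying band b
        pts = np.column_stack([yy[mask], xx[mask]])
        vals = raw[mask]
        bands[b] = griddata(pts, vals, (yy, xx),
                            method='linear', fill_value=vals.mean())
    return bands

raw = np.random.rand(60, 60)     # toy mosaicked frame
cube = demosaic_3x3(raw)
print(cube.shape)                # (9, 60, 60)
```

Because each band sees only one ninth of the pixels, plain interpolation blurs edges; that is precisely the gap the paper's multi-band edge information and super-resolution approaches aim to close.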
Validating clustering of molecular dynamics simulations using polymer models.
Phillips, Joshua L; Colvin, Michael E; Newsam, Shawn
2011-11-14
Molecular dynamics (MD) simulation is a powerful technique for sampling the meta-stable and transitional conformations of proteins and other biomolecules. Computational data clustering has emerged as a useful, automated technique for extracting conformational states from MD simulation data. Despite extensive application, relatively little work has been done to determine if the clustering algorithms are actually extracting useful information. A primary goal of this paper therefore is to provide such an understanding through a detailed analysis of data clustering applied to a series of increasingly complex biopolymer models. We develop a novel series of models using basic polymer theory that have intuitive, clearly-defined dynamics and exhibit the essential properties that we are seeking to identify in MD simulations of real biomolecules. We then apply spectral clustering, an algorithm particularly well-suited for clustering polymer structures, to our models and MD simulations of several intrinsically disordered proteins. Clustering results for the polymer models provide clear evidence that the meta-stable and transitional conformations are detected by the algorithm. The results for the polymer models also help guide the analysis of the disordered protein simulations by comparing and contrasting the statistical properties of the extracted clusters. We have developed a framework for validating the performance and utility of clustering algorithms for studying molecular biopolymer simulations that utilizes several analytic and dynamic polymer models which exhibit well-behaved dynamics including: meta-stable states, transition states, helical structures, and stochastic dynamics. We show that spectral clustering is robust to anomalies introduced by structural alignment and that different structural classes of intrinsically disordered proteins can be reliably discriminated from the clustering results. To our knowledge, our framework is the first to utilize model polymers to rigorously test the utility of clustering algorithms for studying biopolymers.
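The core experiment, spectral clustering of an affinity matrix built from pairwise structural distances, can be sketched on a toy two-state "polymer"; plain Euclidean distance stands in here for the aligned RMSD used for real conformations.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Toy stand-in for conformational sampling: two meta-stable states of a
# 10-bead chain, each frame a flattened coordinate vector.
rng = np.random.default_rng(0)
state_a = rng.normal(0.0, 0.1, size=(100, 30))
state_b = rng.normal(1.0, 0.1, size=(100, 30))
frames = np.vstack([state_a, state_b])

# Pairwise "structural" distance converted to a Gaussian affinity.
d = np.linalg.norm(frames[:, None, :] - frames[None, :, :], axis=-1)
affinity = np.exp(-d**2 / (2 * np.median(d)**2))

labels = SpectralClustering(n_clusters=2, affinity='precomputed',
                            random_state=0).fit_predict(affinity)
print(np.bincount(labels))       # two clusters of ~100 frames each
```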
GIFTS SM EDU Level 1B Algorithms
NASA Technical Reports Server (NTRS)
Tian, Jialin; Gazarik, Michael J.; Reisse, Robert A.; Johnson, David G.
2007-01-01
The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) SensorModule (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiances using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the GIFTS SM EDU Level 1B algorithms involved in the calibration. The GIFTS Level 1B calibration procedures can be subdivided into four blocks. In the first block, the measured raw interferograms are first corrected for the detector nonlinearity distortion, followed by the complex filtering and decimation procedure. In the second block, a phase correction algorithm is applied to the filtered and decimated complex interferograms. The resulting imaginary part of the spectrum contains only the noise component of the uncorrected spectrum. Additional random noise reduction can be accomplished by applying a spectral smoothing routine to the phase-corrected spectrum. The phase correction and spectral smoothing operations are performed on a set of interferogram scans for both ambient and hot blackbody references. To continue with the calibration, we compute the spectral responsivity based on the previous results, from which the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. We can now estimate the noise equivalent spectral radiance (NESR) from the calibrated ABB and HBB spectra. The correction schemes that compensate for the fore-optics offsets and off-axis effects are also implemented. In the third block, we developed an efficient method of generating pixel performance assessments. In addition, a random pixel selection scheme is designed based on the pixel performance evaluation. Finally, in the fourth block, the single pixel algorithms are applied to the entire FPA.
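The two-point blackbody calibration at the heart of such a pipeline is, in essence, the complex ratio method of Revercomb et al.; the sketch below shows only that core step (radiation constants in mW, cm, K units), not the actual GIFTS processing with its nonlinearity, phase, filtering and off-axis corrections described above.

```python
import numpy as np

def planck_wn(nu_cm, T):
    """Blackbody radiance vs wavenumber (cm^-1), mW/(m^2 sr cm^-1)."""
    c1, c2 = 1.191042e-5, 1.4387752      # radiation constants (mW, cm, K)
    return c1 * nu_cm**3 / np.expm1(c2 * nu_cm / T)

def calibrate(S_scene, S_abb, S_hbb, nu_cm, T_abb, T_hbb):
    """Two-point complex calibration: ratioing against the ambient and hot
    blackbody spectra removes instrument phase and responsivity."""
    B_a = planck_wn(nu_cm, T_abb)
    B_h = planck_wn(nu_cm, T_hbb)
    return np.real((S_scene - S_abb) / (S_hbb - S_abb)) * (B_h - B_a) + B_a
```

In this picture the spectral responsivity is (S_hbb - S_abb)/(B_h - B_a), and an NESR estimate follows from the scatter of repeatedly calibrated blackbody spectra, consistent with the procedure outlined in the abstract.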
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1998-01-01
This project is about the development of high order, non-oscillatory type schemes for computational fluid dynamics. Algorithm analysis, implementation, and applications are performed. Collaborations with NASA scientists have been carried out to ensure that the research is relevant to NASA objectives. The combination of the ENO finite difference method with a spectral method in two space dimensions is considered, jointly with Cai [3]. The resulting scheme behaves nicely for two-dimensional test problems with or without shocks. Jointly with Cai and Gottlieb, we have also considered one-sided filters for spectral approximations to discontinuous functions [2]. We proved theoretically the existence of filters that recover spectral accuracy up to the discontinuity. We also constructed such filters for practical calculations.
Infrared micro-spectral imaging: distinction of tissue types in axillary lymph node histology
Bird, Benjamin; Miljkovic, Milos; Romeo, Melissa J; Smith, Jennifer; Stone, Nicholas; George, Michael W; Diem, Max
2008-01-01
Background Histopathologic evaluation of surgical specimens is a well established technique for disease identification, and has remained relatively unchanged since its clinical introduction. Although it is essential for clinical investigation, histopathologic identification of tissues remains a time consuming and subjective technique, with unsatisfactory levels of inter- and intra-observer discrepancy. A novel approach for histological recognition is to use Fourier Transform Infrared (FT-IR) micro-spectroscopy. This non-destructive optical technique can provide a rapid measurement of sample biochemistry and identify variations that occur between healthy and diseased tissues. The advantage of this method is that it is objective and provides reproducible diagnosis, independent of fatigue, experience and inter-observer variability. Methods We report a method for analysing excised lymph nodes that is based on spectral pathology. In spectral pathology, an unstained (fixed or snap frozen) tissue section is interrogated by a beam of infrared light that samples pixels of 25 μm × 25 μm in size. This beam is rastered over the sample, and up to 100,000 complete infrared spectra are acquired for a given tissue sample. These spectra are subsequently analysed by a diagnostic computer algorithm that is trained by correlating spectral and histopathological features. Results We illustrate the ability of infrared micro-spectral imaging, coupled with completely unsupervised methods of multivariate statistical analysis, to accurately reproduce the histological architecture of axillary lymph nodes. By correlating spectral and histopathological features, a diagnostic algorithm was trained that allowed both accurate and rapid classification of benign and malignant tissues contained within different lymph nodes. This approach was successfully applied to both deparaffinised and frozen tissues, and indicates that both intra-operative and more conventional surgical specimens can be diagnosed by this technique. Conclusion This paper provides strong evidence that automated diagnosis by means of infrared micro-spectral imaging is possible. Recent investigations within the authors' laboratory on lymph nodes have also revealed that cancers from different primary tumours provide distinctly different spectral signatures. Thus poorly differentiated and hard-to-determine cases of metastatic invasion, such as micrometastases, may additionally be identified by this technique. Finally, we differentiate benign and malignant tissues contained within axillary lymph nodes by completely automated methods of spectral analysis. PMID:18759967
On Certain Theoretical Developments Underlying the Hilbert-Huang Transform
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Blank, Karin; Flatley, Thomas; Huang, Norden E.; Petrick, David; Hestness, Phyllis
2006-01-01
One of the main traditional tools used in scientific and engineering data spectral analysis is the Fourier Integral Transform and its high performance digital equivalent - the Fast Fourier Transform (FFT). Both carry strong a-priori assumptions about the source data, such as being linear and stationary, and of satisfying the Dirichlet conditions. A recent development at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC), known as the Hilbert-Huang Transform (HHT), proposes a novel approach to the solution for the nonlinear class of spectral analysis problems. Using a-posteriori data processing based on the Empirical Mode Decomposition (EMD) sifting process (algorithm), followed by the normalized Hilbert Transform of the decomposed data, the HHT allows spectral analysis of nonlinear and nonstationary data. The EMD sifting process results in a non-constrained decomposition of a source real-value data vector into a finite set of Intrinsic Mode Functions (IMF). These functions form a nearly orthogonal, adaptive basis derived from the data. The IMFs can be further analyzed for spectrum content by using the classical Hilbert Transform. A new engineering spectral analysis tool using HHT has been developed at NASA GSFC, the HHT Data Processing System (HHT-DPS). As the HHT-DPS has been successfully used and commercialized, new applications pose additional questions about the theoretical basis behind the HHT and EMD algorithms. Why is the fastest changing component of a composite signal being sifted out first in the EMD sifting process? Why does the EMD sifting process seemingly converge, and why does it converge rapidly? Does an IMF have a distinctive structure? Why are the IMFs nearly orthogonal? We address these questions and develop the initial theoretical background for the HHT. This will contribute to the development of new HHT processing options, such as real-time and 2-D processing using Field Programmable Gate Array (FPGA) computational resources.
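One sifting pass of EMD is simple to write down, and the toy below also illustrates the abstract's first question, since the fastest oscillation is extracted first. Real EMD adds stopping criteria and careful end-point handling; this is only a sketch.

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(x, t):
    """One EMD sifting pass: subtract the mean of the upper and lower
    cubic-spline envelopes through the local extrema."""
    imax = argrelextrema(x, np.greater)[0]
    imin = argrelextrema(x, np.less)[0]
    if len(imax) < 2 or len(imin) < 2:
        return x, True                     # residual: too few extrema
    upper = CubicSpline(t[imax], x[imax], bc_type='natural')(t)
    lower = CubicSpline(t[imin], x[imin], bc_type='natural')(t)
    return x - 0.5*(upper + lower), False

def extract_imf(x, t, n_sift=10):
    """Crude IMF extraction: a fixed number of sifting passes."""
    h = x.copy()
    for _ in range(n_sift):
        h, done = sift_once(h, t)
        if done:
            break
    return h

t = np.linspace(0, 1, 2000)
x = np.sin(2*np.pi*5*t) + 0.5*np.sin(2*np.pi*40*t)
imf1 = extract_imf(x, t)                   # fastest component sifts out first
print(np.corrcoef(imf1, np.sin(2*np.pi*40*t))[0, 1])   # close to 1
```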
Kopriva, Ivica; Persin, Antun; Puizina-Ivić, Neira; Mirić, Lina
2010-07-02
This study was designed to demonstrate robust performance of the novel dependent component analysis (DCA)-based approach to demarcation of the basal cell carcinoma (BCC) through unsupervised decomposition of the red-green-blue (RGB) fluorescent image of the BCC. Robustness to intensity fluctuation is due to the scale invariance property of DCA algorithms, which exploit spectral and spatial diversities between the BCC and the surrounding tissue. The filtering-based DCA approach used here represents an extension of independent component analysis (ICA) and is necessary in order to account for the statistical dependence induced by spectral similarity between the BCC and the surrounding tissue. This similarity generates weak edges, which present a challenge for other segmentation methods as well. By comparative performance analysis with state-of-the-art image segmentation methods such as active contours (level set), K-means clustering, non-negative matrix factorization, ICA and ratio imaging, we experimentally demonstrate good performance of DCA-based BCC demarcation in two demanding scenarios in which the intensity of the fluorescent image has been varied by almost two orders of magnitude. Copyright 2010 Elsevier B.V. All rights reserved.
Liu, Yongliang; Kim, Hee-Jin
2017-06-22
With cotton fiber growth or maturation, the cellulose content in cotton fibers markedly increases. Traditional chemical methods have been developed to determine cellulose content, but they are time-consuming and labor-intensive, mostly owing to the slow hydrolysis of fiber cellulose components. As one approach, the attenuated total reflection Fourier transform infrared (ATR FT-IR) spectroscopy technique has also been utilized to monitor cotton cellulose formation, by implementing various spectral interpretation strategies based on multivariate principal component analysis (PCA) and on 1-, 2- or 3-band/-variable intensities or intensity ratios. The main objective of this study was to compare the correlations between cellulose content determined by chemical analysis and ATR FT-IR spectral indices acquired by the reported procedures, among developmental Texas Marker-1 (TM-1) and immature fiber (im) mutant cotton fibers. It was observed that the R value, CI_IR, and the integrated intensity of the 895 cm⁻¹ band exhibited strong and linear relationships with cellulose content. The results have demonstrated the suitability and utility of ATR FT-IR spectroscopy, combined with a simple algorithm analysis, in assessing cotton fiber cellulose content, maturity, and crystallinity in a manner which is rapid, routine, and non-destructive.
Villiger, Martin; Zhang, Ellen Ziyi; Nadkarni, Seemantini K.; Oh, Wang-Yuhl; Vakoc, Benjamin J.; Bouma, Brett E.
2013-01-01
Polarization mode dispersion (PMD) has been recognized as a significant barrier to sensitive and reproducible birefringence measurements with fiber-based, polarization-sensitive optical coherence tomography systems. Here, we present a signal processing strategy that reconstructs the local retardation robustly in the presence of system PMD. The algorithm uses a spectral binning approach to limit the detrimental impact of system PMD and benefits from the final averaging of the PMD-corrected retardation vectors of the spectral bins. The algorithm was validated with numerical simulations and experimental measurements of a rubber phantom. When applied to the imaging of human cadaveric coronary arteries, the algorithm was found to yield a substantial improvement in the reconstructed birefringence maps. PMID:23938487
Convex Accelerated Maximum Entropy Reconstruction
Worley, Bradley
2016-01-01
Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476
Fast algorithm for spectral processing with application to on-line welding quality assurance
NASA Astrophysics Data System (ADS)
Mirapeix, J.; Cobo, A.; Jaúregui, C.; López-Higuera, J. M.
2006-10-01
A new technique is presented in this paper for the analysis of welding process emission spectra to accurately estimate the plasma electron temperature in real time. The estimation of the electron temperature of the plasma, through the analysis of the emission lines from multiple atomic species, may be used to monitor possible perturbations during the welding process. Unlike traditional techniques, which usually involve fitting peaks to Voigt functions using the recursive Levenberg-Marquardt method, sub-pixel algorithms are used to more accurately estimate the central wavelength of the peaks. Three different sub-pixel algorithms are analysed and compared, and it is shown that the LPO (linear phase operator) sub-pixel algorithm is the best solution within the proposed system. Experimental tests during TIG welding, using a fibre optic to capture the arc light together with a low-cost CCD-based spectrometer, show that some typical defects associated with perturbations in the electron temperature can be easily detected and identified with this technique. A typical processing time for multiple-peak analysis is less than 20 ms running on a conventional PC.
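The LPO operator itself is not given in the abstract; the classic three-point parabolic interpolation below shows what sub-pixel peak location means, together with a two-line Boltzmann temperature estimate of the kind used for electron temperature monitoring. The line constants are placeholders, not real atomic data.

```python
import numpy as np

def subpixel_peak(y, i):
    """Parabolic three-point interpolation around discrete peak index i."""
    denom = y[i-1] - 2*y[i] + y[i+1]
    return i + 0.5*(y[i-1] - y[i+1]) / denom    # fractional-pixel position

def electron_temperature(I1, I2, line1, line2):
    """Two-line Boltzmann ratio: each line is (wavelength_nm, A, g, E_upper_eV).
    From I ~ (A*g/lambda) * exp(-E/kT):
    T = (E2-E1) / (k * ln[(I1*A2*g2*l1)/(I2*A1*g1*l2)])."""
    l1, A1, g1, E1 = line1
    l2, A2, g2, E2 = line2
    k_eV = 8.617e-5
    ratio = (I1 * A2 * g2 * l1) / (I2 * A1 * g1 * l2)
    return (E2 - E1) / (k_eV * np.log(ratio))

y = np.array([1.0, 3.0, 2.5])       # three samples around a peak
print(subpixel_peak(y, 1))          # 1.3: peak lies right of pixel 1
```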
NASA Astrophysics Data System (ADS)
Xie, ChengJun; Xu, Lin
2008-03-01
This paper presents an algorithm based on a mixing transform with wave-band grouping to eliminate spectral redundancy. The algorithm adapts to the varying correlation between different spectral-band images, and it works well even when the number of bands is not a power of 2. Non-boundary-extension CDF(2,2) DWT and a subtraction mixing transform are used to eliminate spectral redundancy, CDF(2,2) DWT is used to eliminate spatial redundancy, and SPIHT+CABAC is used for compression coding; experiments show that a satisfactory lossless compression result can be achieved. Using the hyper-spectral image Canal of the American JPL laboratory as the data set for lossless compression tests, when the band number is not a power of 2, the lossless compression results of this algorithm are much better than those obtained by JPEG-LS, WinZip, ARJ, DPCM, the research achievements of a research team of the Chinese Academy of Sciences, Minimum Spanning Tree, and Near Minimum Spanning Tree; on average, the compression ratio of this algorithm exceeds the above algorithms by 41%, 37%, 35%, 29%, 16%, 10% and 8%, respectively. When the band number is a power of 2, for 128 frames of the image Canal, taking 8, 16 and 32 bands respectively as the group size, and considering factors such as compression storage complexity, the type of wave band and the compression effect, we suggest using 8 bands per group to achieve a better compression effect. The algorithm has advantages in operation speed and convenience of hardware implementation.
NASA Astrophysics Data System (ADS)
D'Amore, M.; Le Scaon, R.; Helbert, J.; Maturilli, A.
2017-12-01
Machine learning (ML) has achieved unprecedented results in high-dimensional data processing tasks, with wide applications in various fields. Due to the growing number of complex nonlinear systems that have to be investigated in science and the sheer size of the data now available, ML offers the unique ability to extract knowledge regardless of the specific application field. Examples are image segmentation, supervised/unsupervised/semi-supervised classification, feature extraction, and data dimensionality analysis/reduction. The MASCS instrument has mapped the surface of Mercury in the 400-1145 nm wavelength range during orbital observations by the MESSENGER spacecraft. We have conducted k-means unsupervised hierarchical clustering to identify and characterize spectral units from MASCS observations. The results display a dichotomy: polar and equatorial units, possibly linked to compositional differences or weathering due to irradiation. To explore possible relations between composition and spectral behavior, we have compared the spectral provinces with elemental abundance maps derived from MESSENGER's X-Ray Spectrometer (XRS). For the application to Vesta, using data from DAWN's Visible and InfraRed spectrometer (VIR), we explored several machine learning techniques: an image segmentation method, a stream algorithm, and hierarchical clustering. The algorithm successfully separates the olivine outcrops around two craters on Vesta's surface [1]. New maps summarizing the spectral and chemical signature of the surface could be produced automatically. We conclude that, instead of digging in the data by hand, scientists could choose a subset of algorithms with well-known features (i.e., efficacy on the particular problem, speed, accuracy) and focus their effort on understanding what the important characteristics of the groups found in the data mean. [1] E. Ammannito et al. "Olivine in an unexpected location on Vesta's surface". In: Nature 504.7478 (2013), pp. 122-125.
NASA Astrophysics Data System (ADS)
Hu, Yaogai; Shen, Aiguo; Jiang, Tao; Ai, Yong; Hu, Jiming
2008-02-01
Thirty-two samples of human gastric mucosa tissue, including 13 normal and 19 malignant tissue samples, were measured by confocal Raman microspectroscopy. Low signal-to-background ratio spectra from human gastric mucosa tissues were obtained by this technique without any sample preparation. Raman spectral interferences include a broad, featureless, sloping background due to fluorescence, and noise. They mask most Raman spectral features and lead to problems with the precision and quantitation of the original spectral information. A preprocessing algorithm based on wavelet analysis was used to reduce noise and eliminate the background/baseline of the Raman spectra. Comparing the preprocessed spectra of malignant gastric mucosa tissues with those of their normal counterparts revealed obvious spectral changes, including an intensity increase at ~1156 cm⁻¹ and an intensity decrease at ~1587 cm⁻¹. A quantitative criterion based upon the intensity ratio of the ~1156 and ~1587 cm⁻¹ bands was extracted for classification of the normal and malignant gastric mucosa tissue samples. This could result in a new diagnostic method, which would assist the early diagnosis of gastric cancer.
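A wavelet preprocessing step of the kind described, baseline from the deep approximation coefficients and denoising by soft-thresholding the details, can be sketched with the PyWavelets package; the wavelet and level choices here are illustrative, not the paper's.

```python
import numpy as np
import pywt

def wavelet_preprocess(spectrum, wavelet='sym8', level=7):
    """Sketch: the deepest approximation approximates the slowly varying
    fluorescence baseline; universal-threshold soft shrinkage of the detail
    coefficients suppresses noise. Returns (clean, baseline)."""
    level = min(level, pywt.dwt_max_level(len(spectrum), wavelet))
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    # baseline = reconstruction from the approximation coefficients alone
    base_coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    baseline = pywt.waverec(base_coeffs, wavelet)[:len(spectrum)]
    # denoised, baseline-free signal from thresholded detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2*np.log(len(spectrum)))
    den_coeffs = [np.zeros_like(coeffs[0])] + \
                 [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    clean = pywt.waverec(den_coeffs, wavelet)[:len(spectrum)]
    return clean, baseline

x = np.linspace(0, 1, 1024)
spectrum = (np.exp(-((x-0.3)/0.01)**2) + 0.6*np.exp(-((x-0.7)/0.02)**2)
            + 2.0*x + 0.05*np.random.default_rng(5).standard_normal(1024))
clean, base = wavelet_preprocess(spectrum)
print(clean.shape, base.shape)
```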
Hyperspectral analysis of clay minerals
NASA Astrophysics Data System (ADS)
Janaki Rama Suresh, G.; Sreenivas, K.; Sivasamy, R.
2014-11-01
A study was carried out by collecting soil samples from parts of the Gwalior and Shivpuri districts, Madhya Pradesh, in order to assess the dominant clay minerals of these soils using hyperspectral data, as the 0.4 to 2.5 μm spectral range provides abundant and unique information about many important earth-surface minerals. Understanding the spectral response along with the soil chemical properties can provide important clues for the retrieval of mineralogical soil properties. The soil samples were collected based on a stratified random sampling approach, and the dominant clay minerals were identified through XRD analysis. Absorption feature parameters such as the depth, width, area and asymmetry of the absorption peaks were derived from the spectral profiles of the soil samples through the DISPEC tool. The derived absorption feature parameters were used as inputs for modelling the dominant soil clay mineral present in unknown samples using a Random Forest approach, which resulted in a kappa accuracy of 0.795. Besides, an attempt was made to classify the Hyperion data using the Spectral Angle Mapper (SAM) algorithm, with an overall accuracy of 68.43%. Results showed that kaolinite was the dominant mineral present in the soils of the study area, followed by montmorillonite.
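SAM itself is essentially a one-liner: the angle between a pixel spectrum and an endmember spectrum, insensitive to illumination scaling. A minimal sketch:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral Angle Mapper: angle (radians) between a pixel spectrum and a
    reference endmember; a small angle means a good match."""
    cosang = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def sam_classify(cube, endmembers, max_angle=0.1):
    """Assign each pixel to the nearest endmember; -1 means unclassified."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)
    angles = np.array([[spectral_angle(px, em) for em in endmembers]
                       for px in flat])
    labels = angles.argmin(axis=1)
    labels[angles.min(axis=1) > max_angle] = -1
    return labels.reshape(h, w)
```

Because the angle ignores the overall magnitude of the spectrum, SAM tolerates brightness differences due to topography, which is one reason it is a standard first choice for mineral mapping.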
Multiwavelength FLIM: new applications and algorithms
NASA Astrophysics Data System (ADS)
Rück, A.; Strat, D.; Dolp, F.; von Einem, B.; von Arnim, C. A. F.
2011-03-01
The combination of time-resolved and spectrally resolved techniques, as achieved by SLIM (spectrally resolved fluorescence lifetime imaging), improves the analysis of complex situations in which different fluorophores have to be distinguished. This could be the case when endogenous fluorophores of living cells and tissues are observed to identify the redox state and oxidative metabolic changes of the mitochondria. Other examples are FRET (resonance energy transfer) measurements, in which different donor/acceptor pairs are observed simultaneously. SLIM works in the time domain, employing excitation with short light pulses and detection of the fluorescence intensity decay, in many cases with time-correlated single photon counting (TCSPC). Spectrally resolved detection is achieved by a polychromator in the detection path and a 16-channel multianode photomultiplier tube with the appropriate routing electronics. Within this paper, special attention is focused on FRET measurements with respect to protein interactions in Alzheimer's disease. Using global analysis, such as the phasor plot approach, or integration of the kinetic equations, taking into account the multidimensional data sets in every spectral channel, we could demonstrate considerable improvement of our calculations.
NASA Astrophysics Data System (ADS)
Zhou, Xiran; Liu, Jun; Liu, Shuguang; Cao, Lei; Zhou, Qiming; Huang, Huawen
2014-02-01
High spatial resolution and spectral fidelity are basic standards for evaluating an image fusion algorithm. Numerous fusion methods for remote sensing images have been developed. Some of these methods are based on the intensity-hue-saturation (IHS) transform and the generalized IHS (GIHS), which may cause serious spectral distortion. Spectral distortion in the GIHS is proven to result from changes in saturation during fusion. Therefore, reducing such changes can achieve high spectral fidelity. A GIHS-based spectral preservation fusion method that can theoretically reduce spectral distortion is proposed in this study. The proposed algorithm consists of two steps. The first step is spectral modulation (SM), which uses the Gaussian function to extract spatial details and conduct SM of multispectral (MS) images. This method yields a desirable visual effect without requiring histogram matching between the panchromatic image and the intensity of the MS image. The second step uses the Gaussian convolution function to restore lost edge details during SM. The proposed method is proven effective and shown to provide better results compared with other GIHS-based methods.
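For orientation, a generic GIHS-style detail-injection step is sketched below; the Gaussian low-pass stands in loosely for the paper's Gaussian spectral-modulation idea, and this is not the authors' exact algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gihs_fuse(ms, pan, sigma=2.0):
    """GIHS-style fusion sketch: inject Pan spatial detail into each MS band,
    modulated by the band's share of the intensity so band ratios (and hence
    saturation) change less than in plain GIHS addition.
    ms: (h, w, bands) upsampled multispectral image; pan: (h, w)."""
    detail = pan - gaussian_filter(pan, sigma)     # high-frequency detail
    intensity = ms.mean(axis=2)
    fused = ms + (ms / (intensity[..., None] + 1e-9)) * detail[..., None]
    return np.clip(fused, 0.0, None)
```

Scaling the injected detail by each band's fraction of the intensity is one simple way to limit the saturation changes that the paper identifies as the source of spectral distortion in the GIHS.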
Diagnosing breast cancer using Raman spectroscopy: prospective analysis
NASA Astrophysics Data System (ADS)
Haka, Abigail S.; Volynskaya, Zoya; Gardecki, Joseph A.; Nazemi, Jon; Shenk, Robert; Wang, Nancy; Dasari, Ramachandra R.; Fitzmaurice, Maryann; Feld, Michael S.
2009-09-01
We present the first prospective test of Raman spectroscopy in diagnosing normal, benign, and malignant human breast tissues. Prospective testing of spectral diagnostic algorithms allows clinicians to accurately assess the diagnostic information contained in, and any bias of, the spectroscopic measurement. In previous work, we developed an accurate, internally validated algorithm for breast cancer diagnosis based on analysis of Raman spectra acquired from fresh-frozen in vitro tissue samples. We currently evaluate the performance of this algorithm prospectively on a large ex vivo clinical data set that closely mimics the in vivo environment. Spectroscopic data were collected from freshly excised surgical specimens, and 129 tissue sites from 21 patients were examined. Prospective application of the algorithm to the clinical data set resulted in a sensitivity of 83%, a specificity of 93%, a positive predictive value of 36%, and a negative predictive value of 99% for distinguishing cancerous from normal and benign tissues. The performance of the algorithm in different patient populations is discussed. Sources of bias in the in vitro calibration and ex vivo prospective data sets, including disease prevalence and disease spectrum, are examined and analytical methods for comparison provided.
Method to analyze remotely sensed spectral data
Stork, Christopher L [Albuquerque, NM; Van Benthem, Mark H [Middletown, DE
2009-02-17
A fast and rigorous multivariate curve resolution (MCR) algorithm is applied to remotely sensed spectral data. The algorithm is applicable in the solar-reflective spectral region, comprising the visible to the shortwave infrared (ranging from approximately 0.4 to 2.5 μm), midwave infrared, and thermal emission spectral region, comprising the thermal infrared (ranging from approximately 8 to 15 μm). For example, employing minimal a priori knowledge, notably non-negativity constraints on the extracted endmember profiles and a constant abundance constraint for the atmospheric upwelling component, MCR can be used to successfully compensate thermal infrared hyperspectral images for atmospheric upwelling and, thereby, transmittance effects. Further, MCR can accurately estimate the relative spectral absorption coefficients and thermal contrast distribution of a gas plume component near the minimum detectable quantity.
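A minimal alternating-least-squares MCR loop, with non-negativity imposed by crude clipping rather than true NNLS, conveys the flavor of the factorization; the patented method adds constraints such as the constant-abundance term for atmospheric upwelling.

```python
import numpy as np

def mcr_als(D, n_components, n_iter=200, seed=0):
    """Minimal MCR sketch: D (pixels x bands) ~= C (abundances) @ S.T
    (endmember spectra), alternating least squares with clipping as a
    stand-in for proper non-negative least squares."""
    rng = np.random.default_rng(seed)
    S = rng.random((D.shape[1], n_components))
    for _ in range(n_iter):
        C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0.0, None)
        S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0.0, None)
    return C, S

rng = np.random.default_rng(2)
S_true = np.abs(rng.normal(size=(40, 3)))          # 3 endmember spectra
C_true = np.abs(rng.normal(size=(500, 3)))         # abundances for 500 pixels
D = C_true @ S_true.T
C_est, S_est = mcr_als(D, 3)
print(np.linalg.norm(D - C_est @ S_est.T) / np.linalg.norm(D))  # small residual
```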
Rocchini, Duccio
2009-01-01
Measuring heterogeneity in satellite imagery is an important task. Most measures of spectral diversity have been based on Shannon information theory. However, this approach does not inherently address different scales, ranging from the local scale (hereafter referred to as alpha diversity) to the global scale (gamma diversity). The aim of this paper is to propose a method for measuring spectral heterogeneity at multiple scales based on rarefaction curves. An algorithmic solution of rarefaction applied to image pixel values (Digital Numbers, DNs) is provided and discussed. PMID:22389600
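The rarefaction idea translates directly to pixel values: repeatedly subsample n pixels and average the number of distinct DNs observed, for increasing n. A sketch (sample sizes and permutation count are arbitrary choices):

```python
import numpy as np

def spectral_rarefaction(dns, max_n=None, n_perm=50, seed=0):
    """Rarefaction curve over image pixel values: expected number of distinct
    Digital Numbers found in random subsamples of increasing size."""
    rng = np.random.default_rng(seed)
    dns = np.asarray(dns).ravel()
    max_n = max_n or len(dns)
    sizes = np.unique(np.linspace(1, max_n, 30, dtype=int))
    curve = []
    for n in sizes:
        richness = [len(np.unique(rng.choice(dns, size=n, replace=False)))
                    for _ in range(n_perm)]
        curve.append(np.mean(richness))
    return sizes, np.array(curve)

pixels = np.random.randint(0, 256, size=5000)   # toy 8-bit image
sizes, curve = spectral_rarefaction(pixels, max_n=2000)
print(curve[-1])   # expected spectral richness at n = 2000 pixels
```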
NASA Technical Reports Server (NTRS)
Clark, Roger N.; Swayze, Gregg A.; Gallagher, Andrea
1992-01-01
The sedimentary sections exposed in the Canyonlands and Arches National Parks region of Utah (generally referred to as 'Canyonlands') consist of sandstones, shales, limestones, and conglomerates. Reflectance spectra of weathered surfaces of rocks from these areas show two components: (1) variations in spectrally detectable mineralogy, and (2) variations in the relative ratios of the absorption bands between minerals. Both types of information can be used together to map each major lithology and the Clark spectral features mapping algorithm is applied to do the job.
Knowledge Discovery in Spectral Data by Means of Complex Networks
Zanin, Massimiliano; Papo, David; Solís, José Luis González; Espinosa, Juan Carlos Martínez; Frausto-Reyes, Claudio; Anda, Pascual Palomares; Sevilla-Escoboza, Ricardo; Boccaletti, Stefano; Menasalvas, Ernestina; Sousa, Pedro
2013-01-01
In the last decade, complex networks have widely been applied to the study of many natural and man-made systems, and to the extraction of meaningful information from the interaction structures created by genes and proteins. Nevertheless, less attention has been devoted to metabonomics, due to the lack of a natural network representation of spectral data. Here we define a technique for reconstructing networks from spectral data sets, where nodes represent spectral bins, and pairs of them are connected when their intensities follow a pattern associated with a disease. The structural analysis of the resulting network can then be used to feed standard data-mining algorithms, for instance for the classification of new (unlabeled) subjects. Furthermore, we show how the structure of the network is resilient to the presence of external additive noise, and how it can be used to extract relevant knowledge about the development of the disease. PMID:24957895
Using fragmentation trees and mass spectral trees for identifying unknown compounds in metabolomics.
Vaniya, Arpana; Fiehn, Oliver
2015-06-01
Identification of unknown metabolites is the bottleneck in advancing metabolomics, leaving the interpretation of metabolomics results ambiguous. The chemical diversity of metabolism is vast, making structure identification arduous and time-consuming. Currently, comprehensive analysis of mass spectra in metabolomics is limited to library matching, but tandem mass spectral libraries are small compared to the large number of compounds found in the biosphere, including xenobiotics. Resolving this bottleneck requires richer data acquisition and better computational tools. Multi-stage mass spectrometry (MSn) trees show promise in this regard. Fragmentation trees explore the fragmentation process, generate fragmentation rules and aid in sub-structure identification, while mass spectral trees delineate the dependencies in multi-stage MS of collision-induced dissociations. This review covers advancements in these tools over the past 10 years for metabolite identification, including the algorithms, software and databases used to build and implement fragmentation trees and mass spectral annotations.
[Research on the measurement of flue-dust concentration in Vis, IR spectral region].
Sun, Xiao-gang; Tang, Hong; Yuan, Gui-bin
2008-10-01
In the measurement of flue-dust concentration based on the transmission method, the dependent model algorithm was used to invert the flue-dust concentration in the visible, infrared and visible-infrared spectral regions, respectively. By analysis and comparison of the accuracy, linearity and sensitivity of the inverted flue-dust concentration, the optimal spectral region was determined. Meanwhile, the influence of water droplets with different size distributions and volume concentrations was simulated, and a method was proposed which has the advantages of simplicity, rapidity, and suitability for on-line measurement. Simulation experiments illustrate that the flue-dust concentration can be inverted very well in the visible-infrared spectral region, and that it is feasible to use the ratio of the constrained light extinction method to overcome the influence of water droplets. The inversion results all remain satisfactory when 2% stochastic noise is added to the value of the light extinction.
NASA Astrophysics Data System (ADS)
Schott, John R.; Brown, Scott D.; Raqueno, Rolando V.; Gross, Harry N.; Robinson, Gary
1999-01-01
The need for robust image data sets for algorithm development and testing has prompted the consideration of synthetic imagery as a supplement to real imagery. The unique ability of synthetic image generation (SIG) tools to supply per-pixel truth allows algorithm writers to test difficult scenarios that would require expensive collection and instrumentation efforts. In addition, SIG data products can supply the user with 'actual' truth measurements of the entire image area that are not subject to measurement error, thereby allowing the user to more accurately evaluate the performance of their algorithm. Advanced algorithms place a high demand on synthetic imagery to reproduce both the spectro-radiometric and spatial character observed in real imagery. This paper describes a synthetic image generation model that strives to include the radiometric processes that affect spectral image formation and capture. In particular, it addresses recent advances in SIG modeling that attempt to capture the spatial/spectral correlation inherent in real images. The model is capable of simultaneously generating imagery from a wide range of sensors, allowing it to generate daylight, low-light-level and thermal image inputs for broadband, multi- and hyper-spectral exploitation algorithms.
Hybrid Image Fusion for Sharpness Enhancement of Multi-Spectral Lunar Images
NASA Astrophysics Data System (ADS)
Awumah, Anna; Mahanti, Prasun; Robinson, Mark
2016-10-01
Image fusion enhances the sharpness of a multi-spectral (MS) image by incorporating spatial details from a higher-resolution panchromatic (Pan) image [1,2]. Known applications of image fusion for planetary images are rare, although image fusion is well known for its applications to Earth-based remote sensing. In a recent work [3], six different image fusion algorithms were implemented and their performances were verified with images from the Lunar Reconnaissance Orbiter (LRO) Camera. The image fusion procedure obtained a high-resolution multi-spectral (HRMS) product from the LRO Narrow Angle Camera (used as Pan) and LRO Wide Angle Camera (used as MS) images. The results showed that the Intensity-Hue-Saturation (IHS) algorithm results in a product of high spatial quality, while the Wavelet-based image fusion algorithm best preserves spectral quality among all the algorithms. In this work we show the results of a hybrid IHS-Wavelet image fusion algorithm when applied to LROC MS images. The hybrid method provides the best HRMS product, both in terms of spatial resolution and preservation of spectral details. Results from hybrid image fusion can enable new science and increase the science return from existing LROC images. [1] Pohl, C., and John L. Van Genderen. "Review article: Multisensor image fusion in remote sensing: concepts, methods and applications." International Journal of Remote Sensing 19.5 (1998): 823-854. [2] Zhang, Yun. "Understanding image fusion." Photogrammetric Engineering & Remote Sensing 70.6 (2004): 657-661. [3] Mahanti, Prasun, et al. "Enhancement of spatial resolution of the LROC Wide Angle Camera images." XXIII ISPRS Congress Archives (2016).
Liu, Jianbo; Ramakrishnan, Sridhar; Laxminarayan, Srinivas; Neal, Maxwell; Cashmere, David J; Germain, Anne; Reifman, Jaques
2018-02-01
Electroencephalography (EEG) recordings during sleep are often contaminated by muscle and ocular artefacts, which can affect the results of spectral power analyses significantly. However, the extent to which these artefacts affect EEG spectral power across different sleep states has not been quantified explicitly. Consequently, the effectiveness of automated artefact-rejection algorithms in minimizing these effects has not been characterized fully. To address these issues, we analysed standard 10-channel EEG recordings from 20 subjects during one night of sleep. We compared their spectral power when the recordings were contaminated by artefacts and after we removed them by visual inspection or by using automated artefact-rejection algorithms. During both rapid eye movement (REM) and non-REM (NREM) sleep, muscle artefacts contaminated no more than 5% of the EEG data across all channels. However, they corrupted delta, beta and gamma power levels substantially by up to 126, 171 and 938%, respectively, relative to the power level computed from artefact-free data. Although ocular artefacts were infrequent during NREM sleep, they affected up to 16% of the frontal and temporal EEG channels during REM sleep, primarily corrupting delta power by up to 33%. For both REM and NREM sleep, the automated artefact-rejection algorithms matched power levels to within ~10% of the artefact-free power level for each EEG channel and frequency band. In summary, although muscle and ocular artefacts affect only a small fraction of EEG data, they affect EEG spectral power significantly. This suggests the importance of using artefact-rejection algorithms before analysing EEG data. © 2017 European Sleep Research Society.
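Band-power computation of the kind whose corruption is quantified above is typically done with Welch's method; a minimal sketch (the band edges are conventional choices, not necessarily the paper's exact definitions, and artefact rejection is assumed to happen beforehand):

```python
import numpy as np
from scipy.signal import welch

def band_powers(eeg, fs, bands={'delta': (0.5, 4), 'theta': (4, 8),
                                'alpha': (8, 12), 'beta': (12, 30),
                                'gamma': (30, 50)}):
    """Spectral power per frequency band via Welch's method."""
    f, pxx = welch(eeg, fs=fs, nperseg=int(4*fs))
    return {name: np.trapz(pxx[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)])
            for name, (lo, hi) in bands.items()}

fs = 256.0
eeg = np.random.randn(int(60*fs))      # one minute of toy single-channel EEG
print(band_powers(eeg, fs))
```

Because delta, beta and gamma power integrate exactly where ocular and muscle artefacts concentrate their energy, even brief contaminated segments can inflate these integrals dramatically, which is why the paper recommends artefact rejection before this step.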
NASA Astrophysics Data System (ADS)
Belmonte, D.; Vedova, M. D. L. Dalla; Ferro, C.; Maggiore, P.
2017-06-01
The proposal of prognostic algorithms able to identify precursors of incipient failures of primary flight command electromechanical actuators (EMA) is beneficial for the anticipation of an incoming failure: an early and correct interpretation of the failure degradation pattern can, in fact, trigger an early alert to the maintenance crew, who can properly schedule the replacement of the servomechanism. An innovative model-based prognostic approach, able to recognize the progressive degradation of the EMA before its anomalous behaviors become critical, is proposed: the Fault Detection and Identification (FDI) of the considered incipient failures is performed by analyzing suitable system operational parameters, able to reveal the corresponding degradation path, by means of a numerical algorithm based on spectral analysis techniques. Subsequently, these operational parameters are correlated with the actual EMA health condition by means of failure maps created by a reference model-based monitoring algorithm. In this work, the proposed method has been tested on an EMA affected by combined progressive failures: in particular, a partial stator single-phase turn-to-turn short circuit and rotor static eccentricity are considered. In order to evaluate the prognostic method, a numerical test bench has been conceived. Results show that the method exhibits adequate robustness and a high degree of confidence in its ability to identify an eventual malfunction early, minimizing the risk of false alarms or unannounced failures.
Modified fuzzy c-means applied to a Bragg grating-based spectral imager for material clustering
NASA Astrophysics Data System (ADS)
Rodríguez, Aida; Nieves, Juan Luis; Valero, Eva; Garrote, Estíbaliz; Hernández-Andrés, Javier; Romero, Javier
2012-01-01
We have modified the fuzzy c-means algorithm for an application related to the segmentation of hyperspectral images. The classical fuzzy c-means algorithm uses the Euclidean distance for computing sample membership in each cluster. We have introduced a different distance metric, the Spectral Similarity Value (SSV), in order to have a more convenient similarity measure for reflectance information. The SSV distance metric considers both magnitude difference (through the Euclidean distance) and spectral shape (through the Pearson correlation). Experiments confirmed that the introduction of this metric improves the quality of hyperspectral image segmentation, creating spectrally denser clusters and increasing the number of correctly classified pixels.
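A sketch of the modified update follows; the exact SSV weighting in the paper may differ, and here the metric simply combines a normalized Euclidean term with (1 - Pearson r).

```python
import numpy as np

def ssv(a, b):
    """SSV-style metric: magnitude (normalized Euclidean) plus shape
    (1 - Pearson correlation); the paper's exact combination may differ."""
    de = np.linalg.norm(a - b) / np.sqrt(len(a))
    r = np.corrcoef(a, b)[0, 1]
    return np.sqrt(de**2 + (1.0 - r)**2)

def fuzzy_cmeans_ssv(X, c=3, m=2.0, n_iter=50, seed=0):
    """Fuzzy c-means with the SSV metric in the membership update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), c, replace=False)]
    for _ in range(n_iter):
        d = np.array([[max(ssv(x, v), 1e-12) for v in centers] for x in X])
        u = d ** (-2.0/(m - 1.0))             # standard FCM membership rule
        u /= u.sum(axis=1, keepdims=True)
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
    return u, centers

wl = np.linspace(0, 1, 20)
rng = np.random.default_rng(4)
X = np.vstack([np.exp(-(wl - mu)**2/0.02) + rng.normal(0, 0.02, (60, 20))
               for mu in (0.25, 0.5, 0.75)])   # three toy spectral shapes
u, centers = fuzzy_cmeans_ssv(X, c=3)
print(u.argmax(axis=1)[::60])                  # one label per true class
```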
GIFTS SM EDU Radiometric and Spectral Calibrations
NASA Technical Reports Server (NTRS)
Tian, J.; Reisse, R. a.; Johnson, D. G.; Gazarik, J. J.
2007-01-01
The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiance using a Fourier transform spectrometer (FTS). The GIFTS instrument gathers measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the processing algorithms involved in the calibration. The calibration procedures can be subdivided into three categories: the pre-calibration stage, the calibration stage, and finally, the post-calibration stage. Detailed derivations for each stage are presented in this paper.
Novel trace chemical detection algorithms: a comparative study
NASA Astrophysics Data System (ADS)
Raz, Gil; Murphy, Cara; Georgan, Chelsea; Greenwood, Ross; Prasanth, R. K.; Myers, Travis; Goyal, Anish; Kelley, David; Wood, Derek; Kotidis, Petros
2017-05-01
Algorithms for standoff detection and estimation of trace chemicals in hyperspectral images in the IR band are a key component of a variety of applications relevant to the law-enforcement and intelligence communities. The performance of these methods is impacted by spectral signature variability due to the presence of contaminants, surface roughness, and nonlinear dependence on abundances, as well as by operational limitations on the compute platforms. In this work we provide a comparative performance and complexity analysis of several classes of algorithms as a function of noise levels, error distribution, scene complexity, and spatial degrees of freedom. The algorithm classes we analyze and test include the adaptive cosine estimator (ACE and modifications to it), compressive/sparse methods, Bayesian estimation, and machine learning. We explicitly call out the conditions under which each algorithm class is optimal or near-optimal, as well as their built-in limitations and failure modes.
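Of the classes compared, the adaptive cosine estimator has a compact closed form, sketched below; background statistics are estimated globally here, whereas real pipelines often use local or robust estimates.

```python
import numpy as np

def ace_detector(X, target):
    """Adaptive Cosine Estimator. X: (pixels x bands); target: signature.
    ACE(x) = (s^T S^-1 x)^2 / [(s^T S^-1 s)(x^T S^-1 x)], with s and x
    background-mean-removed and S the background covariance. Scores near 1
    indicate pixels whose whitened spectrum aligns with the target."""
    mu = X.mean(axis=0)
    Xc = X - mu
    cov_inv = np.linalg.pinv(np.cov(Xc, rowvar=False))
    s = target - mu
    num = (Xc @ cov_inv @ s) ** 2
    den = (s @ cov_inv @ s) * np.einsum('ij,jk,ik->i', Xc, cov_inv, Xc)
    return num / den
```

Thresholding the ACE score yields a detection map; its invariance to pixel scaling is what makes it a common reference point for the signature-variability issues the abstract discusses.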
A joint tracking method for NSCC based on WLS algorithm
NASA Astrophysics Data System (ADS)
Luo, Ruidan; Xu, Ying; Yuan, Hong
2017-12-01
Navigation signal based on compound carrier (NSCC) has a flexible multi-carrier scheme and various configurable scheme parameters, which enable significant navigation augmentation in terms of spectral efficiency, tracking accuracy, multipath mitigation, and anti-jamming capability compared with legacy navigation signals. Meanwhile, the characteristic scheme can provide auxiliary information for signal synchronization algorithm design. Based on the characteristics of NSCC, this paper proposes a joint tracking method utilizing the Weighted Least Squares (WLS) algorithm. In this method, the LS algorithm is employed to jointly estimate each sub-carrier frequency shift through the linear frequency-Doppler relationship, utilizing the known sub-carrier frequencies. Moreover, the weighting matrix is set adaptively according to the sub-carrier power to ensure estimation accuracy. Both theoretical analysis and simulation results illustrate that the tracking accuracy and sensitivity of this method outperform the single-carrier algorithm at lower SNR.
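A minimal sketch of the WLS step, under a linear frequency-Doppler model shift_k = alpha * f_k with power-proportional weights, is given below; this is a plausible reading of the scheme, not the paper's exact formulation.

```python
import numpy as np

def wls_doppler(subcarrier_freqs, measured_shifts, powers):
    """Weighted least-squares fit of the common Doppler scale factor
    under the linear model shift_k = alpha * f_k, with weights
    proportional to sub-carrier power (an assumption of this sketch)."""
    w = powers / powers.sum()
    f, d = subcarrier_freqs, measured_shifts
    alpha = np.sum(w * f * d) / np.sum(w * f**2)   # closed-form WLS slope
    return alpha * f                               # smoothed per-carrier shifts

f = np.array([1.0e6, 2.0e6, 3.0e6])   # sub-carrier frequencies (Hz)
d = np.array([10.1, 19.8, 30.3])      # noisy measured shifts (Hz)
p = np.array([4.0, 1.0, 2.0])         # sub-carrier powers
print(wls_doppler(f, d, p))
```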
Naishadham, Krishna; Piou, Jean E; Ren, Lingyun; Fathy, Aly E
2016-12-01
Ultra wideband (UWB) Doppler radar has many biomedical applications, including remote diagnosis of cardiovascular disease, triage and real-time personnel tracking in rescue missions. It uses narrow pulses to probe the human body and detect tiny cardiopulmonary movements by spectral analysis of the backscattered electromagnetic (EM) field. With the help of super-resolution spectral algorithms, UWB radar is capable of increased accuracy for estimating vital signs such as heart and respiration rates in adverse signal-to-noise conditions. A major challenge for biomedical radar systems is detecting the heartbeat of a subject with high accuracy, because of minute thorax motion (less than 0.5 mm) caused by the heartbeat. The problem becomes compounded by EM clutter and noise in the environment. In this paper, we introduce a new algorithm based on the state space method (SSM) for the extraction of cardiac and respiration rates from UWB radar measurements. SSM produces range-dependent system poles that can be classified parametrically with spectral peaks at the cardiac and respiratory frequencies. It is shown that SSM produces accurate estimates of the vital signs without producing harmonics and inter-modulation products that plague signal resolution in widely used FFT spectrograms.
A 1% and 1 cm Perspective Leads to a Novel CDOM Absorption Algorithm
NASA Technical Reports Server (NTRS)
Morrow, J. H.; Hooker, S. B.; Matsuoka, A.
2012-01-01
A next-generation in-water profiler designed to measure the apparent optical properties of seawater was developed and validated across a wide dynamic range of water properties. This new Compact-Optical Profiling System (C-OPS) design uses a novel, kite-shaped, free-falling backplane with adjustable buoyancy and is based on 19 state-of-the-art microradiometers, spanning 320-780 nm. Data collected as part of the field commissioning were of a previously unachievable quality and showed that systematic uncertainties in the sampling protocols were discernible at the 1% optical and 1 cm depth resolution levels. A sensitivity analysis as a function of three water types, established by the peak in the remote sensing reflectance spectra, revealed which water types and spectral domains were the most indicative of data acquisition uncertainties. The unprecedented vertical resolution of C-OPS measurements provided near-surface data products at the spectral endpoints with a quality level that had not previously been obtainable. The improved data allowed development of an algorithm for predicting the spectral absorption due to chromophoric dissolved organic matter (CDOM) using ratios of diffuse attenuation coefficients, with over 99% of the variance in the data explained.
Improved Atmospheric Soundings and Error Estimates from Analysis of AIRS/AMSU Data
NASA Technical Reports Server (NTRS)
Susskind, Joel
2007-01-01
The AIRS Science Team Version 5.0 retrieval algorithm became operational at the Goddard DAAC in July 2007, generating near real-time products from analysis of AIRS/AMSU sounding data. This algorithm contains many significant theoretical advances over the AIRS Science Team Version 4.0 retrieval algorithm used previously. Three very significant developments in Version 5 are: 1) the development and implementation of an improved Radiative Transfer Algorithm (RTA) which allows for accurate treatment of non-Local Thermodynamic Equilibrium (non-LTE) effects on shortwave sounding channels; 2) the development of methodology to obtain very accurate case-by-case product error estimates which are in turn used for quality control; and 3) the development of an accurate AIRS-only cloud clearing and retrieval system. These theoretical improvements taken together enabled a new methodology to be developed which further improves soundings in partially cloudy conditions, without the need for microwave observations in the cloud clearing step as has been done previously. In this methodology, longwave CO2 channel observations in the spectral region 700 cm-1 to 750 cm-1 are used exclusively for cloud clearing purposes, while shortwave CO2 channels in the spectral region 2195 cm-1 to 2395 cm-1 are used for temperature sounding purposes. The new methodology for improved error estimates and their use in quality control is described briefly and results are shown indicative of their accuracy. Results are also shown of forecast impact experiments assimilating AIRS Version 5.0 retrieval products in the Goddard GEOS-5 Data Assimilation System using different quality control thresholds.
A framelet-based iterative maximum-likelihood reconstruction algorithm for spectral CT
NASA Astrophysics Data System (ADS)
Wang, Yingmei; Wang, Ge; Mao, Shuwei; Cong, Wenxiang; Ji, Zhilong; Cai, Jian-Feng; Ye, Yangbo
2016-11-01
Standard computed tomography (CT) cannot reproduce spectral information of an object. Hardware solutions include dual-energy CT, which scans the object twice at different x-ray energy levels, and energy-discriminative detectors, which can separate lower and higher energy levels from a single x-ray scan. In this paper, we propose a software solution and give an iterative algorithm that reconstructs an image with spectral information from just one scan with a standard energy-integrating detector. The spectral information obtained can be used to produce color CT images, spectral curves of the attenuation coefficient μ(r,E) at points inside the object, and photoelectric images, which are all valuable imaging tools in cancer diagnosis. Our software solution requires no hardware changes to a CT machine. With the Shepp-Logan phantom, we found that although the photoelectric and Compton components were not perfectly reconstructed, their composite effect was very accurately reconstructed as compared to the ground truth and the dual-energy CT counterpart. This means that our proposed method has an intrinsic benefit in beam hardening correction and metal artifact reduction. The algorithm is based on a nonlinear polychromatic acquisition model for x-ray CT. The key technique is a sparse representation of iterations in a framelet system. Convergence of the algorithm is studied. This is believed to be the first application of framelet imaging tools to a nonlinear inverse problem.
NASA Astrophysics Data System (ADS)
Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.
2016-09-01
Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43-73%) without sacrificing CT number accuracy or spatial resolution.
DOA estimation of noncircular signals for coprime linear array via locally reduced-dimensional Capon
NASA Astrophysics Data System (ADS)
Zhai, Hui; Zhang, Xiaofei; Zheng, Wang
2018-05-01
We investigate the problem of direction of arrival (DOA) estimation of noncircular signals for a coprime linear array (CLA). The noncircular property enhances the degrees of freedom and improves angle estimation performance, but it leads to a more complex angle ambiguity problem. To eliminate the ambiguity, we theoretically prove that the actual DOAs of noncircular signals can be uniquely estimated by finding the coincident results from the two decomposed subarrays, based on coprimeness. We propose a locally reduced-dimensional (RD) Capon algorithm for DOA estimation of noncircular signals for CLA. The RD processing is used to avoid a two-dimensional (2D) spectral peak search, and coprimeness is employed to avoid a global spectral peak search. The proposed algorithm requires only a one-dimensional local spectral peak search, and it has very low computational complexity. Furthermore, it needs no prior knowledge of the number of sources. We also derive the Cramér-Rao bound for DOA estimation of noncircular signals in CLA. Numerical simulation results demonstrate the effectiveness and superiority of the algorithm.
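For orientation, the Capon spatial spectrum that the locally reduced-dimensional search evaluates has the standard form P(theta) = 1 / (a^H R^{-1} a); a sketch for a uniform linear subarray follows, with the local search expressed simply as a narrower angle grid. The array geometry and noise levels in the usage lines are illustrative assumptions.

```python
import numpy as np

def capon_spectrum(R, array_pos, angles_deg, wavelength):
    """1-D Capon spatial spectrum for a linear array; a local search
    simply means passing a narrower `angles_deg` grid here."""
    Rinv = np.linalg.inv(R)
    p = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * array_pos * np.sin(th) / wavelength)
        p.append(1.0 / np.real(a.conj() @ Rinv @ a))
    return np.array(p)

lam = 1.0
pos = 0.5 * lam * np.arange(8)                    # half-wavelength ULA
a0 = np.exp(2j * np.pi * pos * np.sin(np.deg2rad(15)) / lam)
snaps = a0[:, None] * np.exp(2j * np.pi * np.random.rand(1, 200)) \
        + 0.1 * (np.random.randn(8, 200) + 1j * np.random.randn(8, 200))
R = snaps @ snaps.conj().T / 200                  # sample covariance
p = capon_spectrum(R, pos, np.arange(10, 20, 0.25), lam)  # local sector only
```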
Comparison of GOES Cloud Classification Algorithms Employing Explicit and Implicit Physics
NASA Technical Reports Server (NTRS)
Bankert, Richard L.; Mitrescu, Cristian; Miller, Steven D.; Wade, Robert H.
2009-01-01
Cloud-type classification based on multispectral satellite imagery data has been widely researched and demonstrated to be useful for distinguishing a variety of classes using a wide range of methods. The research described here is a comparison of the classifier output from two very different algorithms applied to Geostationary Operational Environmental Satellite (GOES) data over the course of one year. The first algorithm employs spectral channel thresholding and additional physically based tests. The second algorithm was developed through a supervised learning method with characteristic features of expertly labeled image samples used as training data for a 1-nearest-neighbor classification. The latter's ability to identify classes is also based in physics, but those relationships are embedded implicitly within the algorithm. A pixel-to-pixel comparison analysis was done for hourly daytime scenes within a region in the northeastern Pacific Ocean. Considerable agreement was found in this analysis, with many of the mismatches or disagreements providing insight to the strengths and limitations of each classifier. Depending upon user needs, a rule-based or other postprocessing system that combines the output from the two algorithms could provide the most reliable cloud-type classification.
Spectral correction algorithm for multispectral CdTe x-ray detectors
NASA Astrophysics Data System (ADS)
Christensen, Erik D.; Kehres, Jan; Gu, Yun; Feidenhans'l, Robert; Olsen, Ulrik L.
2017-09-01
Compared to the dual-energy scintillator detectors widely used today, pixelated multispectral X-ray detectors show the potential to improve material identification in various radiography and tomography applications used for industrial and security purposes. However, detector effects, such as charge sharing and photon pileup, distort the measured spectra in high-flux pixelated multispectral detectors. These effects significantly reduce the detectors' suitability for material identification, which requires accurate spectral measurements. We have developed a semi-analytical computational algorithm for multispectral CdTe X-ray detectors which corrects the measured spectra for severe spectral distortions caused by the detector. The algorithm is developed for the Multix ME100 CdTe X-ray detector, but could potentially be adapted for any pixelated multispectral CdTe detector. The calibration of the algorithm is based on simple attenuation measurements of commercially available materials using standard laboratory sources, making the algorithm applicable in any X-ray setup. The validation of the algorithm has been done using experimental data acquired with both standard lab equipment and synchrotron radiation. The experiments show that the algorithm is fast, reliable even at X-ray fluxes up to 5 Mph/s/mm2, and greatly improves the accuracy of the measured X-ray spectra, making it very useful for both security and industrial applications where multispectral detectors are used.
NASA Astrophysics Data System (ADS)
Bauer, Jacob R.; van Beekum, Karlijn; Klaessens, John; Noordmans, Herke Jan; Boer, Christa; Hardeberg, Jon Y.; Verdaasdonk, Rudolf M.
2018-02-01
Non-contact, spatially resolved oxygenation measurement remains an open challenge in the biomedical field and in non-contact patient monitoring. Although point measurements are still the clinical standard, mapping regional differences in oxygenation would improve the quality and safety of care. Recent developments in spectral imaging have resulted in spectral filter array (SFA) cameras. These provide the means to acquire spatial spectral video in real time and allow a spatial approach to spectroscopy. In this study, the performance of a 25-channel near-infrared SFA camera was studied to obtain spatial oxygenation maps of the hands during an occlusion of the left upper arm in 7 healthy volunteers. For comparison, a clinical oxygenation monitoring system, INVOS, was used as a reference. For the NIR SFA camera, oxygenation curves were derived from 2-3 wavelength bands with custom fast-analysis software using a basic algorithm. Dynamic oxygenation changes determined with the NIR SFA camera and the INVOS system at different regional locations of the occluded versus non-occluded hands were in good agreement. To increase the signal-to-noise ratio, the algorithm and image acquisition were optimised. The measurements were robust to different illumination conditions with NIR light sources. This study shows that imaging of relative oxygenation changes over larger body areas is potentially possible in real time.
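As an illustration of a basic two-band oxygenation algorithm of the kind mentioned above, the sketch below inverts a modified Beer-Lambert model at two NIR bands; the extinction coefficients are illustrative placeholders, and this is a generic scheme, not the specific SFA or INVOS processing.

```python
import numpy as np

# Extinction coefficients (1/(cm*M)) for Hb and HbO2 at two NIR bands;
# illustrative values only -- real coefficients must come from tables.
EPS = {"760nm": {"Hb": 1674.0, "HbO2": 586.0},
       "850nm": {"Hb": 691.0, "HbO2": 1058.0}}

def oxygenation(I, I0):
    """Relative tissue oxygenation from attenuation at two NIR bands via
    a modified Beer-Lambert inversion (path length cancels in the ratio)."""
    A = {b: -np.log(I[b] / I0[b]) for b in EPS}          # absorbance per band
    e = np.array([[EPS[b]["HbO2"], EPS[b]["Hb"]] for b in EPS])
    hbo2, hb = np.linalg.solve(e, np.array([A[b] for b in EPS]))
    return hbo2 / (hbo2 + hb)

print(oxygenation({"760nm": 0.60, "850nm": 0.55},
                  {"760nm": 1.00, "850nm": 1.00}))
```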
Chen, Yao-Yi; Dasari, Surendra; Ma, Ze-Qiang; Vega-Montoto, Lorenzo J; Li, Ming; Tabb, David L
2012-09-01
Spectral counting has become a widely used approach for measuring and comparing protein abundance in label-free shotgun proteomics. However, when analyzing complex samples, the ambiguity of matching between peptides and proteins greatly affects the assessment of peptide and protein inventories, differentiation, and quantification. Meanwhile, the configuration of database searching algorithms that assign peptides to MS/MS spectra may produce different results in comparative proteomic analysis. Here, we present three strategies to improve comparative proteomics through spectral counting. We show that comparing spectral counts for peptide groups rather than for protein groups forestalls problems introduced by shared peptides. We demonstrate the advantage and flexibility of this new method in two datasets. We present four models to combine four popular search engines that lead to significant gains in spectral counting differentiation. Among these models, we demonstrate a powerful vote counting model that scales well for multiple search engines. We also show that semi-tryptic searching outperforms tryptic searching for comparative proteomics. Overall, these techniques considerably improve protein differentiation on the basis of spectral count tables.
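A toy version of the vote-counting combination is sketched below: a peptide group is reported only when enough engines agree. The threshold and peptide names are hypothetical; the published model's scoring details are not reproduced here.

```python
from collections import Counter

def vote_count(calls_by_engine, min_votes=3):
    """Vote-counting combination of search engines: a peptide group is
    reported as differential only if at least `min_votes` engines agree."""
    votes = Counter(pep for calls in calls_by_engine for pep in calls)
    return {pep for pep, n in votes.items() if n >= min_votes}

engines = [{"PEPTIDE_A", "PEPTIDE_B"},
           {"PEPTIDE_A", "PEPTIDE_C"},
           {"PEPTIDE_A", "PEPTIDE_B"},
           {"PEPTIDE_B"}]
print(vote_count(engines, min_votes=3))   # {'PEPTIDE_A', 'PEPTIDE_B'}
```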
Samanipour, Saer; Baz-Lomba, Jose A; Alygizakis, Nikiforos A; Reid, Malcolm J; Thomaidis, Nikolaos S; Thomas, Kevin V
2017-06-09
LC-HR-QTOF-MS has recently become a commonly used approach for the analysis of complex samples. However, identification of small organic molecules in complex samples with the highest level of confidence is a challenging task. Here we report on the implementation of a two-stage algorithm for LC-HR-QTOF-MS datasets. We compared the performance of the two-stage algorithm, implemented via NIVA_MZ_Analyzer™, with two commonly used approaches (i.e. feature detection and XIC peak picking, implemented via UNIFI by Waters and TASQ by Bruker, respectively) for the suspect analysis of four influent wastewater samples. We first evaluated the cross-platform compatibility of LC-HR-QTOF-MS datasets generated via instruments from two different manufacturers (i.e. Waters and Bruker). Our data showed that, with an appropriate spectral weighting function, the spectra recorded by the two tested instruments are comparable for our analytes. As a consequence, we were able to perform full spectral comparison between the data generated via the two studied instruments. Four extracts of wastewater influent were analyzed for 89 analytes, giving 356 detection cases. The analytes were divided into 158 detection cases of artificial suspect analytes (i.e. verified by target analysis) and 198 true suspects. The two-stage algorithm resulted in a zero rate of false positive detection, based on the artificial suspect analytes, while producing a rate of false negative detection of 0.12. For the conventional approaches, the rates of false positive detection varied between 0.06 for UNIFI and 0.15 for TASQ. The rates of false negative detection for these methods ranged between 0.07 for TASQ and 0.09 for UNIFI. The effect of background signal complexity on the two-stage algorithm was evaluated through the generation of a synthetic signal. We further discuss the boundaries of applicability of the two-stage algorithm. The importance of background knowledge and experience in evaluating the reliability of results during suspect screening was also evaluated.
Ionospheric Irregularities: Source, Structure, Plasma Processes and Effects on Sensor Systems
1989-10-31
The correlation analysis employed the modified Fedor algorithm [Fedor, 1967], and a spectral analysis algorithm was applied to data sampled every 240 m along the orbital track [Heelis et al., 1981].
Pavlic, Marion; Libiseller, Kathrin; Oberacher, Herbert
2006-09-01
The potential of the combined use of ESI-QqTOF-MS and ESI-QqTOF-MS/MS with mass-spectral library search for the identification of therapeutic and illicit drugs has been evaluated. Reserpine was used for standardizing experimental conditions and for characterizing the performance of the applied mass spectrometric system. Experiments revealed that, because of the mass accuracy, the stability of calibration, and the reproducibility of fragmentation, the QqTOF mass spectrometer is an appropriate platform for establishing a tandem-mass-spectral library. Three hundred nineteen substances were used as reference samples to build the spectral library. For each reference compound, product-ion spectra were acquired at ten different collision-energy values between 5 eV and 50 eV. For identification of unknown compounds, a library search algorithm was developed. The closeness of matching between a measured product-ion spectrum and a spectrum stored in the library was characterized by a value called the "match probability", which took into account the number of matched fragment ions, the number of fragment ions observed in the two spectra, and the sum of the intensity differences calculated for matching fragments. A large value of the match probability indicated a close match between the measured and the reference spectrum. A unique feature of the library search algorithm, an implemented spectral purification option, enables characterization of multi-contributor fragment-ion spectra. With the aid of this software feature, substances comprising only 1.0% of the total amount of binary mixtures were unequivocally assigned, in addition to the isobaric main contributors. The spectral library was successfully applied to the characterization of 39 forensic casework samples.
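The sketch below shows a toy match-probability score in the spirit described: it rewards matched fragment m/z values and penalizes intensity differences and unmatched peaks. The weighting is an assumption of this sketch; the published formula is not reproduced in the abstract.

```python
def match_probability(query, reference, tol=0.02):
    """Toy match score: fraction of matched fragments, discounted by the
    summed intensity differences of the matches. Illustrative only."""
    matched, intensity_penalty = 0, 0.0
    for mz_q, int_q in query.items():
        for mz_r, int_r in reference.items():
            if abs(mz_q - mz_r) <= tol:
                matched += 1
                intensity_penalty += abs(int_q - int_r)
                break
    total = len(query) + len(reference)
    return (2.0 * matched / total) * (1.0 - intensity_penalty / max(matched, 1))

spec = {152.07: 0.8, 110.06: 0.3}          # hypothetical measured spectrum
ref = {152.06: 0.7, 110.07: 0.35, 93.05: 0.1}   # hypothetical library entry
print(match_probability(spec, ref))
```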
Spectral feature design in high dimensional multispectral data
NASA Technical Reports Server (NTRS)
Chen, Chih-Chien Thomas; Landgrebe, David A.
1988-01-01
The High resolution Imaging Spectrometer (HIRIS) is designed to acquire images simultaneously in 192 spectral bands in the 0.4 to 2.5 micrometers wavelength region. It will make possible the collection of essentially continuous reflectance spectra at a spectral resolution sufficient to extract significantly enhanced amounts of information from return signals as compared to existing systems. The advantages of such high dimensional data come at a cost of increased system and data complexity. For example, since the finer the spectral resolution, the higher the data rate, it becomes impractical to design the sensor to be operated continuously. It is essential to find new ways to preprocess the data which reduce the data rate while at the same time maintaining the information content of the high dimensional signal produced. Four spectral feature design techniques are developed from the Weighted Karhunen-Loeve Transforms: (1) non-overlapping band feature selection algorithm; (2) overlapping band feature selection algorithm; (3) Walsh function approach; and (4) infinite clipped optimal function approach. The infinite clipped optimal function approach is chosen since the features are easiest to find and their classification performance is the best. After the preprocessed data has been received at the ground station, canonical analysis is further used to find the best set of features under the criterion that maximal class separability is achieved. Both 100 dimensional vegetation data and 200 dimensional soil data were used to test the spectral feature design system. It was shown that the infinite clipped versions of the first 16 optimal features had excellent classification performance. The overall probability of correct classification is over 90 percent while providing for a reduced downlink data rate by a factor of 10.
Spectral dispersion and fringe detection in IOTA
NASA Technical Reports Server (NTRS)
Traub, W. A.; Lacasse, M. G.; Carleton, N. P.
1990-01-01
Pupil plane beam combination, spectral dispersion, detection, and fringe tracking are discussed for the IOTA interferometer. A new spectrometer design is presented in which the angular dispersion with respect to wavenumber is nearly constant. The dispersing element is a type of grism, a series combination of grating and prism, in which the constant parts of the dispersion add, but the slopes cancel. This grism is optimized for the display of channelled spectra. The dispersed fringes can be tracked by a matched-filter photon-counting correlator algorithm. This algorithm requires very few arithmetic operations per detected photon, making it well-suited for real-time fringe tracking. The algorithm is able to adapt to different stellar spectral types, intensity levels, and atmospheric time constants. The results of numerical experiments are reported.
Salehizadeh, Seyed M A; Dao, Duy; Bolkhovsky, Jeffrey; Cho, Chae; Mendelson, Yitzhak; Chon, Ki H
2015-12-23
Accurate estimation of heart rates from photoplethysmogram (PPG) signals during intense physical activity is a very challenging problem. This is because strenuous and high intensity exercise can result in severe motion artifacts in PPG signals, making accurate heart rate (HR) estimation difficult. In this study we investigated a novel technique to accurately reconstruct motion-corrupted PPG signals and HR based on time-varying spectral analysis. The algorithm is called Spectral filter algorithm for Motion Artifacts and heart rate reconstruction (SpaMA). The idea is to calculate the power spectral density of both PPG and accelerometer signals for each time shift of a windowed data segment. By comparing time-varying spectra of PPG and accelerometer data, those frequency peaks resulting from motion artifacts can be distinguished from the PPG spectrum. The SpaMA approach was applied to three different datasets and four types of activities: (1) training datasets from the 2015 IEEE Signal Process. Cup Database recorded from 12 subjects while performing treadmill exercise from 1 km/h to 15 km/h; (2) test datasets from the 2015 IEEE Signal Process. Cup Database recorded from 11 subjects while performing forearm and upper arm exercise. (3) Chon Lab dataset including 10 min recordings from 10 subjects during treadmill exercise. The ECG signals from all three datasets provided the reference HRs which were used to determine the accuracy of our SpaMA algorithm. The performance of the SpaMA approach was calculated by computing the mean absolute error between the estimated HR from the PPG and the reference HR from the ECG. The average estimation errors using our method on the first, second and third datasets are 0.89, 1.93 and 1.38 beats/min respectively, while the overall error on all 33 subjects is 1.86 beats/min and the performance on only treadmill experiment datasets (22 subjects) is 1.11 beats/min. Moreover, it was found that dynamics of heart rate variability can be accurately captured using the algorithm where the mean Pearson's correlation coefficient between the power spectral densities of the reference and the reconstructed heart rate time series was found to be 0.98. These results show that the SpaMA method has a potential for PPG-based HR monitoring in wearable devices for fitness tracking and health monitoring during intense physical activities.
Method for exploiting bias in factor analysis using constrained alternating least squares algorithms
Keenan, Michael R.
2008-12-30
Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
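A generic constrained ALS iteration, with nonnegativity enforced by clipping, looks like the sketch below; the patented biased-ALS procedure modifies how such constraints inject bias, which this illustration does not attempt to reproduce.

```python
import numpy as np

def constrained_als(D, k, iters=200, seed=0):
    """Alternating least squares factorization D ~ C @ S with
    nonnegativity enforced by clipping -- the kind of constraint-induced
    bias discussed above. A generic illustration only."""
    rng = np.random.default_rng(seed)
    C = rng.random((D.shape[0], k))
    for _ in range(iters):
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0, None)
        C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0, None)
    return C, S

rng = np.random.default_rng(1)
D = rng.random((40, 3)) @ rng.random((3, 120))        # synthetic mixed spectra
C, S = constrained_als(D, k=3)
print(np.linalg.norm(D - C @ S) / np.linalg.norm(D))  # relative residual
```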
Least-squares Legendre spectral element solutions to sound propagation problems.
Lin, W H
2001-02-01
This paper presents a novel algorithm and numerical results of sound wave propagation. The method is based on a least-squares Legendre spectral element approach for spatial discretization and the Crank-Nicolson [Proc. Cambridge Philos. Soc. 43, 50-67 (1947)] and Adams-Bashforth [D. Gottlieb and S. A. Orszag, Numerical Analysis of Spectral Methods: Theory and Applications (CBMS-NSF Monograph, Siam 1977)] schemes for temporal discretization to solve the linearized acoustic field equations for sound propagation. Two types of NASA Computational Aeroacoustics (CAA) Workshop benchmark problems [ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics, edited by J. C. Hardin, J. R. Ristorcelli, and C. K. W. Tam, NASA Conference Publication 3300, 1995a] are considered: a narrow Gaussian sound wave propagating in a one-dimensional space without flows, and the reflection of a two-dimensional acoustic pulse off a rigid wall in the presence of a uniform flow of Mach 0.5 in a semi-infinite space. The first problem was used to examine the numerical dispersion and dissipation characteristics of the proposed algorithm. The second problem was to demonstrate the capability of the algorithm in treating sound propagation in a flow. Comparisons were made of the computed results with analytical results and results obtained by other methods. It is shown that all results computed by the present method are in good agreement with the analytical solutions and results of the first problem agree very well with those predicted by other schemes.
Machine Learning Method for Pattern Recognition in Volcano Seismic Spectra
NASA Astrophysics Data System (ADS)
Radic, V.; Unglert, K.; Jellinek, M.
2016-12-01
Variations in the spectral content of volcano seismicity related to changes in volcanic activity are commonly identified manually in spectrograms. However, long time series of monitoring data at volcano observatories require tools to facilitate automated and rapid processing. Techniques such as Self-Organizing Maps (SOM), Principal Component Analysis (PCA) and clustering methods can help to quickly and automatically identify important patterns related to impending eruptions. In this study we develop and evaluate an algorithm applied to a set of synthetic volcano seismic spectra as well as observed spectra from Kīlauea Volcano, Hawai'i. Our goal is to retrieve a set of known spectral patterns that are associated with dominant phases of volcanic tremor before, during, and after periods of volcanic unrest. The algorithm is based on training a SOM on the spectra and then identifying local maxima and minima on the SOM 'topography'. The topography is derived from the first two PCA modes so that the maxima represent the SOM patterns that carry most of the variance in the spectra. Patterns identified in this way reproduce the known set of spectra. Our results show that, regardless of the level of white noise in the spectra, the algorithm can accurately reproduce the characteristic spectral patterns and their occurrence in time. The ability to rapidly classify spectra of volcano seismic data without prior knowledge of the character of the seismicity at a given volcanic system holds great potential for real-time or near-real-time applications, and thus ultimately for eruption forecasting.
NASA Astrophysics Data System (ADS)
Pu, Huangsheng; Zhang, Guanglei; He, Wei; Liu, Fei; Guang, Huizhi; Zhang, Yue; Bai, Jing; Luo, Jianwen
2014-09-01
It is a challenging problem to resolve and identify drug (or non-specific fluorophore) distribution throughout the whole body of small animals in vivo. In this article, an algorithm of unmixing multispectral fluorescence tomography (MFT) images based on independent component analysis (ICA) is proposed to solve this problem. ICA is used to unmix the data matrix assembled by the reconstruction results from MFT. Then the independent components (ICs) that represent spatial structures and the corresponding spectrum courses (SCs) which are associated with spectral variations can be obtained. By combining the ICs with SCs, the recovered MFT images can be generated and fluorophore concentration can be calculated. Simulation studies, phantom experiments and animal experiments with different concentration contrasts and spectrum combinations are performed to test the performance of the proposed algorithm. Results demonstrate that the proposed algorithm can not only provide the spatial information of fluorophores, but also recover the actual reconstruction of MFT images.
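A minimal sketch of the ICA unmixing step, using scikit-learn's FastICA on a synthetic band-by-voxel data matrix, is shown below; the layout (rows as bands, columns as voxels) and the toy spectra are assumptions of this sketch.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical data matrix: rows = spectral bands of the MFT
# reconstructions, columns = flattened voxels.
rng = np.random.default_rng(1)
spatial = np.abs(rng.normal(size=(2, 5000)))             # two "fluorophores"
spectra = np.array([[0.9, 0.6, 0.2], [0.1, 0.5, 0.8]])   # their toy spectra
X = spectra.T @ spatial                                  # bands x voxels

ica = FastICA(n_components=2, random_state=0)
ics = ica.fit_transform(X.T).T   # independent components ~ spatial maps
scs = ica.mixing_                # columns ~ spectrum courses (SCs)
print(ics.shape, scs.shape)
```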
Heart sound segmentation of pediatric auscultations using wavelet analysis.
Castro, Ana; Vinhoza, Tiago T V; Mattos, Sandra S; Coimbra, Miguel T
2013-01-01
Auscultation is widely applied in clinical practice; nonetheless, sound interpretation is dependent on clinician training and experience. Heart sound features such as spatial loudness, relative amplitude, murmurs, and the localization of each component may be indicative of pathology. In this study we propose a segmentation algorithm to extract the heart sound components (S1 and S2) based on their time and frequency characteristics. The algorithm takes advantage of knowledge of the heart cycle times (systolic and diastolic periods) and of the spectral characteristics of each component, through wavelet analysis. Data collected in a clinical environment and annotated by a clinician were used to assess the algorithm's performance. Heart sound components were correctly identified in 99.5% of the annotated events. S1 and S2 detection rates were 90.9% and 93.3%, respectively. The median difference between annotated and detected events was 33.9 ms.
GIFTS SM EDU Data Processing and Algorithms
NASA Technical Reports Server (NTRS)
Tian, Jialin; Johnson, David G.; Reisse, Robert A.; Gazarik, Michael J.
2007-01-01
The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiances using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three Focal Plane Arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the processing algorithms involved in the calibration stage. The calibration procedures can be subdivided into three stages. In the pre-calibration stage, a phase correction algorithm is applied to the decimated and filtered complex interferogram. The resulting imaginary part of the spectrum contains only the noise component of the uncorrected spectrum. Additional random noise reduction can be accomplished by applying a spectral smoothing routine to the phase-corrected blackbody reference spectra. In the radiometric calibration stage, we first compute the spectral responsivity based on the previous results, from which, the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. During the post-processing stage, we estimate the noise equivalent spectral radiance (NESR) from the calibrated ABB and HBB spectra. We then implement a correction scheme that compensates for the effect of fore-optics offsets. Finally, for off-axis pixels, the FPA off-axis effects correction is performed. To estimate the performance of the entire FPA, we developed an efficient method of generating pixel performance assessments. In addition, a random pixel selection scheme is designed based on the pixel performance evaluation.
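The radiometric calibration stage amounts to a two-point (ambient/hot blackbody) transfer; a schematic sketch follows, where b_abb and b_hbb stand for the Planck radiances of the two blackbodies. This is the textbook FTS scheme implied above, not the exact GIFTS code, which additionally works with complex spectra after phase correction.

```python
import numpy as np

def calibrate_scene(c_scene, c_abb, c_hbb, b_abb, b_hbb):
    """Two-point radiometric calibration: the ambient (ABB) and hot (HBB)
    blackbody spectra fix the instrument responsivity and offset, which
    are then applied to the scene spectrum."""
    responsivity = (c_hbb - c_abb) / (b_hbb - b_abb)   # counts per radiance
    return (c_scene - c_abb) / responsivity + b_abb    # calibrated radiance

# Hypothetical two-channel example with made-up counts and radiances
rad = calibrate_scene(c_scene=np.array([510., 620.]),
                      c_abb=np.array([400., 500.]),
                      c_hbb=np.array([800., 900.]),
                      b_abb=np.array([50., 60.]),
                      b_hbb=np.array([120., 130.]))
print(rad)
```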
NASA Technical Reports Server (NTRS)
Meng, J. C. S.; Thomson, J. A. L.
1975-01-01
A data analysis program constructed to assess LDV system performance, to validate the simulation model, and to test various vortex location algorithms is presented. Real or simulated Doppler spectra versus range and elevation are used, and the spatial distributions of various spectral moments or other spectral characteristics are calculated and displayed. Each of the real or simulated scans can be processed by one of three different procedures: simple frequency or wavenumber filtering, matched filtering, and deconvolution filtering. The final output is displayed as contour plots in an x-y coordinate system, as well as in the form of vortex tracks deduced from the maxima of the processed data. A detailed analysis of run number 1023 and run number 2023 is presented to demonstrate the data analysis procedure. Vortex tracks and system range resolutions are compared with theoretical predictions.
NASA Astrophysics Data System (ADS)
Shuxia, ZHAO; Lei, ZHANG; Jiajia, HOU; Yang, ZHAO; Wangbao, YIN; Weiguang, MA; Lei, DONG; Liantuan, XIAO; Suotang, JIA
2018-03-01
The chemical composition of alloys directly determines their mechanical behavior and application fields. Accurate and rapid analysis of both major and minor elements in alloys plays a key role in metallurgical quality control and material classification. A quantitative calibration-free laser-induced breakdown spectroscopy (CF-LIBS) analysis method, which carries out combined correction of plasma temperature and spectral intensity using a second-order iterative algorithm and two boundary standard samples, is proposed to realize accurate composition measurements. Experimental results show that, compared to conventional CF-LIBS analysis, the relative errors for the major elements Cu and Zn and the minor element Pb in copper-lead alloys have been reduced from 12%, 26% and 32% to 1.8%, 2.7% and 13.4%, respectively. The measurement accuracy for all elements has been improved substantially.
Swayze, G.A.; Clark, R.N.; Goetz, A.F.H.; Chrien, T.H.; Gorelick, N.S.
2003-01-01
Estimates of spectrometer band pass, sampling interval, and signal-to-noise ratio required for identification of pure minerals and plants were derived using reflectance spectra convolved to AVIRIS, HYDICE, MIVIS, VIMS, and other imaging spectrometers. For each spectral simulation, various levels of random noise were added to the reflectance spectra after convolution, and then each was analyzed with the Tetracorder spectra identification algorithm [Clark et al., 2003]. The outcome of each identification attempt was tabulated to provide an estimate of the signal-to-noise ratio at which a given percentage of the noisy spectra were identified correctly. Results show that spectral identification is most sensitive to the signal-to-noise ratio at narrow sampling interval values but is more sensitive to the sampling interval itself at broad sampling interval values because of spectral aliasing, a condition when absorption features of different materials can resemble one another. The band pass is less critical to spectral identification than the sampling interval or signal-to-noise ratio because broadening the band pass does not induce spectral aliasing. These conclusions are empirically corroborated by analysis of mineral maps of AVIRIS data collected at Cuprite, Nevada, between 1990 and 1995, a period during which the sensor signal-to-noise ratio increased up to sixfold. There are values of spectrometer sampling and band pass beyond which spectral identification of materials will require an abrupt increase in sensor signal-to-noise ratio due to the effects of spectral aliasing. Factors that control this threshold are the uniqueness of a material's diagnostic absorptions in terms of shape and wavelength isolation, and the spectral diversity of the materials found in nature and in the spectral library used for comparison. Array spectrometers provide the best data for identification when they critically sample spectra. The sampling interval should not be broadened to increase the signal-to-noise ratio in a photon-noise-limited system when high levels of accuracy are desired. It is possible, using this simulation method, to select optimum combinations of band-pass, sampling interval, and signal-to-noise ratio values for a particular application that maximize identification accuracy and minimize the volume of imaging data.
Spectral degree of polarization uniformity for polarization-sensitive OCT
NASA Astrophysics Data System (ADS)
Baumann, Bernhard; Zotter, Stefan; Pircher, Michael; Götzinger, Erich; Rauscher, Sabine; Glösmann, Martin; Lammer, Jan; Schmidt-Erfurth, Ursula; Gröger, Marion; Hitzenberger, Christoph K.
2015-12-01
Depolarization of light can be measured by polarization-sensitive optical coherence tomography (PS-OCT) and has been used to improve tissue discrimination as well as segmentation of pigmented structures. Most approaches to depolarization assessment for PS-OCT, such as the degree of polarization uniformity (DOPU), rely on measuring the uniformity of polarization states using spatial evaluation kernels. In this article, we present a different approach which exploits the spectral dimension. We introduce the spectral DOPU for pixelwise analysis of polarization state variations between sub-bands of the broadband light source spectrum. Alongside a comparison with conventional spatial and temporal DOPU algorithms, we demonstrate imaging in the healthy human retina, and apply the technique to contrast hard exudates in diabetic retinopathy and to investigate the pigment epithelium of the rat iris.
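A compact sketch of the DOPU computation follows: the norm of the averaged, normalized Stokes vector. Averaging over a spatial kernel gives the conventional DOPU; averaging over per-sub-band reconstructions of the same pixel gives the spectral variant proposed above (the sub-band splitting itself is omitted in this sketch).

```python
import numpy as np

def dopu(stokes, axis=-1):
    """Degree of polarization uniformity: the norm of the averaged,
    normalized Stokes vector (Q, U, V) over the given axis (spatial
    kernel samples, or spectral sub-bands for the spectral DOPU)."""
    q, u, v = stokes
    norm = np.sqrt(q**2 + u**2 + v**2)
    qn, un, vn = q / norm, u / norm, v / norm
    return np.sqrt(np.mean(qn, axis)**2 + np.mean(un, axis)**2
                   + np.mean(vn, axis)**2)

rng = np.random.default_rng(0)
q, u, v = rng.normal(size=(3, 1000, 9))   # 1000 pixels x 9 sub-bands
print(dopu((q, u, v)).shape)              # -> (1000,) per-pixel DOPU
```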
NASA Astrophysics Data System (ADS)
Salman, S. S.; Abbas, W. A.
2018-05-01
The goal of this study is to evaluate resolution enhancement and its effect on classification methods that use the spectral information of individual bands. A method is introduced to enhance the resolution of Landsat 8 imagery by combining the 30 m resolution spectral bands with the 15 m resolution panchromatic band 8, multispectral imagery being essential for land-cover extraction. Classification methods were used to classify several land covers recorded in OLI-8 imagery. Data-mining methods can be classified as either supervised or unsupervised. In supervised methods there is a predefined target, meaning the algorithm learns which values of the target are associated with which values of the predictor sample; the k-nearest neighbors and maximum likelihood algorithms are examined here as supervised methods. In unsupervised methods, no sample is identified as a target; the algorithm searches for structure and patterns across all variables, represented here by the fuzzy c-means clustering method. The NDVI vegetation index was used to compare the results of the classification methods; the percentage of dense vegetation from the maximum likelihood method gave the best results.
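For reference, the NDVI used to cross-check the classifiers is a simple band ratio; in the sketch below the Landsat 8 OLI band assignment (band 5 = NIR, band 4 = red) is assumed.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index; for Landsat 8 OLI,
    NIR is band 5 and red is band 4."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + 1e-12)   # epsilon guards zero division

nir = np.array([[0.5, 0.6], [0.4, 0.3]])
red = np.array([[0.1, 0.1], [0.3, 0.25]])
print(ndvi(nir, red))
```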
NASA Astrophysics Data System (ADS)
Li, Qingli; Liu, Hongying; Wang, Yiting; Sun, Zhen; Guo, Fangmin; Zhu, Jianzhong
2014-12-01
Histological observation of dual-stained colon sections is usually performed visually under a light microscope, or on a computer screen with the assistance of image processing software, in both research and clinical settings. These traditional methods are usually not sufficient to reliably differentiate spatially overlapping chromogens generated by different dyes. Hyperspectral microscopic imaging technology offers a solution to these constraints, as hyperspectral microscopic images contain information that allows differentiation between spatially co-located chromogens with similar but distinct spectra. In this paper, a hyperspectral microscopic imaging (HMI) system is used to identify methyl green and nitrotetrazolium blue chloride in dual-stained colon sections. Hyperspectral microscopic images are captured and a normalized score algorithm is proposed to identify the stains and generate the co-expression results. Experimental results show that the proposed normalized score algorithm generates more accurate co-localization results than the spectral angle mapper algorithm. Hyperspectral microscopic imaging can enhance the visualization of dual-stained colon sections and improve the contrast and legibility of each stain using their spectral signatures, which is helpful for pathologists performing histological analyses.
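The spectral angle mapper used as the comparison baseline reduces to the angle between a pixel spectrum and a stain reference spectrum, as in the sketch below; the proposed normalized-score algorithm itself is not reproduced here.

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral angle mapper score (radians) between a pixel spectrum and
    a stain reference spectrum; smaller angles mean a closer match."""
    cos = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

pixel = np.array([0.2, 0.5, 0.9])
ref = np.array([0.25, 0.55, 1.0])
print(spectral_angle(pixel, ref))
```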
Joint estimation of 2D-DOA and frequency based on space-time matrix and conformal array.
Wan, Liang-Tian; Liu, Lu-Tao; Si, Wei-Jian; Tian, Zuo-Xi
2013-01-01
Each element in a conformal array has a different pattern, which degrades the performance of conventional high-resolution direction-of-arrival (DOA) algorithms. In this paper, a joint frequency and two-dimensional DOA (2D-DOA) estimation algorithm for conformal arrays is proposed. The delay correlation function is used to suppress noise. Both spatial and time sampling are utilized to construct the space-time matrix. The frequency and 2D-DOA estimation are accomplished based on parallel factor (PARAFAC) analysis without spectral peak searching or parameter pairing. The proposed algorithm needs only four guiding elements with precise positions to estimate frequency and 2D-DOA; other instrumental elements can be arranged flexibly on the surface of the carrier. Simulation results demonstrate the effectiveness of the proposed algorithm.
Asynchronous Gossip for Averaging and Spectral Ranking
NASA Astrophysics Data System (ADS)
Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh
2014-08-01
We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
Wang, Shenghao; Zhang, Yuyan; Cao, Fuyi; Pei, Zhenying; Gao, Xuewei; Zhang, Xu; Zhao, Yong
2018-02-13
This paper presents a novel spectrum analysis tool named synergy adaptive moving window modeling based on immune clone algorithm (SA-MWM-ICA), motivated by the tedious labor involved in selecting pre-processing methods and spectral variables by prior experience. In this work, the immune clone algorithm is introduced into the spectrum analysis field as a new optimization strategy, covering a shortage of the traditional methods. Following the working principle of the human immune system, the performance of the quantitative model is regarded as the antigen, and a vector corresponding to that antigen is regarded as an antibody. Each antibody contains a pre-processing-method optimization region encoded by 11 decimal digits and a spectrum-variable optimization region formed by moving windows of changeable width and position. A set of original antibodies is created by modeling with this algorithm. After their affinities are calculated, antibodies with high affinity are selected for cloning; the higher the affinity, the more copies are made. Next, a hyper-mutation operation is applied to the cloned antibodies; the lower the affinity, the higher the mutation probability. Several antibodies with high affinity are created through these steps. A simulated dataset, a gasoline near-infrared spectra dataset, and a soil near-infrared spectra dataset are employed to verify the performance of SA-MWM-ICA. Analysis results show that quantitative models built with SA-MWM-ICA perform better, especially for relatively complex spectra, than traditional models such as partial least squares (PLS), moving window PLS (MWPLS), genetic algorithm PLS (GAPLS), and pretreatment method classification and adjustable parameter changeable size moving window PLS (CA-CSMWPLS). The selected pre-processing methods and spectrum variables are easily interpreted. The proposed method converges in a few generations and can be used not only for near-infrared spectroscopy but also for other similar spectral analyses, such as infrared spectroscopy.
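One generation of a generic clonal-selection loop, cloning in proportion to affinity and hyper-mutating inversely to it, can be sketched as follows; the SA-MWM-ICA antibody encoding (pre-processing digits plus moving windows) is not reproduced, so the real-valued vectors here are placeholders.

```python
import numpy as np

def immune_clone_step(antibodies, affinity, n_keep=4, clone_budget=20, rng=None):
    """One generation of a generic clonal-selection loop: clone the best
    antibodies in proportion to affinity and hyper-mutate inversely to
    affinity. A schematic of the strategy described above only."""
    rng = rng or np.random.default_rng()
    order = np.argsort(affinity)[::-1][:n_keep]          # best first
    new_pop = []
    for rank, i in enumerate(order):
        n_clones = max(1, clone_budget // (rank + 1))    # more copies if fitter
        scale = 0.1 * (rank + 1)                         # mutate weaker ones more
        for _ in range(n_clones):
            new_pop.append(antibodies[i]
                           + rng.normal(0, scale, antibodies.shape[1]))
    return np.array(new_pop)

pop = np.random.default_rng(0).random((8, 5))   # placeholder antibodies
aff = pop.sum(axis=1)                           # stand-in affinity
print(immune_clone_step(pop, aff).shape)
```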
Evaluation of centroiding algorithm error for Nano-JASMINE
NASA Astrophysics Data System (ADS)
Hara, Takuji; Gouda, Naoteru; Yano, Taihei; Yamada, Yoshiyuki
2014-08-01
The Nano-JASMINE mission has been designed to perform absolute astrometric measurements with unprecedented accuracy; the end-of-mission parallax standard error is required to be of the order of 3 milliarcseconds for stars brighter than 7.5 mag in the zw-band (0.6 μm-1.0 μm). These requirements set a stringent constraint on the accuracy of the estimation of the location of the stellar image on the CCD for each observation. However, each stellar image has an individual shape that depends on the spectral energy distribution of the star, the CCD properties, and the optics and its associated wavefront errors, so the centroiding algorithm must achieve high accuracy for any observable. Following the Gaia studies, we use an LSF-fitting method as the centroiding algorithm and investigate its systematic error for Nano-JASMINE. Furthermore, we found that the algorithm can be improved by restricting the sample LSFs when using a Principal Component Analysis. We show that the centroiding algorithm error decreases after adopting this method.
Li, Zhigang; Wang, Qiaoyun; Lv, Jiangtao; Ma, Zhenhe; Yang, Linjuan
2015-06-01
Spectroscopy is often applied when a rapid quantitative analysis is required, but one challenge is the translation of raw spectra into a final analysis. Derivative spectra are often used as a preliminary preprocessing step to resolve overlapping signals, enhance signal properties, and suppress unwanted spectral features that arise due to non-ideal instrument and sample properties. In this study, to improve quantitative analysis of near-infrared spectra, derivatives of noisy raw spectral data need to be estimated with high accuracy. A new spectral estimator based on singular perturbation technique, called the singular perturbation spectra estimator (SPSE), is presented, and the stability analysis of the estimator is given. Theoretical analysis and simulation experimental results confirm that the derivatives can be estimated with high accuracy using this estimator. Furthermore, the effectiveness of the estimator for processing noisy infrared spectra is evaluated using the analysis of beer spectra. The derivative spectra of the beer and the marzipan are used to build the calibration model using partial least squares (PLS) modeling. The results show that the PLS based on the new estimator can achieve better performance compared with the Savitzky-Golay algorithm and can serve as an alternative choice for quantitative analytical applications.
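For context, the Savitzky-Golay baseline that the SPSE estimator is benchmarked against is a one-liner in SciPy; the window length and polynomial order below are arbitrary illustrative choices.

```python
import numpy as np
from scipy.signal import savgol_filter

# First-derivative estimate of a noisy synthetic spectrum, the kind of
# baseline the SPSE estimator is compared against.
x = np.linspace(0, 1, 500)
rng = np.random.default_rng(0)
spectrum = np.exp(-((x - 0.5) / 0.05) ** 2) + 0.01 * rng.normal(size=x.size)
d1 = savgol_filter(spectrum, window_length=21, polyorder=3,
                   deriv=1, delta=x[1] - x[0])
```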
Generation, Validation, and Application of Abundance Map Reference Data for Spectral Unmixing
NASA Astrophysics Data System (ADS)
Williams, McKay D.
Reference data ("ground truth") maps traditionally have been used to assess the accuracy of imaging spectrometer classification algorithms. However, these reference data can be prohibitively expensive to produce, often do not include sub-pixel abundance estimates necessary to assess spectral unmixing algorithms, and lack published validation reports. Our research proposes methodologies to efficiently generate, validate, and apply abundance map reference data (AMRD) to airborne remote sensing scenes. We generated scene-wide AMRD for three different remote sensing scenes using our remotely sensed reference data (RSRD) technique, which spatially aggregates unmixing results from fine scale imagery (e.g., 1-m Ground Sample Distance (GSD)) to co-located coarse scale imagery (e.g., 10-m GSD or larger). We validated the accuracy of this methodology by estimating AMRD in 51 randomly-selected 10 m x 10 m plots, using seven independent methods and observers, including field surveys by two observers, imagery analysis by two observers, and RSRD using three algorithms. Results indicated statistically-significant differences between all versions of AMRD, suggesting that all forms of reference data need to be validated. Given these significant differences between the independent versions of AMRD, we proposed that the mean of all (MOA) versions of reference data for each plot and class were most likely to represent true abundances. We then compared each version of AMRD to MOA. Best case accuracy was achieved by a version of imagery analysis, which had a mean coverage area error of 2.0%, with a standard deviation of 5.6%. One of the RSRD algorithms was nearly as accurate, achieving a mean error of 3.0%, with a standard deviation of 6.3%, showing the potential of RSRD-based AMRD generation. Application of validated AMRD to specific coarse scale imagery involved three main parts: 1) spatial alignment of coarse and fine scale imagery, 2) aggregation of fine scale abundances to produce coarse scale imagery-specific AMRD, and 3) demonstration of comparisons between coarse scale unmixing abundances and AMRD. Spatial alignment was performed using our scene-wide spectral comparison (SWSC) algorithm, which aligned imagery with accuracy approaching the distance of a single fine scale pixel. We compared simple rectangular aggregation to coarse sensor point spread function (PSF) aggregation, and found that the PSF approach returned lower error, but that rectangular aggregation more accurately estimated true abundances at ground level. We demonstrated various metrics for comparing unmixing results to AMRD, including mean absolute error (MAE) and linear regression (LR). We additionally introduced reference data mean adjusted MAE (MA-MAE), and reference data confidence interval adjusted MAE (CIA-MAE), which account for known error in the reference data itself. MA-MAE analysis indicated that fully constrained linear unmixing of coarse scale imagery across all three scenes returned an error of 10.83% per class and pixel, with regression analysis yielding a slope = 0.85, intercept = 0.04, and R2 = 0.81. Our reference data research has demonstrated a viable methodology to efficiently generate, validate, and apply AMRD to specific examples of airborne remote sensing imagery, thereby enabling direct quantitative assessment of spectral unmixing performance.
Nasirudin, Radin A.; Mei, Kai; Panchev, Petar; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Fiebich, Martin; Noël, Peter B.
2015-01-01
Purpose: The exciting prospect of spectral CT (SCT) using photon-counting detectors (PCD) will lead to new techniques in computed tomography (CT) that take advantage of the additional spectral information provided. We introduce a method to reduce metal artifacts in X-ray tomography by incorporating knowledge obtained from SCT into a statistical iterative reconstruction scheme. We call our method Spectral-driven Iterative Reconstruction (SPIR). Method: The proposed algorithm consists of two main components: material decomposition and penalized maximum likelihood iterative reconstruction. In this study, spectral data acquisitions with an energy-resolving PCD were simulated using a Monte Carlo simulator based on the EGSnrc C++ class library. A jaw phantom with a dental implant made of gold was used as the object, and three dental implant shapes were simulated separately to test the influence of prior knowledge on the overall performance of the algorithm. The generated projection data were first decomposed into three basis functions: photoelectric absorption, Compton scattering, and the attenuation of gold. A pseudo-monochromatic sinogram was calculated and used as input to the reconstruction, while the spatial information of the gold implant was used as a prior. The results of the algorithm were assessed and benchmarked against state-of-the-art reconstruction methods. Results: The decomposition results illustrate that a gold implant of any of the tested shapes can be distinguished from the other components of the phantom. Additionally, the penalized maximum likelihood iterative reconstruction shows that artifacts are significantly reduced in SPIR-reconstructed slices in comparison to other known techniques, while details around the implant are preserved. Quantitatively, the SPIR algorithm best reflects the true attenuation values in comparison to the other algorithms. Conclusion: The combination of the additional information from spectral CT and statistical reconstruction can significantly improve image quality, in particular by reducing the streaking artifacts caused by the presence of materials with high atomic numbers. PMID:25955019
Handwritten text line segmentation by spectral clustering
NASA Astrophysics Data System (ADS)
Han, Xuecheng; Yao, Hui; Zhong, Guoqiang
2017-02-01
Since handwritten text lines are generally skewed and not clearly separated, text line segmentation of handwritten document images remains a challenging problem. In this paper, we propose a novel text line segmentation algorithm based on spectral clustering. Given a handwritten document image, we first convert it to a binary image and then compute the adjacency matrix of the pixel points. We apply spectral clustering to this similarity matrix and use the orthogonal k-means clustering algorithm to group the text lines. Experiments on a Chinese handwritten document database (HIT-MW) demonstrate the effectiveness of the proposed method.
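The paper's own adjacency construction and orthogonal k-means step are not fully specified above, so the following sketch substitutes scikit-learn's spectral clustering on foreground pixel coordinates, with the number of text lines assumed known:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def segment_text_lines(binary_img, n_lines):
    """Cluster foreground pixels of a binarized document page into text lines."""
    ys, xs = np.nonzero(binary_img)                 # foreground pixel coordinates
    coords = np.column_stack([ys, xs]).astype(float)
    # A nearest-neighbor affinity approximates the pixel adjacency graph.
    labels = SpectralClustering(n_clusters=n_lines,
                                affinity="nearest_neighbors",
                                n_neighbors=10,
                                assign_labels="kmeans",
                                random_state=0).fit_predict(coords)
    out = np.full(binary_img.shape, -1, dtype=int)  # -1 marks background
    out[ys, xs] = labels
    return out
```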
Utilization of all Spectral Channels of IASI for the Retrieval of the Atmospheric State
NASA Astrophysics Data System (ADS)
Del Bianco, S.; Cortesi, U.; Carli, B.
2010-12-01
The retrieval of atmospheric state parameters from broadband measurements acquired by high-spectral-resolution sensors, such as the Infrared Atmospheric Sounding Interferometer (IASI) onboard the Meteorological Operational (MetOp) platform, generally requires dealing with a prohibitively large number of spectral elements per observation (8461 samples in the case of IASI, covering the 645-2760 cm-1 range with a resolution of 0.5 cm-1 and a spectral sampling of 0.25 cm-1). Most inversion algorithms developed for both operational and scientific analysis of IASI spectra therefore reduce the data - typically by channel selection, super-channel clustering or Principal Component Analysis (PCA) - in order to handle the high dimensionality of the problem. Accordingly, simultaneous processing of all IASI channels has received relatively little attention. Here we demonstrate the feasibility of a retrieval approach exploiting all spectral channels of IASI to extract information on water vapor, temperature and ozone profiles. This multi-target retrieval removes the systematic errors due to interfering parameters and makes channel selection unnecessary. The challenging computation is made possible by using a coarse spectral grid for the forward model calculation and by abating the associated modeling errors through a variance-covariance matrix of the residuals that accounts for all forward model errors.
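The operational inversion code is not reproducible from this abstract, but all-channel multi-target retrievals of this kind are typically built on the linearized maximum a posteriori estimate; a generic sketch with synthetic matrices, where inflating the measurement covariance is one way to absorb forward-model error from a coarse spectral grid:

```python
import numpy as np

def linear_retrieval(y, K, xa, Sa, Se):
    """MAP retrieval: x = xa + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - K xa).

    y  : measured radiances (all channels, e.g. 8461 for IASI)
    K  : Jacobian of the forward model about the prior state xa
    Sa : prior covariance; Se : measurement + forward-model error covariance
    (an inflated Se is how residual forward-model error can be absorbed)
    """
    Se_inv_K = np.linalg.solve(Se, K)
    lhs = K.T @ Se_inv_K + np.linalg.inv(Sa)
    rhs = K.T @ np.linalg.solve(Se, y - K @ xa)
    return xa + np.linalg.solve(lhs, rhs)
```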
Device, Algorithm and Integrated Modeling Research for Performance-Driven Multi-Modal Optical Sensors
2012-12-17
... to feature-aided tracking using spectral information. A novel technique for spectral waveband selection was developed and used as part of ... of spectral information using the tunable single-pixel spectrometer concept. A database was developed of spectral reflectance measurements ... exploring the utility of spectral and polarimetric information to help with the vehicle tracking application. Through the use of both ...
NASA Astrophysics Data System (ADS)
Nishimura, Takahiro; Kimura, Hitoshi; Ogura, Yusuke; Tanida, Jun
2018-06-01
This paper presents an experimental assessment and analysis of super-resolution microscopy based on multiple-point spread function fitting of spectrally demultiplexed images, using a designed DNA structure as a test target. For this purpose, a DNA structure was designed to have binding sites at a certain interval that is smaller than the diffraction limit. The structure was labeled with several types of quantum dots (QDs) to acquire their spatial information as spectrally encoded images. The obtained images are analyzed with a point spread function multifitting algorithm to determine the QD locations that indicate the binding site positions. The experimental results show that the labeled locations can be observed beyond the diffraction-limited resolution using three-colored fluorescence images obtained with a confocal fluorescence microscope. Numerical simulations show that labeling with eight types of QDs enables positions aligned at 27.2-nm pitches on the DNA structure to be resolved with high accuracy.
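The multifitting algorithm itself is not reproduced above; a minimal stand-in fits a sum of 2D Gaussian PSFs to one spectrally demultiplexed image with SciPy, assuming the number of emitters in the channel is known and initial guesses are available:

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_gauss2d(xy, *params):
    """Sum of 2D Gaussians; params = (A, x0, y0, sigma) per emitter."""
    x, y = xy
    model = np.zeros_like(x, dtype=float)
    for i in range(0, len(params), 4):
        A, x0, y0, s = params[i:i + 4]
        model += A * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * s ** 2))
    return model

def fit_emitters(image, initial_guesses):
    """initial_guesses: flat list (A, x0, y0, sigma) per suspected emitter."""
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    xy = (xx.ravel().astype(float), yy.ravel().astype(float))
    popt, _ = curve_fit(multi_gauss2d, xy, image.ravel(), p0=initial_guesses)
    return np.asarray(popt).reshape(-1, 4)   # fitted (A, x0, y0, sigma) rows
```

Sub-diffraction positions then come from the fitted (x0, y0) centers across the color channels.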
Ichimaru, Y; Yanaga, T
1989-06-01
Spectral analysis of heart rate during 24-hr ambulatory electrocardiographic monitoring was carried out to characterize the heart rate spectral components of Cheyne-Stokes respiration (CSR) using the fast Fourier transform (FFT). Eight patients with congestive heart failure were selected for the study. FFT analyses were performed on 614.4-sec segments. From the power spectrum, five parameters were extracted to characterize CSR. The low peak frequencies in the eight subjects were between 0.0179 Hz (56 sec) and 0.0081 Hz (123 sec). The algorithm used to detect CSR is as follows: if (i) the LFPA/ULFA ratio was above 1.0 and (ii) the LFPP/MLFP ratio was above 4.0, then the power spectrum is suggestive of CSR. We conclude that the automatic detection of CSR by heart rate spectral analysis during ambulatory ECG monitoring may afford a tool for the evaluation of patients with congestive heart failure.
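The precise band edges behind the LFPA/ULFA and LFPP/MLFP parameters are not given in the abstract; the sketch below shows only the underlying computation, an FFT power spectrum of an evenly resampled heart-rate series with illustrative band limits and the area-ratio rule. The input file is hypothetical:

```python
import numpy as np

def band_area(freqs, psd, lo, hi):
    """Integrated spectral power in [lo, hi) Hz."""
    m = (freqs >= lo) & (freqs < hi)
    return psd[m].sum()

fs = 2.0                                  # evenly resampled HR series, Hz (assumed)
hr = np.loadtxt("hr_series.txt")          # hypothetical 614.4-s segment
hr = hr - hr.mean()
psd = np.abs(np.fft.rfft(hr)) ** 2
freqs = np.fft.rfftfreq(hr.size, d=1.0 / fs)

# Illustrative bands: ultra-low < 0.008 Hz; CSR-related low band 0.008-0.02 Hz
# (the reported CSR peaks, 56-123 s, fall in the latter).
lfpa = band_area(freqs, psd, 0.008, 0.02)
ulfa = band_area(freqs, psd, 0.0005, 0.008)
if lfpa / ulfa > 1.0:                     # analogous to the LFPA/ULFA > 1.0 rule
    print("Power spectrum suggestive of Cheyne-Stokes respiration")
```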
Characterizing multivariate decoding models based on correlated EEG spectral features
McFarland, Dennis J.
2013-01-01
Objective: Multivariate decoding methods are popular techniques for the analysis of neurophysiological data. The present study explored potential interpretive problems with these techniques when predictors are correlated. Methods: Data from sensorimotor rhythm-based cursor control experiments were analyzed offline with linear univariate and multivariate models. Features were derived from autoregressive (AR) spectral analysis of varying model order, which produced predictors that varied in their degree of correlation (i.e., multicollinearity). Results: The use of multivariate regression models resulted in much better prediction of target position than univariate regression models. However, with lower-order AR features, interpretation of the spectral patterns of the weights was difficult, likely due to the high degree of multicollinearity present with lower-order AR features. Conclusions: Care should be exercised when interpreting the pattern of weights of multivariate models with correlated predictors. Comparison with univariate statistics is advisable. Significance: While multivariate decoding algorithms are very useful for prediction, their utility for interpretation may be limited when predictors are correlated. PMID:23466267
Thomas, Robert Joseph; Mietus, Joseph E.; Peng, Chung-Kang; Gilmartin, Geoffrey; Daly, Robert W.; Goldberger, Ary L.; Gottlieb, Daniel J.
2007-01-01
Study Objectives: Complex sleep apnea is defined as sleep disordered breathing secondary to simultaneous upper airway obstruction and respiratory control dysfunction. The objective of this study was to assess the utility of an electrocardiogram (ECG)-based cardiopulmonary coupling technique to distinguish obstructive from central or complex sleep apnea. Design: Analysis of archived polysomnographic datasets. Setting: A laboratory for computational signal analysis. Interventions: None. Measurements and Results: The PhysioNet Sleep Apnea Database, consisting of 70 polysomnograms including single-lead ECG signals of approximately 8 hours duration, was used to train an ECG-based measure of autonomic and respiratory interactions (cardiopulmonary coupling) to detect periods of apnea and hypopnea, based on the presence of elevated low-frequency coupling (e-LFC). In the PhysioNet BIDMC Congestive Heart Failure Database (ECGs of 15 subjects), a pattern of “narrow spectral band” e-LFC was especially common. The algorithm was then applied to the Sleep Heart Health Study–I dataset, to select the 15 records with the highest amounts of broad and narrow spectral band e-LFC. The latter spectral characteristic seemed to detect not only periods of central apnea, but also obstructive hypopneas with a periodic breathing pattern. Applying the algorithm to 77 sleep laboratory split-night studies showed that the presence of narrow band e-LFC predicted an increased sensitivity to induction of central apneas by positive airway pressure. Conclusions: ECG-based spectral analysis allows automated, operator-independent characterization of probable interactions between respiratory dyscontrol and upper airway anatomical obstruction. The clinical utility of spectrographic phenotyping, especially in predicting failure of positive airway pressure therapy, remains to be more thoroughly tested. Citation: Thomas RJ; Mietus JE; Peng CK; Gilmartin G; Daly RW; Goldberger AL; Gottlieb DJ. Differentiating obstructive from central and complex sleep apnea using an automated electrocardiogram-based method. SLEEP 2007;30(12):1756-1769. PMID:18246985
Liu, Wei; Wang, Zhen-Zhong; Qing, Jian-Ping; Li, Hong-Juan; Xiao, Wei
2014-01-01
Background: Peach kernels, which contain various fatty acids, play an important role in the regulation of a variety of physiological and biological functions. Objective: To establish an innovative and rapid diffuse reflectance near-infrared spectroscopy (DR-NIR) method, along with chemometric techniques, for the qualitative and quantitative determination of peach kernels. Materials and Methods: Peach kernel samples from nine different origins were analyzed with high-performance liquid chromatography (HPLC) as a reference method. DR-NIR spectra were acquired in the range 1100-2300 nm. Principal component analysis (PCA) and the partial least squares regression (PLSR) algorithm were applied to obtain prediction models. The Savitzky-Golay first derivative was adopted for spectral pre-processing, and PCA was applied to classify the varieties of the samples. For the quantitative calibration, models for linoleic and oleic acids were established with the PLSR algorithm, and the optimal number of principal components (PCs) was selected with leave-one-out (LOO) cross-validation. The established models were evaluated with the root mean square error of deviation (RMSED) and the corresponding correlation coefficients (R2). Results: The PCA of the DR-NIR spectra yielded a clear classification of the two varieties of peach kernel. PLSR had the better predictive ability: the correlation coefficients of the two calibration models were above 0.99, and the RMSED values for linoleic and oleic acids were 1.266% and 1.412%, respectively. Conclusion: DR-NIR combined with PCA and the PLSR algorithm can be used to identify and quantify peach kernels efficiently and can also help solve the variety classification problem. PMID:25422544
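A sketch of the PLSR-with-LOO workflow described above, using scikit-learn; the input files are hypothetical and the Savitzky-Golay pre-processing step is omitted:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# X: DR-NIR spectra (n_samples x n_wavelengths); y: HPLC reference values
# (e.g., linoleic acid content). Both arrays are hypothetical here.
X = np.load("spectra.npy")
y = np.load("linoleic_acid.npy")

# Choose the number of PLS components by leave-one-out cross-validation.
rmse = []
for n in range(1, 16):
    pred = cross_val_predict(PLSRegression(n_components=n), X, y,
                             cv=LeaveOneOut())
    rmse.append(np.sqrt(np.mean((pred.ravel() - y) ** 2)))
best_n = int(np.argmin(rmse)) + 1
model = PLSRegression(n_components=best_n).fit(X, y)
```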
Novel hyperspectral prediction method and apparatus
NASA Astrophysics Data System (ADS)
Kemeny, Gabor J.; Crothers, Natalie A.; Groth, Gard A.; Speck, Kathy A.; Marbach, Ralf
2009-05-01
Both the power and the challenge of hyperspectral technologies lie in the very large amount of data produced by spectral cameras. While off-line methodologies allow the collection of gigabytes of data, extended data analysis sessions are required to convert the data into useful information. In contrast, real-time monitoring, such as on-line process control, requires that compression of spectral data and analysis occur at a sustained full camera data rate. Efficient, high-speed practical methods for calibration and prediction are therefore sought to optimize the value of hyperspectral imaging. A novel method of matched filtering known as science-based multivariate calibration (SBC) was developed for hyperspectral calibration. Classical (MLR) and inverse (PLS, PCR) methods are combined by spectroscopically measuring the spectral "signal" and statistically estimating the spectral "noise." The accuracy of the inverse model is thus combined with the easy interpretability of the classical model. The SBC method is optimized for hyperspectral data in the Hyper-CalTM software used for the present work. The prediction algorithms can then be downloaded into a dedicated FPGA-based High-Speed Prediction EngineTM module. Spectral pretreatments and calibration coefficients are stored on interchangeable SD memory cards, and predicted compositions are produced on a USB interface at real-time camera output rates. Applications include minerals, pharmaceuticals, food processing and remote sensing.
NASA Technical Reports Server (NTRS)
Hussaini, M. Y. (Editor); Kumar, A. (Editor); Salas, M. D. (Editor)
1993-01-01
The purpose here is to assess the state of the art in the areas of numerical analysis that are particularly relevant to computational fluid dynamics (CFD), to identify promising new developments in various areas of numerical analysis that will impact CFD, and to establish a long-term perspective focusing on opportunities and needs. Overviews are given of discretization schemes, computational fluid dynamics, algorithmic trends in CFD for aerospace flow field calculations, simulation of compressible viscous flow, and massively parallel computation. Also discussed are acceleration methods, spectral and high-order methods, multi-resolution and subcell resolution schemes, and inherently multidimensional schemes.
Schmidt, K; Witte, H
1999-11-01
Recently the assumption of the independence of individual frequency components in a signal has been rejected, for example, for the EEG during defined physiological states such as sleep or sedation [9, 10]. Thus, the use of higher-order spectral analysis, capable of detecting interrelations between individual signal components, has proved useful. The aim of the present study was to investigate the quality of various non-parametric and parametric estimation algorithms using simulated as well as real physiological data. We employed standard algorithms available for MATLAB. The results clearly show that parametric bispectral estimation is superior to non-parametric estimation in terms of the quality of peak localisation and the discrimination from other peaks.
Estimating gene function with least squares nonnegative matrix factorization.
Wang, Guoli; Ochs, Michael F
2007-01-01
Nonnegative matrix factorization is a machine learning algorithm that has extracted information from data in a number of fields, including imaging and spectral analysis, text mining, and microarray data analysis. One limitation with the method for linking genes through microarray data in order to estimate gene function is the high variance observed in transcription levels between different genes. Least squares nonnegative matrix factorization uses estimates of the uncertainties on the mRNA levels for each gene in each condition, to guide the algorithm to a local minimum in normalized chi2, rather than a Euclidean distance or divergence between the reconstructed data and the data itself. Herein, application of this method to microarray data is demonstrated in order to predict gene function.
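One standard way to realize the normalized chi-squared objective is weighted multiplicative updates; the sketch below is that generic weighted NMF, not necessarily the authors' exact implementation:

```python
import numpy as np

def ls_nmf(V, sigma, rank, n_iter=500, eps=1e-9):
    """Weighted NMF: minimize sum((V - WH)^2 / sigma^2) with W, H >= 0.

    V     : data matrix (e.g., genes x conditions mRNA levels)
    sigma : per-entry uncertainty estimates, same shape as V
    """
    U = 1.0 / (sigma ** 2)                     # per-entry weights
    rng = np.random.default_rng(0)
    W = rng.uniform(0.1, 1.0, (V.shape[0], rank))
    H = rng.uniform(0.1, 1.0, (rank, V.shape[1]))
    for _ in range(n_iter):
        # Multiplicative updates keep W, H nonnegative at every step.
        H *= (W.T @ (U * V)) / (W.T @ (U * (W @ H)) + eps)
        W *= ((U * V) @ H.T) / ((U * (W @ H)) @ H.T + eps)
    return W, H
```

Down-weighting noisy entries via sigma is what keeps high-variance genes from dominating the factorization.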
Naguib, Ibrahim A; Abdelrahman, Maha M; El Ghobashy, Mohamed R; Ali, Nesma A
2016-01-01
Two accurate, sensitive, and selective stability-indicating methods were developed and validated for the simultaneous quantitative determination of agomelatine (AGM) and its forced degradation products (Deg I and Deg II), whether in pure form or in pharmaceutical formulations. Partial least-squares regression (PLSR) and spectral residual augmented classical least-squares (SRACLS) are two chemometric models subjected to a comparative study through the handling of UV spectral data in the range 215-350 nm. For proper analysis, a three-factor, four-level experimental design was established, resulting in a training set of 16 mixtures containing different ratios of the interfering species. An independent test set of eight mixtures was used to validate the prediction ability of the suggested models. The results indicate the ability of the mentioned multivariate calibration models to analyze AGM, Deg I, and Deg II with high selectivity and accuracy. The analysis results for the pharmaceutical formulations were statistically compared to a reference HPLC method, with no significant differences observed regarding accuracy and precision. The SRACLS model gives results comparable to the PLSR model; however, it retains the qualitative spectral information of the classical least-squares algorithm for the analyzed components.
Spatial-Spectral Approaches to Edge Detection in Hyperspectral Remote Sensing
NASA Astrophysics Data System (ADS)
Cox, Cary M.
This dissertation advances geoinformation science at the intersection of hyperspectral remote sensing and edge detection methods. A relatively new phenomenology among its remote sensing peers, hyperspectral imagery (HSI) comprises only about 7% of all remote sensing research - there are five times as many radar-focused peer-reviewed journal articles as hyperspectral-focused ones. Similarly, edge detection studies comprise only about 8% of image processing research, most of which is dedicated to the image processing techniques most closely associated with end results, such as image classification and feature extraction. Given the centrality of edge detection to mapping, that most important of geographic functions, improving the collective understanding of hyperspectral imagery edge detection methods constitutes a research objective aligned to the heart of the geoinformation sciences. Consequently, this dissertation endeavors to narrow the HSI edge detection research gap by advancing three HSI edge detection methods designed to leverage HSI's unique chemical identification capabilities in pursuit of generating accurate, high-quality edge planes. The Di Zenzo-based gradient edge detection algorithm, an innovative version of the Resmini HySPADE edge detection algorithm and a level set-based edge detection algorithm are tested against 15 traditional and non-traditional HSI datasets spanning a range of HSI data configurations, spectral resolutions, spatial resolutions, bandpasses and applications. This study empirically measures algorithm performance against Dr. John Canny's six criteria for a good edge operator: false positives, false negatives, localization, single-point response, robustness to noise and unbroken edges. The end state is a suite of spatial-spectral edge detection algorithms that produce satisfactory edge results against a range of hyperspectral data types applicable to a diverse set of earth remote sensing applications. This work also explores the concept of an edge within hyperspectral space, the relative importance of spatial and spectral resolutions as they pertain to HSI edge detection, and how effectively compressed HSI data improves edge detection results. The HSI edge detection experiments yielded valuable insights into the algorithms' strengths, weaknesses and optimal alignment to remote sensing applications. The gradient-based edge operator produced strong edge planes across a range of evaluation measures and applications, particularly with respect to false negatives, unbroken edges, urban mapping, vegetation mapping and oil spill mapping applications. False positives and uncompressed HSI data presented occasional challenges to the algorithm. The HySPADE edge operator produced satisfactory results with respect to localization, single-point response, oil spill mapping and trace chemical detection, and was challenged by false positives, declining spectral resolution and vegetation mapping applications. The level set edge detector produced high-quality edge planes for most tests and demonstrated strong performance with respect to false positives, single-point response, oil spill mapping and mineral mapping. False negatives were a regular challenge for the level set edge detection algorithm.
Finally, HSI data optimized for spectral information compression and noise was shown to improve edge detection performance across all three algorithms, while the gradient-based algorithm and HySPADE demonstrated significant robustness to declining spectral and spatial resolutions.
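The dissertation's operators are not reproduced here, but the Di Zenzo multiband gradient they build on is standard: per-band gradients are accumulated into a 2x2 structure tensor whose leading eigenvalue gives edge strength. A minimal sketch:

```python
import numpy as np

def di_zenzo_edge_strength(cube):
    """cube: (rows, cols, bands) hyperspectral image -> per-pixel edge strength."""
    gxx = gyy = gxy = 0.0
    for b in range(cube.shape[2]):
        gy, gx = np.gradient(cube[:, :, b].astype(float))
        gxx = gxx + gx * gx           # accumulate the spectral structure tensor
        gyy = gyy + gy * gy
        gxy = gxy + gx * gy
    # Largest eigenvalue of the 2x2 structure tensor at each pixel.
    return 0.5 * (gxx + gyy + np.sqrt((gxx - gyy) ** 2 + 4 * gxy ** 2))
```

Thresholding this strength map (plus non-maximum suppression) would yield the edge plane evaluated above.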
NASA Astrophysics Data System (ADS)
Gambacorta, A.; Nalli, N. R.; Tan, C.; Iturbide-Sanchez, F.; Wilson, M.; Zhang, K.; Xiong, X.; Barnet, C. D.; Sun, B.; Zhou, L.; Wheeler, A.; Reale, A.; Goldberg, M.
2017-12-01
The NOAA Unique Combined Atmospheric Processing System (NUCAPS) is the NOAA operational algorithm to retrieve thermodynamic and composition variables from hyperspectral thermal sounders such as CrIS, IASI and AIRS. The combined use of microwave sounders, such as ATMS, AMSU and MHS, enables full sounding of the atmospheric column under all-sky conditions. NUCAPS retrieval products are accessible in near real time (about 1.5 hours' delay) through the NOAA Comprehensive Large Array-data Stewardship System (CLASS). Since February 2015, NUCAPS retrievals have also been accessible via Direct Broadcast, with an unprecedented low latency of less than 0.5 hours. NUCAPS builds on a long-term, multi-agency investment in algorithm research and development. The uniqueness of this algorithm consists in a number of features that are key to providing highly accurate and stable atmospheric retrievals, suitable for real-time weather and air quality applications. Firstly, maximizing the use of the information content present in hyperspectral thermal measurements forms the foundation of the NUCAPS retrieval algorithm. Secondly, NUCAPS is a modular, name-list-driven design: it can process multiple hyperspectral infrared sounders (on Aqua, NPP, MetOp and the JPSS series) by means of the same retrieval software executable and underlying spectroscopy. Finally, a cloud-clearing algorithm and a synergetic use of microwave radiance measurements enable full vertical sounding of the atmosphere under all-sky regimes. As we transition toward improved hyperspectral missions, assessing retrieval skill and consistency across multiple platforms becomes a priority for real-time user applications. The focus of this presentation is a general introduction to the recent improvements delivered in the NUCAPS full spectral resolution upgrade and an overview of the lessons learned from the 2017 Hazardous Weather Testbed Spring Experiment. Test cases will be shown on the use of NPP and MetOp NUCAPS under pre-convective, capping inversion and dry-layer intrusion events.
[Quantitative Analysis of Heavy Metals in Water with LIBS Based on Signal-to-Background Ratio].
Hu, Li; Zhao, Nan-jing; Liu, Wen-qing; Fang, Li; Zhang, Da-hai; Wang, Yin; Meng, De Shuo; Yu, Yang; Ma, Ming-jun
2015-07-01
Many factors influence the precision and accuracy of quantitative analysis with LIBS technology. In-depth analysis shows that the background spectrum and the characteristic line spectrum follow approximately the same trend as temperature changes, so signal-to-background ratio (S/B) measurement combined with regression analysis can compensate for the spectral line intensity changes caused by system parameters such as laser power and the spectral efficiency of the receiving optics. Because the measurement data were limited and nonlinear, support vector machine (SVM) regression was used. The experimental results showed that the method can improve the stability and accuracy of quantitative analysis with LIBS; the relative standard deviation and average relative error on the test set were 4.7% and 9.5%, respectively. Data fitting based on the signal-to-background ratio (S/B) is less susceptible to matrix elements, the background spectrum, etc., and provides a data processing reference for real-time online LIBS quantitative analysis.
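A sketch of the S/B-plus-SVM-regression idea using scikit-learn, assuming line and background intensities have already been extracted into S/B ratios; file names and the parameter grid are illustrative:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

# Hypothetical training data: signal-to-background ratios of an analyte line
# versus reference concentrations from standard solutions.
sb_ratio = np.load("sb_ratios.npy").reshape(-1, 1)
conc = np.load("concentrations.npy")

search = GridSearchCV(SVR(kernel="rbf"),
                      {"C": [1, 10, 100], "gamma": ["scale", 0.1, 1.0]},
                      cv=5)
search.fit(sb_ratio, conc)
rel_err = np.abs(search.predict(sb_ratio) - conc) / conc
print("mean relative error:", rel_err.mean())
```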
GOME Total Ozone and Calibration Error Derived Using Version 8 TOMS Algorithm
NASA Technical Reports Server (NTRS)
Gleason, J.; Wellemeyer, C.; Qin, W.; Ahn, C.; Gopalan, A.; Bhartia, P.
2003-01-01
The Global Ozone Monitoring Experiment (GOME) is a hyper-spectral satellite instrument measuring the ultraviolet backscatter at relatively high spectral resolution. GOME radiances have been slit-averaged to emulate measurements of the Total Ozone Mapping Spectrometer (TOMS) made at discrete wavelengths and processed using the new TOMS Version 8 ozone algorithm. Compared to Differential Optical Absorption Spectroscopy (DOAS) techniques based on local structure in the Huggins bands, TOMS uses differential absorption between a pair of wavelengths that includes the local structure as well as the background continuum. This makes the TOMS algorithm more sensitive to ozone, but it also makes the algorithm more sensitive to instrument calibration errors. While calibration adjustments are not needed for fitting techniques like the DOAS employed in GOME algorithms, some adjustment is necessary when applying the TOMS algorithm to GOME. Using spectral discrimination at near-ultraviolet wavelength channels unabsorbed by ozone, the wavelength-dependent calibration drift of GOME is estimated and then checked using pair justification. In addition, the day-one calibration offset is estimated based on the residuals of the Version 8 TOMS algorithm. The estimated drift in the 2b detector of GOME is small through the first four years and then increases rapidly to +5% in normalized radiance at 331 nm relative to 385 nm by mid-2000. The 1b detector appears to be quite well behaved throughout this time period.
Constrained spectral clustering under a local proximity structure assumption
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri; Xu, Qianjun; des Jardins, Marie
2005-01-01
This work focuses on incorporating pairwise constraints into a spectral clustering algorithm. A new constrained spectral clustering method is proposed, as well as an active constraint acquisition technique and a heuristic for parameter selection. We demonstrate that our constrained spectral clustering method, CSC, works well when the data exhibits what we term local proximity structure.
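The CSC algorithm's details are beyond this abstract; a common baseline folds must-link/cannot-link pairs directly into the affinity matrix before ordinary spectral clustering, which the sketch below illustrates (a simplification, not the paper's method):

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics import pairwise_distances

def constrained_spectral(X, k, must_link, cannot_link, gamma=1.0):
    """Fold must-link/cannot-link pairs into an RBF affinity, then cluster."""
    D = pairwise_distances(X)
    A = np.exp(-gamma * D ** 2)
    for i, j in must_link:                 # force maximal similarity
        A[i, j] = A[j, i] = 1.0
    for i, j in cannot_link:               # force zero similarity
        A[i, j] = A[j, i] = 0.0
    return SpectralClustering(n_clusters=k, affinity="precomputed",
                              random_state=0).fit_predict(A)
```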
An efficient quantum algorithm for spectral estimation
NASA Astrophysics Data System (ADS)
Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth
2017-03-01
We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
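The quantum speedup cannot be sketched classically, but the underlying matrix pencil method can; a minimal classical implementation, assuming the number of damped sinusoids K is known (a real sinusoid contributes a conjugate pole pair):

```python
import numpy as np
from scipy.linalg import hankel

def matrix_pencil(y, K, dt):
    """Estimate frequencies/damping factors of K damped sinusoids from y."""
    N = y.size
    L = N // 2                                   # pencil parameter
    Y = hankel(y[:N - L], y[N - L - 1:])         # (N-L) x (L+1) data matrix
    Y0, Y1 = Y[:, :-1], Y[:, 1:]
    z = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
    z = z[np.argsort(-np.abs(z))][:K]            # keep the K dominant poles
    freq = np.angle(z) / (2 * np.pi * dt)
    damping = np.log(np.abs(z)) / dt
    return freq, damping

# Example: one damped sinusoid at 5 Hz with damping -2 s^-1 (pole pair, K=2).
t = np.arange(0, 1, 1e-2)
y = np.exp(-2 * t) * np.cos(2 * np.pi * 5 * t)
print(matrix_pencil(y, 2, 1e-2))                 # approx +/-5 Hz, -2 s^-1
```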
NASA Astrophysics Data System (ADS)
Mechlem, Korbinian; Ehn, Sebastian; Sellerer, Thorsten; Pfeiffer, Franz; Noël, Peter B.
2017-03-01
In spectral computed tomography (spectral CT), the additional information about the energy dependence of attenuation coefficients can be exploited to generate material selective images. These images have found applications in various areas such as artifact reduction, quantitative imaging or clinical diagnosis. However, significant noise amplification on material decomposed images remains a fundamental problem of spectral CT. Most spectral CT algorithms separate the process of material decomposition and image reconstruction. Separating these steps is suboptimal because the full statistical information contained in the spectral tomographic measurements cannot be exploited. Statistical iterative reconstruction (SIR) techniques provide an alternative, mathematically elegant approach to obtaining material selective images with improved tradeoffs between noise and resolution. Furthermore, image reconstruction and material decomposition can be performed jointly. This is accomplished by a forward model which directly connects the (expected) spectral projection measurements and the material selective images. To obtain this forward model, detailed knowledge of the different photon energy spectra and the detector response was assumed in previous work. However, accurately determining the spectrum is often difficult in practice. In this work, a new algorithm for statistical iterative material decomposition is presented. It uses a semi-empirical forward model which relies on simple calibration measurements. Furthermore, an efficient optimization algorithm based on separable surrogate functions is employed. This partially negates one of the major shortcomings of SIR, namely high computational cost and long reconstruction times. Numerical simulations and real experiments show strongly improved image quality and reduced statistical bias compared to projection-based material decomposition.
Spectral Unmixing Based Construction of Lunar Mineral Abundance Maps
NASA Astrophysics Data System (ADS)
Bernhardt, V.; Grumpe, A.; Wöhler, C.
2017-07-01
In this study we apply a nonlinear spectral unmixing algorithm to a nearly global lunar spectral reflectance mosaic derived from hyper-spectral image data acquired by the Moon Mineralogy Mapper (M3) instrument. Corrections for topographic effects and for thermal emission were performed. A set of 19 laboratory-based reflectance spectra of lunar samples published by the Lunar Soil Characterization Consortium (LSCC) were used as a catalog of potential endmember spectra. For a given spectrum, the multi-population population-based incremental learning (MPBIL) algorithm was used to determine the subset of endmembers actually contained in it. However, as the MPBIL algorithm is computationally expensive, it cannot be applied to all pixels of the reflectance mosaic. Hence, the reflectance mosaic was clustered into a set of 64 prototype spectra, and the MPBIL algorithm was applied to each prototype spectrum. Each pixel of the mosaic was assigned to the most similar prototype, and the set of endmembers previously determined for that prototype was used for pixel-wise nonlinear spectral unmixing using the Hapke model, implemented as linear unmixing of the single-scattering albedo spectrum. This procedure yields maps of the fractional abundances of the 19 endmembers. Based on the known modal abundances of a variety of mineral species in the LSCC samples, a conversion from endmember abundances to mineral abundances was performed. We present maps of the fractional abundances of plagioclase, pyroxene and olivine and compare our results with previously published lunar mineral abundance maps.
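The MPBIL endmember selection is not reproduced here; the per-pixel abundance step, linear unmixing with nonnegativity and a (soft) sum-to-one constraint, can be sketched with the classic NNLS augmentation:

```python
import numpy as np
from scipy.optimize import nnls

def fcls(pixel, endmembers, delta=1e3):
    """Fully constrained least squares: abundances >= 0, sum ~ 1.

    pixel      : spectrum (n_bands,), e.g. single-scattering albedo
    endmembers : (n_bands, n_endmembers) matrix of endmember spectra
    The sum-to-one constraint is enforced softly by an appended row
    weighted by delta.
    """
    E = np.vstack([endmembers, delta * np.ones(endmembers.shape[1])])
    b = np.append(pixel, delta)
    abundances, _ = nnls(E, b)
    return abundances
```

Running this in single-scattering-albedo space is what makes the Hapke-model unmixing linear.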
Inverse analysis of non-uniform temperature distributions using multispectral pyrometry
NASA Astrophysics Data System (ADS)
Fu, Tairan; Duan, Minghao; Tian, Jibin; Shi, Congling
2016-05-01
Optical diagnostics can be used to obtain sub-pixel temperature information in remote sensing. A multispectral pyrometry method was developed using multiple spectral radiation intensities to deduce the temperature area distribution in the measurement region. The method transforms a spot multispectral pyrometer with a fixed field of view into a pyrometer with enhanced spatial resolution that can give sub-pixel temperature information from a "one pixel" measurement region. A temperature area fraction function was defined to represent the spatial temperature distribution in the measurement region. The method is illustrated by simulations of a multispectral pyrometer with a spectral range of 8.0-13.0 μm measuring a non-isothermal region with a temperature range of 500-800 K in the spot pyrometer field of view. The inverse algorithm for the sub-pixel temperature distribution (temperature area fractions) in the "one pixel" verifies this multispectral pyrometry method. The results show that an improved Levenberg-Marquardt algorithm is effective for this ill-posed inverse problem with relative errors in the temperature area fractions of (-3%, 3%) for most of the temperatures. The analysis provides a valuable reference for the use of spot multispectral pyrometers for sub-pixel temperature distributions in remote sensing measurements.
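On a fixed temperature grid the model is linear in the area fractions, so a bounded linear least-squares solve (used here in place of the paper's improved Levenberg-Marquardt) already illustrates the inversion; constants and grids below are illustrative:

```python
import numpy as np
from scipy.optimize import lsq_linear

C1, C2 = 1.191e8, 1.4388e4      # radiation constants: W um^4/(m^2 sr), um K

def planck(lam_um, T):
    """Planck spectral radiance at wavelength lam_um (um), temperature T (K)."""
    return C1 / (lam_um ** 5 * (np.exp(C2 / (lam_um * T)) - 1.0))

lam = np.linspace(8.0, 13.0, 30)              # spectral channels, um
T_grid = np.linspace(500, 800, 16)            # candidate temperatures, K
B = np.array([[planck(l, T) for T in T_grid] for l in lam])

# Synthetic "one pixel" measurement: 70% of the area at 550 K, 30% at 750 K.
y = 0.7 * planck(lam, 550) + 0.3 * planck(lam, 750)

# Temperature area fractions: nonnegative, softly constrained to sum to one.
A = np.vstack([B, 1e4 * np.ones(T_grid.size)])
res = lsq_linear(A, np.append(y, 1e4), bounds=(0, 1))
fractions = res.x                             # temperature area fraction function
```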
Swain, Ratnakar; Sahoo, Bhabagrahi
2017-05-01
For river water quality monitoring at 30 m × 1-day spatio-temporal scales, a spatial and temporal adaptive reflectance fusion model (STARFM) is developed for estimating turbidity (Tu), total suspended solids (TSS), and six heavy metals (HM) - iron, zinc, copper, chromium, lead and cadmium - by blending the Moderate Resolution Imaging Spectroradiometer (MODIS) and Landsat (Ls) spectral bands. A combination of regression analysis and genetic algorithm (GA) techniques is applied to develop spectral relationships between Tu and Ls, TSS and Tu, and each HM and TSS. The STARFM algorithm and all the developed relationship models are evaluated satisfactorily by various performance measures to develop heavy metal pollution index-based vulnerability maps at 1-km resolution for the Brahmani River in eastern India. A Monte Carlo simulation-based analysis of the developed formulations reveals that the uncertainty in estimating Zn and Cd is the minimum (1.04%) and the maximum (5.05%), respectively. Hence, the remote sensing based approach developed herein can effectively be used in many world rivers for real-time monitoring of heavy metal pollution. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chauhan, H.; Krishna Mohan, B.
2014-11-01
The present study was undertaken with the objective of checking the effectiveness of spectral similarity measures for developing precise crop spectra from collected hyperspectral field spectra. In multispectral and hyperspectral remote sensing, pixel classification is obtained by statistical comparison (by means of spectral similarity) of known field or library spectra to unknown image spectra. Though these algorithms are readily used, little emphasis has been placed on using spectral similarity measures to select precise crop spectra from a set of field spectra. Conventionally, crop spectra are developed after rejecting outliers based only on broad-spectrum analysis. Here a successful attempt has been made to develop precise crop spectra based on spectral similarity. As the use of unevaluated data leads to uncertainty in image classification, it is crucial to evaluate the data; hence, in contrast to the conventional method, the field spectra were screened for precision to serve the purpose of the present research work. The effectiveness of the precise field spectra thus developed was evaluated by spectral discrimination measures, which yielded higher discrimination values than spectra developed conventionally. The overall classification accuracy is 51.89% for the image classified with conventionally selected field spectra and 75.47% for the image classified with field spectra selected precisely on the basis of spectral similarity. KHAT values are 0.37 and 0.62, and Z values are 2.77 and 9.59, for the images classified using the conventional and precise field spectra, respectively. The considerably higher classification accuracy, KHAT and Z values show the promise of a new approach to field spectra selection based on spectral similarity measures.
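The abstract does not name the similarity measure used; the spectral angle mapper (SAM) is one common choice. A sketch of similarity-based screening of field spectra, with an illustrative angle threshold:

```python
import numpy as np

def spectral_angle(s1, s2):
    """Spectral Angle Mapper: angle (radians) between two spectra."""
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def select_precise_spectra(field_spectra, max_angle=0.05):
    """Keep field spectra within max_angle of the ensemble mean spectrum."""
    mean = field_spectra.mean(axis=0)
    angles = np.array([spectral_angle(s, mean) for s in field_spectra])
    return field_spectra[angles <= max_angle]
```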
Resonances of a Schwarzschild black hole.
NASA Astrophysics Data System (ADS)
Bachelot, A.; Motet-Bachelot, A.
1993-09-01
This paper is devoted to theoretical and computational investigations of the scattering frequencies of scalar, electromagnetic and gravitational waves around a spherical black hole. The authors adopt a time-dependent approach: construction of wave operators for the hyperbolic Regge-Wheeler equation; asymptotic completeness; outgoing and incoming spectral representations; meromorphic continuation of the Heisenberg matrix; approximation by damping and cut-off of the potentials; and interpretation of the semigroup Z(t) in the framework of the membrane paradigm. They develop a new procedure for the computation of the resonances by spectral analysis of the transient scattered wave, based on Prony's algorithm.
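A minimal sketch of the classical Prony step used above: fit complex exponentials to a uniformly sampled transient (e.g., a ringdown) so the resonances appear as the complex exponents:

```python
import numpy as np

def prony(y, K, dt):
    """Classical Prony fit of K complex exponentials to uniform samples
    y[n] = sum_k a_k z_k^n (e.g., a transient scattered wave)."""
    N = y.size
    # Linear prediction: y[n] = -sum_{m=1..K} c_m y[n-m], solved by least squares.
    A = np.column_stack([y[K - m:N - m] for m in range(1, K + 1)])
    c = np.linalg.lstsq(A, -y[K:N], rcond=None)[0]
    z = np.roots(np.concatenate(([1.0], c)))       # exponential poles
    # Amplitudes by least squares on the Vandermonde system V[n, k] = z_k^n.
    V = np.vander(z, N, increasing=True).T
    a = np.linalg.lstsq(V, y.astype(complex), rcond=None)[0]
    s = np.log(z) / dt         # s = -damping + i * 2*pi*frequency (resonances)
    return s, a
```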
[Lithology feature extraction of CASI hyperspectral data based on fractal signal algorithm].
Tang, Chao; Chen, Jian-Ping; Cui, Jing; Wen, Bo-Tao
2014-05-01
Hyperspectral data are characterized by the combination of image and spectral information and by large data volume; dimension reduction is thus the main research direction, with band selection and feature extraction the primary methods used for this objective. In the present article, the authors tested methods for lithology feature extraction from hyperspectral data. Based on the self-similarity of hyperspectral data, the authors explored the application of a fractal algorithm to lithology feature extraction from CASI hyperspectral data. The "carpet method" was corrected and then applied to calculate the fractal value of every pixel in the hyperspectral data. The results show that the fractal information highlights the exposed bedrock lithology better than the original hyperspectral data. The fractal signal and the characteristic scale are influenced by the spectral curve shape, the initial scale selection and the iteration step. At present, research on the fractal signal of spectral curves is rare, implying the necessity of further quantitative analysis and investigation of its physical implications.
Zheng, Hai-ming; Li, Guang-jie; Wu, Hao
2015-06-01
Differential optical absorption spectroscopy (DOAS) is a commonly used atmospheric pollution monitoring method. Denoising the monitored spectral data improves the inversion accuracy. The Fourier transform filtering method can effectively filter out the noise in spectral data, but the algorithm itself introduces errors. In this paper, a chirp z-transform method is put forward. By locally refining the Fourier transform spectrum, it retains the denoising effect of the Fourier transform while compensating for the error of the algorithm, which further improves the inversion accuracy. The paper studies the concentration retrieval of SO2 and NO2. The results show that simple division causes larger errors and is not very stable, and that the chirp z-transform is more accurate than the Fourier transform. Frequency spectrum analysis shows that the Fourier transform cannot solve the distortion and weakening problems of the characteristic absorption spectrum, whereas the chirp z-transform enables fine reconstruction of a specific part of the frequency spectrum.
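SciPy ships a chirp z-transform (scipy.signal.czt, available in SciPy >= 1.8) that can evaluate the spectrum densely over just the band of interest; a sketch, with the band edges as parameters:

```python
import numpy as np
from scipy.signal import czt   # chirp z-transform (SciPy >= 1.8)

def zoom_spectrum(x, fs, f1, f2, m):
    """Evaluate the spectrum of x on m points spanning [f1, f2] Hz only,
    refining the band of interest instead of the whole FFT grid."""
    w = np.exp(-2j * np.pi * (f2 - f1) / (m * fs))   # spacing between points
    a = np.exp(2j * np.pi * f1 / fs)                 # starting point on circle
    X = czt(x, m=m, w=w, a=a)
    freqs = f1 + np.arange(m) * (f2 - f1) / m
    return freqs, X
```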
NASA Technical Reports Server (NTRS)
Weisz, Elisabeth; Li, Jun; Li, Jinlong; Zhou, Daniel K.; Huang, Hung-Lung; Goldberg, Mitchell D.; Yang, Ping
2007-01-01
High-spectral-resolution measurements from the Atmospheric Infrared Sounder (AIRS) onboard the EOS (Earth Observing System) Aqua satellite provide unique information about the atmospheric state and about surface and cloud properties. This paper presents an AIRS-alone single field-of-view (SFOV) retrieval algorithm to simultaneously retrieve temperature, humidity and ozone profiles under all weather conditions, as well as cloud-top pressure (CTP) and cloud optical thickness (COT) under cloudy skies. For optically thick cloud conditions the above-cloud soundings are derived, whereas for clear skies and optically thin cloud conditions the profiles are retrieved from 0.005 hPa down to the earth's surface. Initial validation has been conducted using the operational MODIS (Moderate Resolution Imaging Spectroradiometer) product, ECMWF (European Centre for Medium-Range Weather Forecasts) analysis fields and radiosonde observations (RAOBs). These inter-comparisons clearly demonstrate the potential of this algorithm to process data from high-spectral-resolution infrared (IR) sounder instruments.
Huang, Tao; Li, Xiao-yu; Xu, Meng-ling; Jin, Rui; Ku, Jing; Xu, Sen-miao; Wu, Zhen-zhong
2015-01-01
The quality of potatoes is directly related to their edible and industrial value. Hollow heart of potato, a physiological disease occurring inside the tuber, is difficult to detect. This paper puts forward a non-destructive detection method using semi-transmission hyperspectral imaging with a support vector machine (SVM) to detect hollow heart of potato. Compared to reflection and transmission hyperspectral images, semi-transmission hyperspectral images are clearer and contain internal quality information of agricultural products. In this study, 224 potato samples (149 normal and 75 hollow) were selected as the research object, and a semi-transmission hyperspectral image acquisition system was constructed to acquire hyperspectral images (390-1040 nm) of the potato samples; the average spectra of regions of interest were then extracted for spectral characteristics analysis. Normalization was used to preprocess the original spectra, and a prediction model was developed based on SVM using all wavebands; the recognition rate on the test set was only 87.5%. In order to simplify the model, the competitive adaptive reweighted sampling algorithm (CARS) and the successive projections algorithm (SPA) were utilized to select important variables from all 520 spectral variables, and 8 variables were selected (454, 601, 639, 664, 748, 827, 874 and 936 nm). A recognition rate of 94.64% on the test set was obtained using the 8 variables to develop the SVM model. Parameter optimization algorithms, including the artificial fish swarm algorithm (AFSA), the genetic algorithm (GA) and a grid search algorithm, were used to optimize the SVM model parameters: penalty parameter c and kernel parameter g. After comparative analysis, AFSA, a new bionic optimization algorithm based on the foraging behavior of fish swarms, was shown to give the optimal model parameters (c = 10.6591, g = 0.3497), and a recognition accuracy of 100% was obtained for the AFSA-SVM model. The results indicate that combining semi-transmission hyperspectral imaging technology with CARS-SPA and AFSA-SVM can accurately detect hollow heart of potato, and also provide technical support for rapid non-destructive detection of hollow heart of potato.
2007-03-01
QPSK: Quadrature Phase-Shift Keying; RV: Random Variable; SHAC: Single-Hop-Observation Auto-Correlation; SINR: Signal-to-Interference... The fast Fourier transform (FFT) accumulation method and the strip spectral correlation algorithm subdivide the support region in the bi-frequency plane ... into diamond shapes, while the strip spectral correlation algorithm subdivides the region into strips. Each strip covers a number of the FFT accumulation ...
Designing a practical system for spectral imaging of skylight.
López-Alvarez, Miguel A; Hernández-Andrés, Javier; Romero, Javier; Lee, Raymond L
2005-09-20
In earlier work [J. Opt. Soc. Am. A 21, 13-23 (2004)], we showed that a combination of linear models and optimum Gaussian sensors obtained by an exhaustive search can recover daylight spectra reliably from broadband sensor data. Thus our algorithm and sensors could be used to design an accurate, relatively inexpensive system for spectral imaging of daylight. Here we improve our simulation of the multispectral system by (1) considering the different kinds of noise inherent in electronic devices such as charge-coupled devices (CCDs) or complementary metal-oxide semiconductor (CMOS) sensors and (2) extending our research to a different kind of natural illumination, skylight. Because exhaustive searches are computationally expensive, here we switch to a simulated annealing algorithm to define the optimum sensors for recovering skylight spectra. The annealing algorithm requires us to minimize a single cost function, and so we develop one that calculates both the spectral and colorimetric similarity of any pair of skylight spectra. We show that the simulated annealing algorithm yields results similar to the exhaustive search but with much less computational effort. Our technique lets us study the properties of optimum sensors in the presence of noise, one side effect of which is that adding more sensors may not improve the spectral recovery.
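The authors' cost combines spectral and colorimetric error; the simplified sketch below optimizes Gaussian sensor centers and widths with SciPy's generalized simulated annealing against spectral RMS recovery error only, with the training file hypothetical:

```python
import numpy as np
from scipy.optimize import dual_annealing

wl = np.linspace(380, 780, 81)                 # wavelength grid, nm
spectra = np.load("skylight_training.npy")     # hypothetical (n, 81) training set

def gaussian_sensors(params):
    """params = (center, width) per sensor -> (n_sensors, n_wl) responsivities."""
    p = np.asarray(params).reshape(-1, 2)
    return np.exp(-0.5 * ((wl - p[:, :1]) / p[:, 1:]) ** 2)

def cost(params):
    G = gaussian_sensors(params)
    R = spectra @ G.T                          # simulated broadband responses
    W = np.linalg.pinv(R) @ spectra            # linear recovery operator
    return np.sqrt(np.mean((R @ W - spectra) ** 2))   # spectral RMS error

# Three sensors: centers in [380, 780] nm, widths in [10, 150] nm.
bounds = [(380, 780), (10, 150)] * 3
result = dual_annealing(cost, bounds, seed=0)
```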
NASA Technical Reports Server (NTRS)
Arduini, R. F.; Aherron, R. M.; Samms, R. W.
1984-01-01
A computational model of the deterministic and stochastic processes involved in multispectral remote sensing was designed to evaluate the performance of sensor systems and data processing algorithms for spectral feature classification. Accuracy in distinguishing between categories of surfaces or between specific types is developed as a means to compare sensor systems and data processing algorithms. The model allows studies to be made of the effects of variability of the atmosphere and of surface reflectance, as well as the effects of channel selection and sensor noise. Examples of these effects are shown.
Preconditioned Mixed Spectral Element Methods for Elasticity and Stokes Problems
NASA Technical Reports Server (NTRS)
Pavarino, Luca F.
1996-01-01
Preconditioned iterative methods for the indefinite systems obtained by discretizing the linear elasticity and Stokes problems with mixed spectral elements in three dimensions are introduced and analyzed. The resulting stiffness matrices have the structure of saddle point problems with a penalty term, which is associated with the Poisson ratio for elasticity problems or with stabilization techniques for Stokes problems. The main results of this paper show that the convergence rate of the resulting algorithms is independent of the penalty parameter and the number of spectral elements, and mildly dependent on the spectral degree via the inf-sup constant. The preconditioners proposed for the whole indefinite system are block-diagonal and block-triangular. Numerical experiments presented in the final section show that these algorithms are a practical and efficient strategy for the iterative solution of the indefinite problems arising from mixed spectral element discretizations of elliptic systems.
Spectral Unmixing With Multiple Dictionaries
NASA Astrophysics Data System (ADS)
Cohen, Jeremy E.; Gillis, Nicolas
2018-02-01
Spectral unmixing aims at recovering the spectral signatures of materials, called endmembers, mixed in a hyperspectral or multispectral image, along with their abundances. A typical assumption is that the image contains one pure pixel per endmember, in which case spectral unmixing reduces to identifying these pixels. Many fully automated methods have been proposed in recent years, but little work has been done to allow users to select areas where pure pixels are present, either manually or using a segmentation algorithm. Additionally, in a non-blind approach, several spectral libraries may be available rather than a single one, with a fixed number (or an upper or lower bound) of endmembers to choose from each. In this paper, we propose a multiple-dictionary constrained low-rank matrix approximation model that addresses these two problems. We propose an algorithm to compute this model, dubbed M2PALS, and its performance is discussed on both synthetic and real hyperspectral images.
Wavelet compression techniques for hyperspectral data
NASA Technical Reports Server (NTRS)
Evans, Bruce; Ringer, Brian; Yeates, Mathew
1994-01-01
Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. In the first two cases, a wavelet transform coder was used for the two-dimensional compression. The third case used a three dimensional extension of this same algorithm.
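The third approach can be sketched with PyWavelets: a 3-D wavelet transform of the cube followed by coefficient thresholding, the latter standing in for the quantize-and-entropy-code stage of the actual coder:

```python
import numpy as np
import pywt

def compress_cube(cube, wavelet="db2", level=2, keep=0.05):
    """Lossy 3-D wavelet compression of a hyperspectral cube (x, y, band):
    transform, keep the largest `keep` fraction of coefficients, invert."""
    coeffs = pywt.wavedecn(cube, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr[np.abs(arr) < thresh] = 0.0            # stand-in for quantize/encode
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedecn")
    return pywt.waverecn(coeffs, wavelet)
```

Because the spectral dimension is highly redundant, the 3-D transform concentrates energy into far fewer coefficients than band-by-band 2-D coding.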
NASA Astrophysics Data System (ADS)
Xie, ChengJun; Xu, Lin
2008-03-01
This paper presents a new algorithm based on a mixing transform to eliminate redundancy: SHIRCT and a subtraction mixing transform are used to eliminate spectral redundancy, and the 2D-CDF(2,2) DWT is used to eliminate spatial redundancy. This transform is convenient for hardware realization, since it can be fully implemented by add and shift operations, and its redundancy elimination is better than that of the (1D+2D) CDF(2,2) DWT. An improved SPIHT+CABAC mixed compression coding algorithm is used to implement the compression coding. The experimental results show that in lossless image compression applications the effect of this method is slightly better than the result acquired using (1D+2D) CDF(2,2) DWT + improved SPIHT+CABAC, and much better than the results acquired by JPEG-LS, WinZip, ARJ, DPCM, the research achievements of a research team of the Chinese Academy of Sciences, NMST and MST. Using the hyperspectral image Canal from the American JPL laboratory as the data set for the lossless compression test, on average the compression ratio of this algorithm exceeds the above algorithms by 42%, 37%, 35%, 30%, 16%, 13% and 11%, respectively.
Mandarin Chinese Tone Identification in Cochlear Implants: Predictions from Acoustic Models
Morton, Kenneth D.; Torrione, Peter A.; Throckmorton, Chandra S.; Collins, Leslie M.
2015-01-01
It has been established that current cochlear implants do not supply adequate spectral information for perception of tonal languages. Comprehension of a tonal language, such as Mandarin Chinese, requires recognition of lexical tones. New strategies of cochlear stimulation such as variable stimulation rate and current steering may provide the means of delivering more spectral information and thus may provide the auditory fine structure required for tone recognition. Several cochlear implant signal processing strategies are examined in this study, the continuous interleaved sampling (CIS) algorithm, the frequency amplitude modulation encoding (FAME) algorithm, and the multiple carrier frequency algorithm (MCFA). These strategies provide different types and amounts of spectral information. Pattern recognition techniques can be applied to data from Mandarin Chinese tone recognition tasks using acoustic models as a means of testing the abilities of these algorithms to transmit the changes in fundamental frequency indicative of the four lexical tones. The ability of processed Mandarin Chinese tones to be correctly classified may predict trends in the effectiveness of different signal processing algorithms in cochlear implants. The proposed techniques can predict trends in performance of the signal processing techniques in quiet conditions but fail to do so in noise. PMID:18706497
Rainfall Estimation over the Nile Basin using Multi-Spectral, Multi-Instrument Satellite Techniques
NASA Astrophysics Data System (ADS)
Habib, E.; Kuligowski, R.; Sazib, N.; Elshamy, M.; Amin, D.; Ahmed, M.
2012-04-01
Management of Egypt's Aswan High Dam is critical not only for flood control on the Nile but also for ensuring adequate water supplies for most of Egypt, since rainfall is scarce over the vast majority of its land area. However, reservoir inflow is driven by rainfall over Sudan, Ethiopia, Uganda, and several other countries from which routine rain gauge data are sparse. Satellite-derived estimates of rainfall offer a much more detailed and timely set of data to form a basis for decisions on the operation of the dam. A single-channel infrared (IR) algorithm is currently in operational use at the Egyptian Nile Forecast Center (NFC). In this study, the authors report on the adaptation of a multi-spectral, multi-instrument satellite rainfall estimation algorithm (Self-Calibrating Multivariate Precipitation Retrieval, SCaMPR) for operational application by the NFC over the Nile Basin. The algorithm uses a set of rainfall predictors that come from multi-spectral infrared cloud-top observations and self-calibrates them to a set of predictands that come from the more accurate, but less frequent, microwave (MW) rain rate estimates. For application over the Nile Basin, the SCaMPR algorithm uses multiple satellite IR channels that have recently become available to the NFC from the Spinning Enhanced Visible and Infrared Imager (SEVIRI). Microwave rain rates are acquired from multiple sources such as the Special Sensor Microwave/Imager (SSM/I), the Special Sensor Microwave Imager and Sounder (SSMIS), the Advanced Microwave Sounding Unit (AMSU), the Advanced Microwave Scanning Radiometer on EOS (AMSR-E), and the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). The algorithm has two main steps: rain/no-rain separation using discriminant analysis, and rain rate estimation using stepwise linear regression. We test two modes of algorithm calibration: real-time calibration with continuous updates of the coefficients as new MW rain rates arrive, and calibration using static coefficients derived from past IR-MW observations. We also compare the SCaMPR algorithm to other global-scale satellite rainfall algorithms, such as the 'Tropical Rainfall Measuring Mission (TRMM) and other sources' (TRMM-3B42) product and the National Oceanic and Atmospheric Administration Climate Prediction Center (NOAA-CPC) CMORPH product. The algorithm has several potential future applications, such as improving the performance of hydrologic forecasting models over the Nile Basin, and using the enhanced rainfall datasets and better-calibrated hydrologic models to assess the impacts of climate change on the region's water availability using global circulation models and regional climate models.
Chandra Interactive Analysis of Observations (CIAO)
NASA Technical Reports Server (NTRS)
Dobrzycki, Adam
2000-01-01
The Chandra (formerly AXAF) telescope, launched on July 23, 1999, provides X-ray data with unprecedented spatial and spectral resolution. As part of the Chandra scientific support, the Chandra X-ray Observatory Center provides a new data analysis system, CIAO ("Chandra Interactive Analysis of Observations"). We will present the main components of the system: "First Look" analysis; SHERPA, a multi-dimensional, multi-mission modeling and fitting application; the Chandra Imaging and Plotting System; the Detect package (source detection algorithms); and the DM package (generic data manipulation tools). We will set up a demonstration of the portable version of the system and show examples of Chandra data analysis.
Speech enhancement on smartphone voice recording
NASA Astrophysics Data System (ADS)
Tris Atmaja, Bagus; Nur Farid, Mifta; Arifianto, Dhany
2016-11-01
Speech enhancement is a challenging task in audio signal processing: the quality of the targeted speech signal must be enhanced while other noises are suppressed. The field of speech enhancement algorithms has grown rapidly, from spectral subtraction and Wiener filtering to spectral amplitude MMSE estimation and Non-negative Matrix Factorization (NMF). The smartphone, as a revolutionary device, is now used in all aspects of life, including journalism, both personally and professionally. Although many smartphones have two microphones (main and rear), only the main microphone is widely used for voice recording, which makes single-channel methods such as the NMF algorithm well suited to this speech enhancement task. This paper evaluates speech enhancement on smartphone voice recordings using the algorithms mentioned previously. We also extend the NMF algorithm to Kullback-Leibler NMF with supervised separation. The last algorithm shows improved results compared to the others, as evaluated by spectrogram inspection and PESQ score.
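A minimal numpy sketch of Kullback-Leibler NMF with supervised separation follows. The multiplicative updates are the standard ones for the generalized KL divergence; the random training spectrograms, the ranks, and the Wiener-like mask are illustrative assumptions rather than the paper's exact pipeline.

```python
import numpy as np

def kl_nmf(V, rank, iters=200, W=None, seed=0):
    """NMF by multiplicative updates minimizing generalized KL divergence.
    If a dictionary W is supplied, it is held fixed (supervised mode)."""
    rng = np.random.default_rng(seed)
    eps = 1e-9
    fixed_W = W is not None
    if not fixed_W:
        W = rng.random((V.shape[0], rank)) + eps
    H = rng.random((W.shape[1], V.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ (V / (W @ H + eps))) / (W.sum(axis=0)[:, None] + eps)
        if not fixed_W:
            W *= ((V / (W @ H + eps)) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H

# Train separate dictionaries on clean-speech and noise magnitude spectrograms
# (random stand-ins here), then decompose the mixture with both held fixed.
rng = np.random.default_rng(1)
S_train, N_train = rng.random((257, 120)), rng.random((257, 120))
V_mix = rng.random((257, 60))

Ws, _ = kl_nmf(S_train, rank=20)
Wn, _ = kl_nmf(N_train, rank=20)
W = np.hstack([Ws, Wn])
_, H = kl_nmf(V_mix, rank=W.shape[1], W=W)       # update activations only

speech = Ws @ H[:20]                             # speech component estimate
enhanced = V_mix * speech / (W @ H + 1e-9)       # Wiener-like soft mask
```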
Jacobi spectral Galerkin method for elliptic Neumann problems
NASA Astrophysics Data System (ADS)
Doha, E.; Bhrawy, A.; Abd-Elhameed, W.
2009-01-01
This paper is concerned with fast spectral-Galerkin Jacobi algorithms for solving one- and two-dimensional elliptic equations with homogeneous and nonhomogeneous Neumann boundary conditions. The paper extends the algorithms proposed by Shen (SIAM J Sci Comput 15:1489-1505, 1994) and Auteri et al. (J Comput Phys 185:427-444, 2003), based on Legendre polynomials, to Jacobi polynomials with arbitrary α and β. The key to the efficiency of our algorithms is to construct appropriate basis functions with zero slope at the endpoints, which lead to systems with sparse matrices for the discrete variational formulations. The direct solution algorithm developed for the homogeneous Neumann problem in two dimensions relies upon a tensor product process. Nonhomogeneous Neumann data are accounted for by means of a lifting. Numerical results indicating the high accuracy and effectiveness of these algorithms are presented.
Optimizing interconnections to maximize the spectral radius of interdependent networks
NASA Astrophysics Data System (ADS)
Chen, Huashan; Zhao, Xiuyan; Liu, Feng; Xu, Shouhuai; Lu, Wenlian
2017-03-01
The spectral radius (i.e., the largest eigenvalue) of the adjacency matrix of a complex network is an important quantity that governs the behavior of many dynamic processes on the network, such as synchronization and epidemics. Studies in the literature have focused on bounding this quantity. In this paper, we investigate how to maximize the spectral radius of interdependent networks by optimally placing k internetwork connections (interconnections for short). We derive formulas for the estimation of the spectral radius of interdependent networks and employ these results to develop a suite of algorithms that are applicable to different parameter regimes. In particular, a simple algorithm is to link the k nodes with the k largest eigenvector centralities in one network to the node in the other network with a certain property related to both networks. We demonstrate the applicability of our algorithms via extensive simulations. We discuss the physical implications of the results, including how the optimal interconnections can more effectively decrease the threshold of epidemic spreading in the susceptible-infected-susceptible model and the threshold of synchronization of coupled Kuramoto oscillators.
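The "simple algorithm" mentioned above reduces, on the first network's side, to ranking nodes by eigenvector centrality. The sketch below computes the centralities by power iteration and picks the top k endpoints; the random graph and the value of k are illustrative, and the rule for choosing the target node in the second network is not reproduced here.

```python
import numpy as np

def eigenvector_centrality(A, iters=1000, tol=1e-10):
    """Leading eigenvector of the adjacency matrix via power iteration."""
    x = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):
        x_new = A @ x
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

rng = np.random.default_rng(3)
A = (rng.random((50, 50)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T                 # symmetric, no self-loops
k = 5
top_k = np.argsort(eigenvector_centrality(A))[-k:]
print("endpoints for the k interconnections:", top_k)
```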
Multitaper scan-free spectrum estimation using a rotational shear interferometer.
Lepage, Kyle; Thomson, David J; Kraut, Shawn; Brady, David J
2006-05-01
Multitaper methods for scan-free spectrum estimation using a rotational shear interferometer are investigated. Before source spectra can be estimated, the sources must be detected. A source detection algorithm based upon the multitaper F-test is proposed. The algorithm is simulated with additive, white Gaussian detector noise. A source with a signal-to-noise ratio (SNR) of 0.71 is detected 2.9 degrees from a source with a SNR of 70.1, with a significance level of 10^-4, approximately 4 orders of magnitude more significant than the source detection obtained with a standard detection algorithm. Interpolation and the use of prewhitening filters are investigated in the context of rotational shear interferometer (RSI) source spectra estimation. Finally, a multitaper spectrum estimator is proposed, simulated, and compared with untapered estimates. The multitaper estimate is found via simulation to distinguish a spectral feature with a SNR of 1.6 near a large spectral feature; this feature is not distinguished by the untapered spectrum estimate. The findings are consistent with the strong capability of the multitaper estimate to reduce out-of-band spectral leakage.
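For readers who want the basic ingredient of the estimator, here is a minimal multitaper spectrum estimate using DPSS (Slepian) tapers from SciPy. The F-test detection step and the RSI-specific interferometric processing are not reproduced, and the test signal is an assumed toy example.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, NW=4, K=7):
    """Average the K eigenspectra of x tapered by DPSS sequences."""
    tapers = dpss(len(x), NW, Kmax=K)          # shape (K, len(x))
    eigenspectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return eigenspectra.mean(axis=0)

fs = 1000.0
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 60 * t) + 0.5 * np.random.default_rng(4).normal(size=t.size)
psd = multitaper_psd(x)
print("peak bin frequency:", np.fft.rfftfreq(t.size, 1 / fs)[psd.argmax()])
```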
Hyperspectral data analysis procedures with reduced sensitivity to noise
NASA Technical Reports Server (NTRS)
Landgrebe, David A.
1993-01-01
Multispectral sensor systems have steadily improved over the years in their ability to deliver increased spectral detail. With the advent of hyperspectral sensors, including imaging spectrometers, this technology is taking a large leap forward, enabling the delivery of much more detailed information. However, this direction of development has drawn even more attention to the matter of noise and other deleterious effects in the data, because easing the fundamental limitations that spectral detail places on information collection raises the limitations presented by noise to even greater importance. Much current effort in remote sensing research is thus being devoted to adjusting the data to mitigate the effects of noise and other deleterious effects. A parallel approach to the problem is to look for analysis approaches and procedures that have reduced sensitivity to such effects. We discuss some of the fundamental principles that define analysis algorithm characteristics providing such reduced sensitivity. One such analysis procedure is described, including an example analysis of a data set illustrating this effect.
Wavelet Filter Banks for Super-Resolution SAR Imaging
NASA Technical Reports Server (NTRS)
Sheybani, Ehsan O.; Deshpande, Manohar; Memarsadeghi, Nargess
2011-01-01
This paper discusses innovative wavelet-based filter banks designed to enhance the analysis of super-resolution Synthetic Aperture Radar (SAR) images using parametric spectral methods and signal classification algorithms. SAR finds applications in many of NASA's earth science fields such as deformation, ecosystem structure, and dynamics of ice, snow and cold land processes, and surface water and ocean topography. Traditionally, standard methods such as the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT) have been used to extract images from SAR radar data. Due to the non-parametric features of these methods and their resolution limitations and observation time dependence, the use of spectral estimation and signal pre- and post-processing techniques based on wavelets to process SAR radar data has been proposed. Multi-resolution wavelet transforms and advanced spectral estimation techniques have proven to offer efficient solutions to this problem.
Spectral analysis of bacanora (agave-derived liquor) by using FT-Raman spectroscopy
NASA Astrophysics Data System (ADS)
Ortega Clavero, Valentin; Weber, Andreas; Schröder, Werner; Curticapean, Dan
2016-04-01
The industry of the agave-derived bacanora, in the northern Mexican state of Sonora, has been growing substantially in recent years. However, this higher demand is still subject to a variety of social, legal, cultural, ecological and economic influences. The governmental institutions of the state have tried to encourage sustainable development and certain levels of standardization in the production of bacanora by applying different economic and legal strategies. However, a large portion of this alcoholic beverage is still produced in a traditional and rudimentary fashion. Beyond the quality of the beverage, the lack of proper control using adequate instrumental methods might represent a health risk, as in several cases traditionally distilled beverages can contain elevated levels of harmful materials. The present article describes the qualitative spectral analysis of samples of the traditionally produced distilled beverage bacanora in the range from 0 cm-1 to 3500 cm-1 using a Fourier Transform Raman spectrometer. This particular technique has not previously been explored for the analysis of bacanora, as it has for other beverages, including tequila. The proposed instrumental arrangement for the spectral analysis has been built by combining conventional hardware parts (Michelson interferometer, photo-diodes, visible laser, etc.) and a set of self-developed evaluation algorithms. The resulting spectral information has been compared to that of pure samples of ethanol and to the spectra of different samples of the alcoholic beverage tequila. The proposed instrumental arrangement can be used for the analysis of bacanora.
Detection of nasopharyngeal cancer using confocal Raman spectroscopy and genetic algorithm technique
NASA Astrophysics Data System (ADS)
Li, Shao-Xin; Chen, Qiu-Yan; Zhang, Yan-Jiao; Liu, Zhi-Ming; Xiong, Hong-Lian; Guo, Zhou-Yi; Mai, Hai-Qiang; Liu, Song-Hao
2012-12-01
Raman spectroscopy (RS) and a genetic algorithm (GA) were applied to distinguish nasopharyngeal cancer (NPC) from normal nasopharyngeal tissue. A total of 225 Raman spectra were acquired from 120 tissue sites of 63 nasopharyngeal patients: 56 Raman spectra from normal tissue and 169 Raman spectra from NPC tissue. The GA integrated with linear discriminant analysis (LDA) is developed to differentiate NPC and normal tissue according to spectral variables in the selected regions of 792-805, 867-880, 996-1009, 1086-1099, 1288-1304, 1663-1670, and 1742-1752 cm-1, related to proteins, nucleic acids and lipids of tissue. The GA-LDA algorithm with the leave-one-out cross-validation method provides a sensitivity of 69.2% and specificity of 100%. These results are better than those of principal component analysis applied to the same Raman dataset of nasopharyngeal tissue, which gave a sensitivity of 63.3% and specificity of 94.6%. This demonstrates that Raman spectroscopy associated with the GA-LDA diagnostic algorithm has enormous potential to detect and diagnose nasopharyngeal cancer.
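A minimal sketch of GA-driven band selection wrapped around an LDA classifier is given below. The synthetic spectra, population size, crossover and mutation rates, and cross-validated fitness are illustrative assumptions, not the paper's exact GA-LDA configuration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(225, 40))                # 225 spectra, 40 candidate bands
y = rng.integers(0, 2, 225)                   # 0 = normal, 1 = NPC (toy labels)
X[y == 1, :5] += 0.8                          # make the first 5 bands informative

def fitness(mask):
    """Cross-validated LDA accuracy on the bands selected by a binary mask."""
    if mask.sum() == 0:
        return 0.0
    lda = LinearDiscriminantAnalysis()
    return cross_val_score(lda, X[:, mask.astype(bool)], y, cv=5).mean()

pop = rng.integers(0, 2, (30, X.shape[1]))    # random initial population
for gen in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]   # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(0, 10, 2)]
        cut = rng.integers(1, X.shape[1])     # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(child.size) < 0.02  # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected bands:", np.flatnonzero(best))
```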
Arctic lead detection using a waveform mixture algorithm from CryoSat-2 data
NASA Astrophysics Data System (ADS)
Lee, Sanggyun; Kim, Hyun-cheol; Im, Jungho
2018-05-01
We propose a waveform mixture algorithm to detect leads from CryoSat-2 data, a novel approach that differs from the existing threshold-based lead detection methods. The waveform mixture algorithm adopts the concept of spectral mixture analysis, which is widely used in the field of hyperspectral image analysis. This lead detection method was evaluated with high-resolution (250 m) MODIS images and showed comparable and promising performance in detecting leads when compared to the previous methods. The robustness of the proposed approach also lies in the fact that it does not require the rescaling of parameters (i.e., stack standard deviation, stack skewness, stack kurtosis, pulse peakiness, and backscatter σ0), as it directly uses L1B waveform data, unlike the existing threshold-based methods. Monthly lead fraction maps were produced by the waveform mixture algorithm, which show the interannual variability of recent sea ice cover during 2011-2016, excluding the summer season (i.e., June to September). We also compared the lead fraction maps to other lead fraction maps generated from previously published data sets, finding similar spatiotemporal patterns.
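To make the mixture idea concrete, here is a minimal Python sketch that models a waveform as a non-negative combination of a peaky "lead" reference and a diffuse "sea-ice" reference and flags leads by their abundance. The Gaussian reference shapes and the 0.5 decision threshold are assumptions for illustration only, not the paper's endmembers.

```python
import numpy as np
from scipy.optimize import nnls

bins = np.arange(128)
lead_ref = np.exp(-0.5 * ((bins - 60) / 1.5) ** 2)    # specular, peaky return
ice_ref = np.exp(-0.5 * ((bins - 60) / 12.0) ** 2)    # diffuse, broad return
E = np.column_stack([lead_ref, ice_ref])              # endmember matrix

waveform = (0.7 * lead_ref + 0.3 * ice_ref
            + 0.01 * np.random.default_rng(6).random(128))
abundances, _ = nnls(E, waveform)                     # non-negative unmixing
abundances /= abundances.sum()
print("lead fraction:", abundances[0],
      "-> lead" if abundances[0] > 0.5 else "-> sea ice")
```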
Diverse Power Iteration Embeddings and Its Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang H.; Yoo S.; Yu, D.
2014-12-14
Spectral embedding is one of the most effective dimension reduction algorithms in data mining. However, its computational complexity has to be mitigated in order to apply it to real-world large-scale data analysis. Much research has focused on developing approximate spectral embeddings which are more efficient but meanwhile far less effective. This paper proposes Diverse Power Iteration Embeddings (DPIE), which not only retain the efficiency of power iteration methods but also produce a series of diverse and more effective embedding vectors. We test this novel method by applying it to various data mining applications (e.g., clustering, anomaly detection and feature selection) and evaluating their performance improvements. The experimental results show that our proposed DPIE is more effective than popular spectral approximation methods and obtains quality similar to that of classic spectral embedding derived from eigen-decompositions. Moreover, it is extremely fast on big data applications. For example, in terms of clustering results, DPIE achieves as much as 95% of the quality of classic spectral clustering on complex datasets, but is 4000+ times faster in a limited-memory environment.
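As a rough illustration of the power-iteration-embedding idea (not the authors' DPIE implementation), the sketch below runs truncated power iterations of a row-normalized affinity matrix from several random starts and stacks the resulting vectors as embedding coordinates.

```python
import numpy as np

def power_iteration_embedding(A, n_vectors=3, iters=30, seed=7):
    """Collect several truncated power-iteration vectors as an embedding."""
    rng = np.random.default_rng(seed)
    W = A / A.sum(axis=1, keepdims=True)      # row-normalized affinity
    vectors = []
    for _ in range(n_vectors):
        v = rng.random(A.shape[0])
        for _ in range(iters):
            v = W @ v
            v /= np.abs(v).max()              # rescale to avoid underflow
        vectors.append(v)
    return np.column_stack(vectors)           # n_samples x n_vectors embedding

rng = np.random.default_rng(8)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
A = np.exp(-np.square(X[:, None] - X[None]).sum(-1))  # Gaussian affinity
emb = power_iteration_embedding(A)
print("embedding shape:", emb.shape)
```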
Dasari, Surendra; Chambers, Matthew C.; Martinez, Misti A.; Carpenter, Kristin L.; Ham, Amy-Joan L.; Vega-Montoto, Lorenzo J.; Tabb, David L.
2012-01-01
Spectral libraries have emerged as a viable alternative to protein sequence databases for peptide identification. These libraries contain previously detected peptide sequences and their corresponding tandem mass spectra (MS/MS). Search engines can then identify peptides by comparing experimental MS/MS scans to those in the library. Many of these algorithms employ the dot product score for measuring the quality of a spectrum-spectrum match (SSM). This scoring system does not offer a clear statistical interpretation and ignores fragment ion m/z discrepancies in the scoring. We developed a new spectral library search engine, Pepitome, which employs statistical systems for scoring SSMs. Pepitome outperformed the leading library search tool, SpectraST, when analyzing data sets acquired on three different mass spectrometry platforms. We characterized the reliability of spectral library searches by confirming shotgun proteomics identifications through RNA-Seq data. Applying spectral library and database searches on the same sample revealed their complementary nature. Pepitome identifications enabled the automation of quality analysis and quality control (QA/QC) for shotgun proteomics data acquisition pipelines. PMID:22217208
Narayanaswamy, Arunachalam; Dwarakapuram, Saritha; Bjornsson, Christopher S; Cutler, Barbara M; Shain, William; Roysam, Badrinath
2010-03-01
This paper presents robust 3-D algorithms to segment vasculature that is imaged by labeling laminae, rather than the lumenal volume. The signal is weak, sparse, noisy, nonuniform, and low-contrast, and exhibits gaps and spectral artifacts, so adaptive thresholding and Hessian filtering based methods are not effective. The structure deviates from a tubular geometry, so tracing algorithms are not effective. We propose a four-step approach. The first step detects candidate voxels using a robust hypothesis test based on a model that assumes Poisson noise and locally planar geometry. The second step performs an adaptive region growth to extract weakly labeled and fine vessels while rejecting spectral artifacts. The third step constructs an accurate mesh representation using marching tetrahedra, volume-preserving smoothing, and adaptive decimation algorithms, enabling interactive visualization and estimation of features such as statistical confidence, local curvature, local thickness, and local normal. The final step estimates vessel centerlines using a ray casting and vote accumulation algorithm, enabling topological analysis and efficient validation. Our algorithm lends itself to parallel processing, and yielded an 8x speedup on a graphics processor (GPU). On synthetic data, our meshes had average error per face (EPF) values of 0.1-1.6 voxels per mesh face for peak signal-to-noise ratios from 110 down to 28 dB. Separately, after decimating the mesh to less than 1% of its original size, the EPF was less than 1 voxel per face. When validated on real datasets, the average recall and precision were 94.66% and 94.84%, respectively.
Real-time implementation of a multispectral mine target detection algorithm
NASA Astrophysics Data System (ADS)
Samson, Joseph W.; Witter, Lester J.; Kenton, Arthur C.; Holloway, John H., Jr.
2003-09-01
Spatial-spectral anomaly detection (the "RX algorithm") has been exploited on the USMC's Coastal Battlefield Reconnaissance and Analysis (COBRA) Advanced Technology Demonstration (ATD) and several associated technology base studies, and has been found to be a useful method for the automated detection of surface-emplaced antitank land mines in airborne multispectral imagery. RX is a complex image processing algorithm that involves the direct spatial convolution of a target/background mask template over each multispectral image, coupled with a spatially variant background spectral covariance matrix estimation and inversion. The RX throughput on the ATD was about 38X real time using a single Sun UltraSparc system. An effort to demonstrate RX in real time began in FY01. We now report the development and demonstration of a Field Programmable Gate Array (FPGA) solution that achieves a real-time implementation of the RX algorithm at video rates using COBRA ATD data. The approach uses an Annapolis Microsystems Firebird PMC card containing a Xilinx XCV2000E FPGA with over 2,500,000 logic gates and 18 MBytes of memory. A prototype system was configured using a Tek Microsystems VME board with dual PowerPC G4 processors and two PMC slots. The RX algorithm was translated from its C programming implementation into the VHDL language and synthesized into gates that were loaded into the FPGA. The VHDL/synthesizer approach allows key RX parameters to be quickly changed and a new implementation automatically generated. Reprogramming the FPGA is done rapidly and in-circuit. Implementation of the RX algorithm in a single FPGA is a major first step toward achieving real-time land mine detection.
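For reference, a minimal global RX detector is sketched below in Python. The operational COBRA implementation convolves a target/background mask template and uses spatially variant background covariances; this sketch instead scores each pixel by its Mahalanobis distance from a single global background estimate, which is the simplest form of the technique.

```python
import numpy as np

def rx_scores(cube):
    """cube: (rows, cols, bands) -> (rows, cols) RX anomaly scores."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(float)
    mu = pixels.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False))
    centered = pixels - mu
    # Squared Mahalanobis distance of every pixel from the background.
    scores = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
    return scores.reshape(rows, cols)

rng = np.random.default_rng(9)
cube = rng.normal(size=(64, 64, 6))
cube[30, 30] += 5.0                            # implant a spectral anomaly
print("max-score pixel:", np.unravel_index(rx_scores(cube).argmax(), (64, 64)))
```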
Land Surface Temperature Measurements form EOS MODIS Data
NASA Technical Reports Server (NTRS)
Wan, Zhengming
1996-01-01
We have developed a physics-based land-surface temperature (LST) algorithm for simultaneously retrieving surface band-averaged emissivities and temperatures from day/night pairs of MODIS (Moderate Resolution Imaging Spectroradiometer) data in seven thermal infrared bands. The set of 14 nonlinear equations in the algorithm is solved with the statistical regression method and the least-squares fit method. This new LST algorithm was tested with simulated MODIS data for 80 sets of band-averaged emissivities calculated from published spectral data of terrestrial materials over wide ranges of atmospheric and surface temperature conditions. A comprehensive sensitivity and error analysis has been made to evaluate the performance of the new LST algorithm and its dependence on variations in surface emissivity and temperature, on atmospheric conditions, and on the noise-equivalent temperature difference (NEΔT) and calibration accuracy specifications of the MODIS instrument. In cases with a systematic calibration error of 0.5%, the standard deviations of errors in retrieved surface daytime and nighttime temperatures fall between 0.4-0.5 K over a wide range of surface temperatures for mid-latitude summer conditions. The standard deviations of errors in retrieved emissivities in bands 31 and 32 (in the 10-12.5 micrometer IR spectral window region) are 0.009, and the maximum error in retrieved LST values falls between 2-3 K. Several issues related to the day/night LST algorithm (uncertainties in the day/night registration and in surface emissivity changes caused by dew occurrence, and the cloud cover) have been investigated. The LST algorithms have been validated with MODIS Airborne Simulator (MAS) data and ground-based measurement data in two field campaigns conducted in Railroad Valley playa, NV in 1995 and 1996. The MODIS LST version 1 software has been delivered.
NASA Astrophysics Data System (ADS)
Limbacher, James A.; Kahn, Ralph A.
2017-04-01
As aerosol amount and type are key factors in the atmospheric correction required for remote-sensing chlorophyll a concentration (Chl) retrievals, the Multi-angle Imaging SpectroRadiometer (MISR) can contribute to ocean color analysis despite a lack of spectral channels optimized for this application. Conversely, an improved ocean surface constraint should also improve MISR aerosol-type products, especially spectral single-scattering albedo (SSA) retrievals. We introduce a coupled, self-consistent retrieval of Chl together with aerosol over dark water. There are time-varying MISR radiometric calibration errors that significantly affect key spectral reflectance ratios used in the retrievals. Therefore, we also develop and apply new calibration corrections to the MISR top-of-atmosphere (TOA) reflectance data, based on comparisons with coincident MODIS (Moderate Resolution Imaging Spectroradiometer) observations and trend analysis of the MISR TOA bidirectional reflectance factors (BRFs) over three pseudo-invariant desert sites. We run the MISR research retrieval algorithm (RA) with the corrected MISR reflectances to generate MISR-retrieved Chl and compare the MISR Chl values to a set of 49 coincident SeaBASS (SeaWiFS Bio-optical Archive and Storage System) in situ observations. Where in situ Chl < 1.5 mg m-3, the results from our Chl model are expected to be of highest quality, due to algorithmic assumption validity. Comparing MISR RA Chl to the 49 coincident SeaBASS observations, we report a correlation coefficient (r) of 0.86, a root-mean-square error (RMSE) of 0.25, and a median absolute error (MAE) of 0.10. Statistically, a two-sample Kolmogorov-Smirnov test indicates that it is not possible to distinguish between MISR Chl and available SeaBASS in situ Chl values (p > 0.1). We also compare MODIS-Terra and MISR RA Chl statistically, over much broader regions. With about 1.5 million MISR-MODIS collocations having MODIS Chl < 1.5 mg m-3, MISR and MODIS show very good agreement: r = 0.96, MAE = 0.09, and RMSE = 0.15. The new dark water aerosol/Chl RA can retrieve Chl in low-Chl, case I waters, independent of other imagers such as MODIS, via a largely physical algorithm, compared to the commonly applied statistical ones. At a minimum, MISR's multi-angle data should help reduce uncertainties in the MODIS-Terra ocean color retrieval where coincident measurements are made, while also allowing for a more robust retrieval of particle properties such as spectral single-scattering albedo.
Brückner, Michael; Becker, Katja; Popp, Jürgen; Frosch, Torsten
2015-09-24
A new setup for Raman spectroscopic wide-field imaging is presented. It combines the advantages of a fiber array based spectral translator with a tailor-made laser illumination system for high-quality Raman chemical imaging of sensitive biological samples. The Gaussian-like intensity distribution of the illuminating laser beam is shaped by a square-core optical multimode fiber to a top-hat profile with very homogeneous intensity distribution to fulfill the conditions of Koehler. The 30 m long optical fiber and an additional vibrator efficiently destroy the polarization and coherence of the illuminating light. This homogeneous, incoherent illumination is an essential prerequisite for stable quantitative imaging of complex biological samples. The fiber array translates the two-dimensional lateral information of the Raman stray light into separated spectral channels with very high contrast. The Raman image can be correlated with a corresponding white light microscopic image of the sample. The new setup enables simultaneous quantification of all Raman spectra across the whole spatial area with very good spectral resolution and thus outperforms other Raman imaging approaches based on scanning and tunable filters. The unique capabilities of the setup for fast, gentle, sensitive, and selective chemical imaging of biological samples were applied for automated hemozoin analysis. A special algorithm was developed to generate Raman images based on the hemozoin distribution in red blood cells without any influence from other Raman scattering. The new imaging setup in combination with the robust algorithm provides a novel, elegant way for chemical selective analysis of the malaria pigment hemozoin in early ring stages of Plasmodium falciparum infected erythrocytes. Copyright © 2015 Elsevier B.V. All rights reserved.
Resolution Study of a Hyperspectral Sensor using Computed Tomography in the Presence of Noise
2012-06-14
diffraction efficiency is dependent on wavelength. Compared to techniques developed by later work, simple algebraic reconstruction techniques were used... spectral dimension, using computed tomography (CT) techniques with only a finite number of diverse images. CTHIS require a reconstruction algorithm in... many frames are needed to reconstruct the spectral cube of a simple object using a theoretical lower bound. In this research a new algorithm is derived
Two-dimensional imaging of gas temperature and concentration based on hyperspectral tomography
NASA Astrophysics Data System (ADS)
Xin, Ming-yuan; Jin, Xing; Wang, Guang-yu; Song, Junling
2016-10-01
Two-dimensional imaging of gas temperature and concentration is realized by hyperspectral tomography, which exploits multi-wavelength absorption spectral information so that imaging can be accomplished with a small number of projections and viewing angles. A temperature and concentration model is established to simulate combustion conditions, and a total of 10 near-infrared absorption spectral features of H2O are used. An improved simulated annealing algorithm that adjusts its search step is employed as the main search algorithm for the tomography. By adding random errors to the absorption area information, the stability of the algorithm is tested, and the results are compared with reconstructions provided by the algebraic reconstruction technique, which uses 2 spectral features for imaging. The results show that the two methods perform equivalently in low-level noise environments, but at high noise levels hyperspectral tomography turns out to be more stable.
Hyperspectral remote sensing image retrieval system using spectral and texture features.
Zhang, Jing; Geng, Wenhao; Liang, Xi; Li, Jiafeng; Zhuo, Li; Zhou, Qianlan
2017-06-01
Although many content-based image retrieval systems have been developed, few studies have focused on hyperspectral remote sensing images. In this paper, a hyperspectral remote sensing image retrieval system based on spectral and texture features is proposed. The main contributions are fourfold: (1) considering the "mixed pixel" problem in hyperspectral imagery, endmembers are extracted as spectral features by an improved automatic pixel purity index algorithm, and texture features are then extracted with the gray level co-occurrence matrix; (2) a similarity measurement is designed for the retrieval system, in which the similarity of spectral features is measured with a mixed spectral information divergence and spectral angle match measure, and the similarity of texture features is measured with Euclidean distance; (3) considering the limited ability of the human visual system, the retrieval results are returned after synthesizing true color images based on the hyperspectral image characteristics; (4) the retrieval results are optimized by adjusting the feature weights of the similarity measurements according to the user's relevance feedback. Experimental results on NASA data sets show that our system achieves retrieval performance comparable or superior to existing hyperspectral analysis schemes.
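A minimal sketch of the two spectral similarity measures combined above follows: the spectral angle match (SAM), the spectral information divergence (SID), and a mixed SID x tan(SAM) measure. The mixing rule shown is one common variant, stated here as an assumption rather than the paper's exact formula, and the sample spectra are toy values.

```python
import numpy as np

def sam(a, b):
    """Spectral angle (radians) between two spectra."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sid(a, b):
    """Symmetric spectral information divergence between two spectra."""
    p, q = a / a.sum(), b / b.sum()            # spectra as probability masses
    return np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p))

def sid_sam(a, b):
    """A common mixed measure: SID scaled by tan(SAM)."""
    return sid(a, b) * np.tan(sam(a, b))

s1 = np.array([0.20, 0.50, 0.90, 0.40])
s2 = np.array([0.25, 0.45, 0.85, 0.50])
print("SAM:", sam(s1, s2), "SID:", sid(s1, s2), "SID-SAM:", sid_sam(s1, s2))
```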
NASA Astrophysics Data System (ADS)
Huo, Yanfeng; Duan, Minzheng; Tian, Wenshou; Min, Qilong
2015-08-01
A differential optical absorption spectroscopy (DOAS)-like algorithm is developed to retrieve the column-averaged dry-air mole fraction of carbon dioxide (XCO2) from ground-based hyper-spectral measurements of the direct solar beam. Unlike the spectral fitting method, which minimizes the difference between the observed and simulated spectra, the ratios of multiple channel pairs (one weak and one strong absorption channel) are used to retrieve XCO2 from measurements in the shortwave infrared (SWIR) band. Based on sensitivity tests, a super channel pair is carefully selected to reduce the effects of solar lines, water vapor, air temperature, pressure, instrument noise, and frequency shift on retrieval errors. The new algorithm reduces computational cost, and the retrievals are less sensitive to temperature and H2O uncertainty than the spectral fitting method. Multi-day Total Carbon Column Observing Network (TCCON) measurements under clear-sky conditions at two sites (Tsukuba and Bremen) are used to derive XCO2 for algorithm evaluation and validation. The DOAS-like results agree very well with those of the TCCON algorithm after correction of an airmass-dependent bias.
NASA Astrophysics Data System (ADS)
Yang, Dong; Lu, Anxiang; Ren, Dong; Wang, Jihua
2017-11-01
This study explored the feasibility of rapid detection of biogenic amines (BAs) in cooked beef during storage using hyperspectral imaging combined with a sparse representation (SR) algorithm. Hyperspectral images of the samples were collected in two spectral ranges, 400-1000 nm and 1000-1800 nm. The dimensionality of the spectral data was reduced by SR and principal component analysis (PCA), and the reduced data were combined with least squares support vector machines (LS-SVM) to build SR-LS-SVM and PC-LS-SVM models for predicting BAs values in cooked beef. The results showed that the SR-LS-SVM model exhibited the best predictive ability, with a determination coefficient (RP2) of 0.943 and a root mean square error of prediction (RMSEP) of 1.206 in the 400-1000 nm range. The SR and PCA algorithms were further combined to establish the best SR-PC-LS-SVM model for BAs prediction, which had a high RP2 of 0.969 and a low RMSEP of 1.039 in the 400-1000 nm region. A visual map of the BAs was generated using the best SR-PC-LS-SVM model with image processing algorithms, which can be used to observe the changes of BAs in cooked beef more intuitively. The study demonstrated that hyperspectral imaging combined with sparse representation can effectively determine BAs values in cooked beef during storage, and the SR-PC-LS-SVM model has potential for rapid and accurate determination of freshness indexes in other meat and meat products.
Fast algorithm for bilinear transforms in optics
NASA Astrophysics Data System (ADS)
Ostrovsky, Andrey S.; Martinez-Niconoff, Gabriel C.; Ramos Romero, Obdulio; Cortes, Liliana
2000-10-01
The fast algorithm for calculating the bilinear transform in the optical system is proposed. This algorithm is based on the coherent-mode representation of the cross-spectral density function of the illumination. The algorithm is computationally efficient when the illumination is partially coherent. Numerical examples are studied and compared with the theoretical results.
NASA Astrophysics Data System (ADS)
Hidalgo-Aguirre, Maribel; Gitelman, Julian; Lesk, Mark Richard; Costantino, Santiago
2015-11-01
Optical coherence tomography (OCT) imaging has become a standard diagnostic tool in ophthalmology, providing essential information associated with various eye diseases. In order to investigate the dynamics of the ocular fundus, we present a simple and accurate automated algorithm to segment the inner limiting membrane in video-rate optic nerve head spectral domain (SD) OCT images. The method is based on morphological operations including a two-step contrast enhancement technique, proving to be very robust when dealing with low signal-to-noise ratio images and pathological eyes. An analysis algorithm was also developed to measure neuroretinal tissue deformation from the segmented retinal profiles. The performance of the algorithm is demonstrated, and deformation results are presented for healthy and glaucomatous eyes.
ECG contamination of EEG signals: effect on entropy.
Chakrabarti, Dhritiman; Bansal, Sonia
2016-02-01
Entropy™ is a proprietary algorithm which uses spectral entropy analysis of electroencephalographic (EEG) signals to produce indices that are used as a measure of depth of hypnosis. We describe a report of electrocardiographic (ECG) contamination of EEG signals leading to fluctuating, erroneous Entropy values. An explanation is provided for the mechanism behind this observation by describing the spread of ECG signals in the head and neck and their influence on the EEG/Entropy, correlating the observation with the published Entropy algorithm. While the Entropy algorithm has been well conceived, there are still instances in which it can produce erroneous values. Such erroneous values and their cause may be identified by close scrutiny of the EEG waveform if Entropy values seem out of sync with that expected at given anaesthetic levels.
Duraipandian, Shiyamala; Sylvest Bergholt, Mads; Zheng, Wei; Yu Ho, Khek; Teh, Ming; Guan Yeoh, Khay; Bok Yan So, Jimmy; Shabbir, Asim; Huang, Zhiwei
2012-08-01
Optical spectroscopic techniques including reflectance, fluorescence and Raman spectroscopy have shown promising potential for in vivo precancer and cancer diagnostics in a variety of organs. However, data-analysis has mostly been limited to post-processing and off-line algorithm development. In this work, we develop a fully automated on-line Raman spectral diagnostics framework integrated with a multimodal image-guided Raman technique for real-time in vivo cancer detection at endoscopy. A total of 2748 in vivo gastric tissue spectra (2465 normal and 283 cancer) were acquired from 305 patients recruited to construct a spectral database for diagnostic algorithms development. The novel diagnostic scheme developed implements on-line preprocessing, outlier detection based on principal component analysis statistics (i.e., Hotelling's T2 and Q-residuals) for tissue Raman spectra verification as well as for organ specific probabilistic diagnostics using different diagnostic algorithms. Free-running optical diagnosis and processing time of < 0.5 s can be achieved, which is critical to realizing real-time in vivo tissue diagnostics during clinical endoscopic examination. The optimized partial least squares-discriminant analysis (PLS-DA) models based on the randomly resampled training database (80% for learning and 20% for testing) provide the diagnostic accuracy of 85.6% [95% confidence interval (CI): 82.9% to 88.2%] [sensitivity of 80.5% (95% CI: 71.4% to 89.6%) and specificity of 86.2% (95% CI: 83.6% to 88.7%)] for the detection of gastric cancer. The PLS-DA algorithms are further applied prospectively on 10 gastric patients at gastroscopy, achieving the predictive accuracy of 80.0% (60/75) [sensitivity of 90.0% (27/30) and specificity of 73.3% (33/45)] for in vivo diagnosis of gastric cancer. The receiver operating characteristics curves further confirmed the efficacy of Raman endoscopy together with PLS-DA algorithms for in vivo prospective diagnosis of gastric cancer. This work successfully moves biomedical Raman spectroscopic technique into real-time, on-line clinical cancer diagnosis, especially in routine endoscopic diagnostic applications.
Numerical Investigation of Laminar-Turbulent Transition in a Flat Plate Wake
1990-03-02
Recoverable fragments of this report indicate that the streamwise and transverse directions are discretized with difference methods (cf. Swarztrauber, P. N., 1977, "The Methods of Cyclic Reduction, Fourier Analysis and the FACR Algorithm"), that the temporal discretization employs a combination of ADI, Crank-Nicolson, and Adams-Bashforth methods, and that the spanwise direction is treated with a Fourier spectral approximation.
Johansen, Richard; Beck, Richard; Nowosad, Jakub; Nietch, Christopher; Xu, Min; Shu, Song; Yang, Bo; Liu, Hongxing; Emery, Erich; Reif, Molly; Harwood, Joseph; Young, Jade; Macke, Dana; Martin, Mark; Stillings, Garrett; Stumpf, Richard; Su, Haibin
2018-06-01
This study evaluated the performances of twenty-nine algorithms that use satellite-based spectral imager data to derive estimates of chlorophyll-a concentrations that, in turn, can be used as an indicator of the general status of algal cell densities and the potential for a harmful algal bloom (HAB). The performance assessment was based on making relative comparisons between two temperate inland lakes: Harsha Lake (7.99 km2) in Southwest Ohio and Taylorsville Lake (11.88 km2) in central Kentucky. Of interest was identifying algorithm-imager combinations that had high correlation with coincident chlorophyll-a surface observations for both lakes, as this suggests portability for regional HAB monitoring. The spectral data utilized to estimate surface water chlorophyll-a concentrations were derived from the airborne Compact Airborne Spectral Imager (CASI) 1500 hyperspectral imager, which was then used to derive synthetic versions of currently operational satellite-based imagers using spatial resampling and spectral binning. The synthetic data mimic the configurations of spectral imagers on current satellites in earth's orbit, including WorldView-2/3, Sentinel-2, Landsat-8, the Moderate-resolution Imaging Spectroradiometer (MODIS), and the Medium Resolution Imaging Spectrometer (MERIS). High correlations were found between the direct measurements and the imagery-estimated chlorophyll-a concentrations at both lakes. The results determined that eleven out of the twenty-nine algorithms were considered portable, with r2 values greater than 0.5 for both lakes. Even though the two lakes are different in terms of background water quality, size and shape, with Taylorsville being generally less impaired, larger, but much narrower throughout, the results support the portability of utilizing a suite of certain algorithms across multiple sensors to detect potential algal blooms through the use of chlorophyll-a as a proxy. Furthermore, the strong performance of the Sentinel-2 algorithms is exceptionally promising, due to the recent launch of the second satellite in the constellation, which will provide higher temporal resolution for temperate inland water bodies. Additionally, scripts were written for the open-source statistical software R that automate much of the spectral data processing steps. This allows for the simultaneous consideration of numerous algorithms across multiple imagers over an expedited time frame for the near real-time monitoring required for detecting algal blooms and mitigating their adverse impacts. Copyright © 2018 Elsevier B.V. All rights reserved.
Fast sparse Raman spectral unmixing for chemical fingerprinting and quantification
NASA Astrophysics Data System (ADS)
Yaghoobi, Mehrdad; Wu, Di; Clewes, Rhea J.; Davies, Mike E.
2016-10-01
Raman spectroscopy is a well-established spectroscopic method for the detection of condensed-phase chemicals. It is based on scattered light from exposure of a target material to a narrowband laser beam. The information generated enables presumptive identification by measuring correlation with library spectra. Whilst this approach is successful in identifying the chemical content of samples with one component, it is more difficult to apply to spectral mixtures. The capability of handling spectral mixtures is crucial for defence and security applications, as hazardous materials may be present as mixtures due to the presence of degradation products, interferents or precursors. A novel method for spectral unmixing is proposed here. Most modern decomposition techniques are based on the sparse decomposition of the mixture and the application of extra constraints to preserve the sum of concentrations. These methods have often been proposed for passive spectroscopy, where spectral baseline correction is not required. Most successful methods are computationally expensive, e.g. convex optimisation and Bayesian approaches. We present a novel low-complexity sparsity-based method to decompose the spectra using a reference library of spectra. It can be implemented on a hand-held spectrometer in near real time. The algorithm is based on iteratively subtracting the contribution of selected spectra and updating the contribution of each spectrum. The core algorithm is called fast non-negative orthogonal matching pursuit, which has been proposed by the authors in the context of non-negative sparse representations. The iteration terminates when the maximum number of expected chemicals has been found or the residual spectrum has negligible energy, i.e. on the order of the noise level. A backtracking step removes the least contributing spectrum from the list of detected chemicals and reports it as an alternative component. This feature is particularly useful in the detection of chemicals with small contributions, which are normally not detected. The proposed algorithm is easily reconfigurable to include new library entries and optional preferential threat searches in the presence of predetermined threat indicators. Under Ministry of Defence funding, we have demonstrated the algorithm for fingerprinting and rough quantification of the concentration of chemical mixtures using a set of reference spectral mixtures. In our experiments, the algorithm successfully detected chemicals with concentrations below 10 percent. The running time of the algorithm is on the order of one second, using a single core of a desktop computer.
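A minimal version of the greedy core described here (non-negative orthogonal matching pursuit) can be sketched as follows. The toy spectral library, the stopping rules, and the two-component mixture are assumptions, and the authors' backtracking step is omitted.

```python
import numpy as np
from scipy.optimize import nnls

def nn_omp(D, y, max_atoms=5, tol=1e-6):
    """Greedily select library spectra with the largest positive correlation
    to the residual, refitting all selected concentrations by NNLS."""
    residual, support = y.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(max_atoms):
        corr = D.T @ residual
        best = int(np.argmax(corr))
        if corr[best] <= tol or best in support:
            break
        support.append(best)
        coef, _ = nnls(D[:, support], y)       # non-negative refit
        residual = y - D[:, support] @ coef
        if np.linalg.norm(residual) < tol:     # residual at noise level
            break
    return support, coef

rng = np.random.default_rng(2)
D = np.abs(rng.normal(size=(200, 15)))         # toy reference spectral library
D /= np.linalg.norm(D, axis=0)
y = 0.9 * D[:, 2] + 0.1 * D[:, 7]              # mixture with a 10% component
support, coef = nn_omp(D, y)
print("detected components:", support, "estimated concentrations:", coef)
```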
A New Algorithm for Detecting Cloud Height using OMPS/LP Measurements
NASA Technical Reports Server (NTRS)
Chen, Zhong; DeLand, Matthew; Bhartia, Pawan K.
2016-01-01
The Ozone Mapping and Profiler Suite Limb Profiler (OMPS/LP) ozone product requires the determination of cloud height for each event to establish the lower boundary of the profile for the retrieval algorithm. We have created a revised cloud detection algorithm for LP measurements that uses the spectral dependence of the vertical gradient in radiance between two wavelengths in the visible and near-IR spectral regions. This approach provides better discrimination between clouds and aerosols than results obtained using a single wavelength. Observed LP cloud height values show good agreement with coincident Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) measurements.
Predictive Rate-Distortion for Infinite-Order Markov Processes
NASA Astrophysics Data System (ADS)
Marzen, Sarah E.; Crutchfield, James P.
2016-06-01
Predictive rate-distortion analysis suffers from the curse of dimensionality: clustering arbitrarily long pasts to retain information about arbitrarily long futures requires resources that typically grow exponentially with length. The challenge is compounded for infinite-order Markov processes, since conditioning on finite sequences cannot capture all of their past dependencies. Spectral arguments confirm a popular intuition: algorithms that cluster finite-length sequences fail dramatically when the underlying process has long-range temporal correlations and can fail even for processes generated by finite-memory hidden Markov models. We circumvent the curse of dimensionality in rate-distortion analysis of finite- and infinite-order processes by casting predictive rate-distortion objective functions in terms of the forward- and reverse-time causal states of computational mechanics. Examples demonstrate that the resulting algorithms yield substantial improvements.
Ahn, M. H.; Han, D.; Won, H. Y.; ...
2015-02-03
For better utilization of the ground-based microwave radiometer, it is important to detect cloud presence in the measured data. Here, we introduce a simple and fast cloud detection algorithm that uses the optical characteristics of clouds in the infrared atmospheric window region. The new algorithm utilizes the brightness temperature (Tb) measured by an infrared radiometer installed on top of a microwave radiometer. The two-step algorithm consists of a spectral test followed by a temporal test. The measured Tb is first compared with a predicted clear-sky Tb obtained from an empirical formula as a function of surface air temperature and water vapor pressure. For the temporal test, the temporal variability of the measured Tb during one minute is compared with a dynamic threshold value representing the variability of clear-sky conditions. Data are designated cloud-free only when both the spectral and temporal tests indicate cloud-free conditions. Overall, most of the thick and uniform clouds are successfully detected by the spectral test, while the broken and fast-varying clouds are detected by the temporal test. The algorithm is validated by comparison with collocated ceilometer data for six months, from January to June 2013. The overall proportion of correctness is about 88.3% and the probability of detection is 90.8%, which are comparable with or better than those of previous similar approaches. Two thirds of the discrepancies occur when the new algorithm detects clouds while the ceilometer does not, resulting in different values of the probability of detection for different cloud-base altitudes: 93.8, 90.3, and 82.8% for low, mid, and high clouds, respectively. Finally, due to the characteristics of the spectral range, the new algorithm is found to be insensitive to the presence of inversion layers.
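A minimal sketch of the two-step test follows. The empirical clear-sky formula, its coefficients, and both thresholds are illustrative placeholders rather than the paper's fitted values.

```python
import numpy as np

def clear_sky_tb(t_air_k, e_wv_hpa, a=0.85, b=1.2):
    """Hypothetical empirical clear-sky window-channel Tb [K] as a function
    of surface air temperature and water vapor pressure."""
    return a * t_air_k + b * np.log(e_wv_hpa)

def is_cloud_free(tb_one_minute, t_air_k, e_wv_hpa,
                  spectral_margin=5.0, temporal_thresh=0.2):
    # Spectral test: measured Tb should not exceed the clear-sky prediction.
    spectral_ok = (tb_one_minute.mean()
                   < clear_sky_tb(t_air_k, e_wv_hpa) + spectral_margin)
    # Temporal test: clear skies vary little over one minute.
    temporal_ok = tb_one_minute.std() < temporal_thresh
    return spectral_ok and temporal_ok         # cloud-free only if both pass

tb = 245.0 + 0.05 * np.random.default_rng(3).normal(size=60)  # 1 Hz, 1 minute
print("cloud-free:", is_cloud_free(tb, t_air_k=288.0, e_wv_hpa=10.0))
```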
Rangan, Aaditya V; McGrouther, Caroline C; Kelsoe, John; Schork, Nicholas; Stahl, Eli; Zhu, Qian; Krishnan, Arjun; Yao, Vicky; Troyanskaya, Olga; Bilaloglu, Seda; Raghavan, Preeti; Bergen, Sarah; Jureus, Anders; Landen, Mikael
2018-05-14
A common goal in data analysis is to sift through a large data matrix and detect any significant submatrices (i.e., biclusters) that have a low numerical rank. We present a simple algorithm for tackling this biclustering problem. Our algorithm accumulates information about 2-by-2 submatrices (i.e., 'loops') within the data matrix, and focuses on rows and columns of the data matrix that participate in an abundance of low-rank loops. We demonstrate, through analysis and numerical experiments, that this loop-counting method performs well in a variety of scenarios, outperforming simple spectral methods in many situations of interest. Another important feature of our method is that it can easily be modified to account for aspects of experimental design which commonly arise in practice. For example, our algorithm can be modified to correct for controls, categorical and continuous covariates, as well as sparsity within the data. We demonstrate these practical features with two examples; the first drawn from gene-expression analysis and the second drawn from a much larger genome-wide association study (GWAS).
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Geogdzhayev, Igor V.; Cairns, Brian; Rossow, William B.; Lacis, Andrew A.
1999-01-01
This paper outlines the methodology of interpreting channel 1 and 2 AVHRR radiance data over the oceans and describes a detailed analysis of the sensitivity of monthly averages of retrieved aerosol parameters to the assumptions made in different retrieval algorithms. The analysis is based on using real AVHRR data and exploiting accurate numerical techniques for computing single and multiple scattering and spectral absorption of light in the vertically inhomogeneous atmosphere-ocean system. We show that two-channel algorithms can be expected to provide significantly more accurate and less biased retrievals of the aerosol optical thickness than one-channel algorithms and that imperfect cloud screening and calibration uncertainties are by far the largest sources of errors in the retrieved aerosol parameters. Both underestimating and overestimating aerosol absorption as well as the potentially strong variability of the real part of the aerosol refractive index may lead to regional and/or seasonal biases in optical thickness retrievals. The Angstrom exponent appears to be the most invariant aerosol size characteristic and should be retrieved along with optical thickness as the second aerosol parameter.
Terascale spectral element algorithms and implementations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fischer, P. F.; Tufo, H. M.
1999-08-17
We describe the development and implementation of an efficient spectral element code for multimillion gridpoint simulations of incompressible flows in general two- and three-dimensional domains. We review basic and recently developed algorithmic underpinnings that have resulted in good parallel and vector performance on a broad range of architectures, including the terascale computing systems now coming online at the DOE labs. Sustained performance of 219 GFLOPS has been recently achieved on 2048 nodes of the Intel ASCI-Red machine at Sandia.
Image Reconstruction for Hybrid True-Color Micro-CT
Xu, Qiong; Yu, Hengyong; Bennett, James; He, Peng; Zainon, Rafidah; Doesburg, Robert; Opie, Alex; Walsh, Mike; Shen, Haiou; Butler, Anthony; Butler, Phillip; Mou, Xuanqin; Wang, Ge
2013-01-01
X-ray micro-CT is an important imaging tool for biomedical researchers. Our group has recently proposed a hybrid “true-color” micro-CT system to improve contrast resolution with lower system cost and radiation dose. The system incorporates an energy-resolved photon-counting true-color detector into a conventional micro-CT configuration, and can be used for material decomposition. In this paper, we demonstrate an interior color-CT image reconstruction algorithm developed for this hybrid true-color micro-CT system. A compressive sensing-based statistical interior tomography method is employed to reconstruct each channel in the local spectral imaging chain, with the reconstructed global gray-scale image from the conventional imaging chain serving as the initial guess. Principal component analysis was used to map the spectral reconstructions into the color space. The proposed algorithm was evaluated by numerical simulations, physical phantom experiments, and animal studies. The results confirm the merits of the proposed algorithm, and demonstrate the feasibility of the hybrid true-color micro-CT system. Additionally, a “color diffusion” phenomenon was observed whereby high-quality true-color images are produced not only inside the region of interest, but also in neighboring regions. It appears that harnessing this phenomenon could reduce the required color detector size for a given ROI, further reducing system cost and radiation dose. PMID:22481806
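The color-mapping step can be sketched as follows: PCA (here via SVD) projects the per-energy-bin reconstructions onto three principal components treated as color axes. This is an illustrative reading of that step only, not the paper's reconstruction code:

import numpy as np

def spectral_to_color(channel_images):
    """Map a stack of per-energy-bin reconstructions (n_bins, H, W)
    to an RGB image via the first three principal components."""
    n_bins, h, w = channel_images.shape
    X = channel_images.reshape(n_bins, -1).T          # pixels x bins
    X = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # PCA via SVD
    rgb = X @ Vt[:3].T                                # pixels x 3
    rgb -= rgb.min(axis=0)                            # normalize channels
    rgb /= rgb.max(axis=0) + 1e-12
    return rgb.reshape(h, w, 3)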
Fast Detection of Material Deformation through Structural Dissimilarity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ushizima, Daniela; Perciano, Talita; Parkinson, Dilworth
2015-10-29
Designing materials that are resistant to extreme temperatures and brittleness relies on assessing the structural dynamics of samples. Algorithms are critically important to characterize material deformation under stress conditions. Here, we report on our design of coarse-grain parallel algorithms for image quality assessment based on structural information and for crack detection in gigabyte-scale experimental datasets. We show how key steps can be decomposed into distinct processing flows, one based on the structural similarity (SSIM) quality measure, and another on spectral content. These algorithms act upon image blocks that fit into memory, and can execute independently. We discuss the scientific relevance of the problem, key developments, and the decomposition of complementary tasks into separate executions. We show how to apply SSIM to detect material degradation, and illustrate how this metric can be allied to spectral analysis for structure probing, while using tiled multi-resolution pyramids stored in HDF5 chunked multi-dimensional arrays. Results show that the proposed experimental data representation supports an average compression rate of 10X, and that data compression scales linearly with the data size. We also illustrate how to correlate SSIM to crack formation, and how to use our numerical schemes to enable fast detection of deformation from 3D datasets evolving in time.
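A sketch of the blockwise SSIM pass using scikit-image's structural_similarity; the block size and the sequential loop (standing in for the coarse-grain parallel dispatch) are simplifications, and grayscale images with dimensions divisible by the block size are assumed:

import numpy as np
from skimage.metrics import structural_similarity

def blockwise_ssim(ref, test, block=256):
    """SSIM per image block; low local scores flag candidate regions of
    material deformation. Blocks are processed independently, so each
    fits in memory and could be dispatched to a separate worker."""
    data_range = float(ref.max() - ref.min())
    scores = {}
    for y in range(0, ref.shape[0], block):
        for x in range(0, ref.shape[1], block):
            scores[(y, x)] = structural_similarity(
                ref[y:y + block, x:x + block],
                test[y:y + block, x:x + block],
                data_range=data_range)
    return scores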
Detecting the red tide based on remote sensing data in optically complex East China Sea
NASA Astrophysics Data System (ADS)
Xu, Xiaohui; Pan, Delu; Mao, Zhihua; Tao, Bangyi; Liu, Qiong
2012-09-01
Red tide not only destroys marine fishery production, deteriorates the marine environment, and affects coastal tourism, but can also poison, and even kill, people who eat toxic seafood contaminated by red tide organisms. Remote sensing offers large-scale, synchronized, and rapid monitoring, making it one of the most important and effective means of red tide monitoring. This paper selects the high-frequency red tide areas of the East China Sea as the study area and MODIS/Aqua L2 data as the data source, and analyzes and compares the spectral differences between red tide and non-red tide water bodies across many historical events. Based on these spectral differences, the paper develops the criterion Rrs555/Rrs488 > 1.5 to extract red tide information. Applying the algorithm to the red tide event that occurred in the East China Sea on May 28, 2009 shows that the method can effectively determine the location of the red tide occurrence; there is a good correspondence between the red tide extraction result and the chlorophyll-a concentration retrieved by remote sensing, which shows that the algorithm can effectively locate and extract red tide information.
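The band-ratio criterion is simple enough to state directly in code; the function name and NaN handling are illustrative:

import numpy as np

def red_tide_mask(rrs555, rrs488, ratio_threshold=1.5):
    """Flag red-tide pixels where Rrs(555)/Rrs(488) exceeds 1.5, per the
    criterion above. Inputs are remote sensing reflectance arrays from
    MODIS/Aqua L2 granules; invalid pixels come back as False."""
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(rrs488 > 0, rrs555 / rrs488, np.nan)
    return ratio > ratio_threshold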
NASA Technical Reports Server (NTRS)
Mannino, Antonio; Novak, Michael G.; Hooker, Stanford B.; Hyde, Kimberly; Aurin, Dick
2014-01-01
An extensive set of field measurements has been collected throughout the continental margin of the northeastern U.S. from 2004 to 2011 to develop and validate ocean color satellite algorithms for the retrieval of the absorption coefficient of chromophoric dissolved organic matter (aCDOM) and CDOM spectral slopes for the 275:295 nm and 300:600 nm spectral ranges (S275:295 and S300:600). Remote sensing reflectance (Rrs) measurements computed from in-water radiometry profiles, along with aCDOM(λ) data, are applied to develop several types of algorithms for the SeaWiFS and MODIS-Aqua ocean color satellite sensors, which involve least squares linear regression of aCDOM(λ) with (1) Rrs band ratios, (2) quasi-analytical algorithm-based (QAA-based) products of total absorption coefficients, (3) multiple Rrs bands within a multiple linear regression (MLR) analysis, and (4) the diffuse attenuation coefficient (Kd). The relative error (mean absolute percent difference; MAPD) for the MLR retrievals of aCDOM(275), aCDOM(355), aCDOM(380), aCDOM(412) and aCDOM(443) for our study region ranges from 20.4% to 23.9% for MODIS-Aqua and from 27.3% to 30% for SeaWiFS. Because of the narrower range of CDOM spectral slope values, the MAPD for the MLR S275:295 and QAA-based S300:600 algorithms are much lower: 9.9% and 8.3% for SeaWiFS and 8.7% and 6.3% for MODIS, respectively. Seasonal and spatial MODIS-Aqua and SeaWiFS distributions of aCDOM, S275:295 and S300:600 processed with these algorithms are consistent with field measurements and the processes that impact CDOM levels along the continental shelf of the northeastern U.S. Several satellite data processing factors correlate with higher uncertainty in satellite retrievals of aCDOM, S275:295 and S300:600 within the coastal ocean, including solar zenith angle, sensor viewing angle, and the atmospheric products applied for atmospheric corrections. Algorithms that include ultraviolet Rrs bands provide a better fit to field measurements than algorithms without them, suggesting that satellite sensors with ultraviolet capability could provide better retrievals of CDOM. Because of the strong correlations between CDOM parameters and DOM constituents in the coastal ocean, satellite observations of CDOM parameters can be applied to study the distributions, sources and sinks of DOM, which are relevant for understanding the carbon cycle, modeling the Earth system, and discerning how the Earth is changing.
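A sketch of the MLR variant, item (3) above: regress aCDOM on multiple Rrs bands by least squares. The log10 transforms are a common convention in ocean-color regressions, assumed here rather than taken from the paper:

import numpy as np

def fit_acdom_mlr(rrs_bands, acdom):
    """Multiple linear regression of log10(aCDOM) on log10(Rrs) at
    several bands. rrs_bands: (n_samples, n_bands) matrix of positive
    reflectances; acdom: matching field measurements (1/m)."""
    X = np.column_stack([np.ones(len(acdom)), np.log10(rrs_bands)])
    coef, *_ = np.linalg.lstsq(X, np.log10(acdom), rcond=None)
    return coef

def predict_acdom(coef, rrs_bands):
    X = np.column_stack([np.ones(len(rrs_bands)), np.log10(rrs_bands)])
    return 10.0 ** (X @ coef)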
Yu, Haitong; Liu, Dong; Duan, Yuanyuan; Wang, Xiaodong
2014-04-07
Opacified aerogels are particulate thermal insulating materials in which micrometric opacifier mineral grains are surrounded by silica aerogel nanoparticles. A geometric model was developed to characterize the spectral properties of such microsize grains surrounded by much smaller particles. The model represents the material's microstructure with the spherical opacifier's spectral properties calculated using the multi-sphere T-matrix (MSTM) algorithm. The results are validated by comparing the measured reflectance of an opacified aerogel slab against the value predicted using the discrete ordinate method (DOM) based on calculated optical properties. The results suggest that the large particles embedded in the nanoparticle matrices show different scattering and absorption properties from the single scattering condition and that the MSTM and DOM algorithms are both useful for calculating the spectral and radiative properties of this particulate system.
Rainfall Estimation over the Nile Basin using an Adapted Version of the SCaMPR Algorithm
NASA Astrophysics Data System (ADS)
Habib, E. H.; Kuligowski, R. J.; Elshamy, M. E.; Ali, M. A.; Haile, A.; Amin, D.; Eldin, A.
2011-12-01
Management of Egypt's Aswan High Dam is critical not only for flood control on the Nile but also for ensuring adequate water supplies for most of Egypt, since rainfall is scarce over the vast majority of its land area. However, reservoir inflow is driven by rainfall over Sudan, Ethiopia, Uganda, and several other countries from which routine rain gauge data are sparse. Satellite-derived estimates of rainfall offer a much more detailed and timely set of data to form a basis for decisions on the operation of the dam. A single-channel infrared algorithm is currently in operational use at the Egyptian Nile Forecast Center (NFC). This study reports on the adaptation of a multi-spectral, multi-instrument satellite rainfall estimation algorithm (Self-Calibrating Multivariate Precipitation Retrieval, SCaMPR) for operational application over the Nile Basin. The algorithm uses a set of rainfall predictors from multi-spectral infrared cloud-top observations and self-calibrates them to a set of predictands from microwave (MW) rain rate estimates. For application over the Nile Basin, the SCaMPR algorithm uses multiple satellite IR channels recently made available to NFC from the Spinning Enhanced Visible and Infrared Imager (SEVIRI). Microwave rain rates are acquired from multiple sources such as SSM/I, SSMIS, AMSU, AMSR-E, and TMI. The algorithm has two main steps: rain/no-rain separation using discriminant analysis, and rain rate estimation using stepwise linear regression. We test two modes of algorithm calibration: real-time calibration with continuous updates of coefficients from newly arriving MW rain rates, and calibration using static coefficients derived from past IR-MW observations. We also compare the SCaMPR algorithm to other global-scale satellite rainfall algorithms (e.g., the 'Tropical Rainfall Measuring Mission (TRMM) and other sources' (TRMM-3B42) product and the National Oceanic and Atmospheric Administration Climate Prediction Center (NOAA-CPC) CMORPH product). The algorithm has several potential future applications, such as improving the performance accuracy of hydrologic forecasting models over the Nile Basin, and utilizing the enhanced rainfall datasets and better-calibrated hydrologic models to assess the impacts of climate change on the region's water availability.
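A minimal sketch of the two-step calibration, using scikit-learn's discriminant analysis and ordinary linear regression in place of the operational stepwise regression; the 0.1 mm/h rain threshold is an assumed illustrative value:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LinearRegression

def calibrate_scampr_like(ir_predictors, mw_rain_rate, rain_threshold=0.1):
    """Step 1: discriminant analysis for rain/no-rain separation.
    Step 2: linear regression of rain rate on IR predictors over
    raining pixels, calibrated against the MW rain-rate predictands."""
    raining = mw_rain_rate > rain_threshold
    detector = LinearDiscriminantAnalysis().fit(ir_predictors, raining)
    estimator = LinearRegression().fit(ir_predictors[raining],
                                       mw_rain_rate[raining])
    return detector, estimator

def estimate_rain(detector, estimator, ir_predictors):
    rate = np.where(detector.predict(ir_predictors),
                    estimator.predict(ir_predictors), 0.0)
    return np.clip(rate, 0.0, None)   # no negative rain rates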
NASA Astrophysics Data System (ADS)
Haris, A.; Pradana, G. S.; Riyanto, A.
2017-07-01
The tectonic setting of the Bird Head, Papua Island, is an important model for petroleum systems in the eastern part of Indonesia. Exploration in the area began with the discovery of oil seepage in the Bintuni and Salawati Basins. Biogenic gas in shallow layers has become an interesting issue in hydrocarbon exploration: the appearance of hydrocarbon accumulations of dry gas type in shallow layers makes biogenic gas appealing for further research. This paper aims at delineating sweet spot hydrocarbon potential in shallow layers by applying the spectral decomposition technique. Spectral decomposition decomposes the seismic signal into individual frequency components that carry significant geological meaning. One spectral decomposition method is the Continuous Wavelet Transform (CWT), which transforms the seismic signal into the time and frequency domains simultaneously, simplifying time-frequency map analysis. As time resolution increases, frequency resolution decreases, and vice versa. In this study, we perform low-frequency shadow zone analysis in which an amplitude anomaly observed at a low frequency of 15 Hz is compared with the amplitude at mid (20 Hz) and high (30 Hz) frequencies. The amplitude anomaly that appears at low frequency disappears at high frequency, the signature of a low-frequency shadow. Spectral decomposition using the CWT algorithm was successfully applied to delineate the sweet spot zone.
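A sketch of CWT-based single-frequency sections using PyWavelets with a Morlet wavelet; the wavelet choice and the standard scale-to-frequency mapping are illustrative, not the authors' processing code:

import numpy as np
import pywt

def single_frequency_sections(trace, dt, freqs_hz=(15.0, 20.0, 30.0)):
    """CWT spectral decomposition of one seismic trace: return the
    amplitude at a few discrete frequencies so low-frequency shadow
    zones can be compared across 15/20/30 Hz sections."""
    wavelet = "morl"
    fc = pywt.central_frequency(wavelet)        # normalized center frequency
    scales = [fc / (f * dt) for f in freqs_hz]  # scale <-> frequency mapping
    coefs, _ = pywt.cwt(trace, scales, wavelet, sampling_period=dt)
    return {f: np.abs(c) for f, c in zip(freqs_hz, coefs)}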
The Chandra Source Catalog: Algorithms
NASA Astrophysics Data System (ADS)
McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.
2009-09-01
Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyse the source properties we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.
Chung, King
2004-01-01
This review discusses the challenges in hearing aid design and fitting and the recent developments in advanced signal processing technologies to meet these challenges. The first part of the review discusses the basic concepts and the building blocks of digital signal processing algorithms, namely, the signal detection and analysis unit, the decision rules, and the time constants involved in the execution of the decision. In addition, mechanisms and the differences in the implementation of various strategies used to reduce the negative effects of noise are discussed. These technologies include the microphone technologies that take advantage of the spatial differences between speech and noise and the noise reduction algorithms that take advantage of the spectral difference and temporal separation between speech and noise. The specific technologies discussed in this paper include first-order directional microphones, adaptive directional microphones, second-order directional microphones, microphone matching algorithms, array microphones, multichannel adaptive noise reduction algorithms, and synchrony detection noise reduction algorithms. Verification data for these technologies, if available, are also summarized. PMID:15678225
Quantum computation in the analysis of hyperspectral data
NASA Astrophysics Data System (ADS)
Gomez, Richard B.; Ghoshal, Debabrata; Jayanna, Anil
2004-08-01
Recent research on quantum computation provides quantum algorithms with higher efficiency and speedup than their classical counterparts. In this paper, it is our intent to present the results of our investigation of several applications of such quantum algorithms, especially Grover's search algorithm, in the analysis of hyperspectral data. We found many parallels with Grover's method in existing data processing work that makes use of classical spectral matching algorithms. Our efforts also included the study of several methods dealing with hyperspectral image analysis where classical computation methods involving large data sets could be replaced with quantum computation methods. The crux of the problem in computation involving a hyperspectral image data cube is to convert the large amount of data in high-dimensional space into real information. Currently, using the classical model, several time-consuming steps are necessary to analyze these data, including animation, the Minimum Noise Fraction transform, the Pixel Purity Index algorithm, N-dimensional scatter plots, and identification of endmember spectra. If a quantum model of computation involving hyperspectral image data can be developed and formalized, it is highly likely that information retrieval from hyperspectral image data cubes would be a much easier process and the final information content would be much more meaningful and timely. In this case, dimensionality would not be a curse, but a blessing.
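For concreteness, Grover's search is easy to simulate classically as a vector of amplitudes: a sign-flip oracle on the marked index followed by inversion about the mean, repeated about (pi/4)*sqrt(N) times, drives nearly all probability onto the marked item. A small sketch (memory grows exponentially with qubit count, so this is illustration only):

import numpy as np

def grover_search(n_items, marked, n_iters=None):
    """Classical state-vector simulation of Grover's search."""
    state = np.full(n_items, 1.0 / np.sqrt(n_items))   # uniform superposition
    if n_iters is None:
        n_iters = int(np.floor(np.pi / 4 * np.sqrt(n_items)))
    for _ in range(n_iters):
        state[marked] *= -1.0                 # oracle: flip marked amplitude
        state = 2 * state.mean() - state      # diffusion: invert about mean
    return int(np.argmax(state ** 2)), state[marked] ** 2

# e.g. grover_search(1024, marked=417) returns (417, ~0.999) after ~25
# iterations, versus ~512 expected classical queries.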
Melkonian, D; Korner, A; Meares, R; Bahramali, H
2012-10-01
A novel method for the time-frequency analysis of non-stationary heart rate variability (HRV) is developed that introduces the fragmentary spectrum as a measure bringing together the frequency content, timing and duration of HRV segments. The fragmentary spectrum is calculated by the similar basis function algorithm. This numerical tool for time-to-frequency and frequency-to-time Fourier transformations accepts both uniform and non-uniform sampling intervals, and is applicable to signal segments of arbitrary length. Once the fragmentary spectrum is calculated, the inverse transform recovers the original signal and reveals the accuracy of the spectral estimates. Numerical experiments show that discontinuities at the boundaries of the succession of inter-beat intervals can cause unacceptable distortions of the spectral estimates. We have developed a measure that we call the "RR deltagram" as a form of the HRV data that minimises spectral errors. The analysis of experimental HRV data from real-life and controlled breathing conditions suggests transient oscillatory components as functionally meaningful elements of highly complex and irregular patterns of HRV. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Bednarkiewicz, Artur; Whelan, Maurice P
2008-01-01
Fluorescence lifetime imaging (FLIM) is very demanding from a technical and computational perspective, and the output is usually a compromise between acquisition/processing time and data accuracy and precision. We present a new approach to acquisition, analysis, and reconstruction of microscopic FLIM images by employing a digital micromirror device (DMD) as a spatial illuminator. In the first step, the whole field fluorescence image is collected by a color charge-coupled device (CCD) camera. Further qualitative spectral analysis and sample segmentation are performed to spatially distinguish between spectrally different regions on the sample. Next, the fluorescence of the sample is excited segment by segment, and fluorescence lifetimes are acquired with a photon counting technique. FLIM image reconstruction is performed by either raster scanning the sample or by directly accessing specific regions of interest. The unique features of the DMD illuminator allow the rapid on-line measurement of global good initial parameters (GIP), which are supplied to the first iteration of the fitting algorithm. As a consequence, a decrease of the computation time required to obtain a satisfactory quality-of-fit is achieved without compromising the accuracy and precision of the lifetime measurements.
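The role of the good initial parameters can be illustrated with a single-exponential decay fit; the (amplitude, lifetime, background) model is a common simplification of FLIM fitting, not necessarily the authors' model:

import numpy as np
from scipy.optimize import curve_fit

def decay(t, amplitude, tau, background):
    """Single-exponential fluorescence decay model."""
    return amplitude * np.exp(-t / tau) + background

def fit_lifetime(t, counts, initial_params):
    """Fit photon-counting decay data. Supplying good initial
    parameters (GIP), measured globally per sample segment, starts the
    iterations near the solution and cuts computation time without
    sacrificing accuracy or precision."""
    popt, _ = curve_fit(decay, t, counts, p0=initial_params)
    return popt   # fitted (amplitude, lifetime, background)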
Some practical aspects of lossless and nearly-lossless compression of AVHRR imagery
NASA Technical Reports Server (NTRS)
Hogan, David B.; Miller, Chris X.; Christensen, Than Lee; Moorti, Raj
1994-01-01
Compression of Advanced Very High Resolution Radiometer (AVHRR) imagery operating in a lossless or nearly-lossless mode is evaluated. Several practical issues are analyzed including: variability of compression over time and among channels, rate-smoothing buffer size, multi-spectral preprocessing of data, day/night handling, and impact on key operational data applications. This analysis is based on a DPCM algorithm employing the Universal Noiseless Coder, which is a candidate for inclusion in many future remote sensing systems. It is shown that compression rates of about 2:1 (daytime) can be achieved with modest buffer sizes (less than or equal to 2.5 Mbytes) and a relatively simple multi-spectral preprocessing step.
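A minimal sketch of the DPCM preprocessing step (previous-pixel prediction along a scanline); the small, near-zero-centered residuals would then be passed to the entropy coder (the Universal Noiseless Coder), which is not shown, and multi-spectral prediction variants are omitted:

import numpy as np

def dpcm_residuals(scanline):
    """Predict each pixel from its left neighbor and keep the residuals."""
    x = scanline.astype(np.int32)   # widen 10-bit counts to avoid overflow
    residuals = np.empty_like(x)
    residuals[0] = x[0]             # first pixel transmitted as-is
    residuals[1:] = x[1:] - x[:-1]
    return residuals

def dpcm_reconstruct(residuals):
    return np.cumsum(residuals)     # exact inverse: lossless round trip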
Digital imaging biomarkers feed machine learning for melanoma screening.
Gareau, Daniel S; Correa da Rosa, Joel; Yagerman, Sarah; Carucci, John A; Gulati, Nicholas; Hueto, Ferran; DeFazio, Jennifer L; Suárez-Fariñas, Mayte; Marghoob, Ashfaq; Krueger, James G
2017-07-01
We developed an automated approach for generating quantitative image analysis metrics (imaging biomarkers) that are then analysed with a set of 13 machine learning algorithms to generate an overall risk score that is called a Q-score. These methods were applied to a set of 120 "difficult" dermoscopy images of dysplastic nevi and melanomas that were subsequently excised/classified. This approach yielded 98% sensitivity and 36% specificity for melanoma detection, approaching sensitivity/specificity of expert lesion evaluation. Importantly, we found strong spectral dependence of many imaging biomarkers in blue or red colour channels, suggesting the need to optimize spectral evaluation of pigmented lesions. © 2016 The Authors. Experimental Dermatology Published by John Wiley & Sons Ltd.
Spectrally And Temporally Resolved Low-Light Level Video Microscopy
NASA Astrophysics Data System (ADS)
Wampler, John E.; Furukawa, Ruth; Fechheimer, Marcus
1989-12-01
The IDG low-light video microscope system was designed to aid studies of the localization of subcellular luminescence sources and stimulus/response coupling in single living cells using luminescent probes. Much of the motivation for the design of this instrument system came from the pioneering efforts of Dr. Reynolds (Reynolds, Q. Rev. Biophys. 5, 295-347; Reynolds and Taylor, Bioscience 30, 586-592), who showed the value of intensified video camera systems for the detection and localization of fluorescence and bioluminescence signals from biological tissues. Our instrument system has essentially two roles: 1) localization and quantitation of very weak bioluminescence signals, and 2) quantitation of intracellular environmental characteristics such as pH and calcium ion concentrations using fluorescent and bioluminescent probes. The instrument system exhibits an over one-million-fold operating range, allowing visualization and enhancement of quantum-limited images with quantum-limited response, spectral analysis of fluorescence signals, and transmitted-light imaging. The computer control of the system implements rapid switching between light regimes, spatially resolved spectral scanning, and digital data processing for spectral shape analysis and for detailed analysis of the statistical distribution of single-cell measurements. The system design and software algorithms used by the system are summarized. These design criteria are illustrated with examples taken from studies of bioluminescence, applications of bioluminescence to study developmental processes and gene expression in single living cells, and applications of fluorescent probes to study stimulus/response coupling in living cells.
Directly data processing algorithm for multi-wavelength pyrometer (MWP).
Xing, Jian; Peng, Bo; Ma, Zhao; Guo, Xin; Dai, Li; Gu, Weihong; Song, Wenlong
2017-11-27
Data processing for the multi-wavelength pyrometer (MWP) is a difficult problem because the emissivity is unknown. Solutions developed so far generally assume particular mathematical relations for emissivity versus wavelength or emissivity versus temperature, and deviations between these assumptions and the actual situation can seriously affect the inversion results. The main aim of this study is therefore a direct data processing algorithm for MWP that does not require a spectral emissivity model to be assumed in advance. Two new data processing algorithms, the Gradient Projection (GP) algorithm and the Internal Penalty Function (IPF) algorithm, neither of which requires fixing an emissivity model in advance, are proposed. The core idea is to transform the MWP data processing problem into a constrained optimization problem, which can then be solved by the GP or IPF algorithms. Comparison of simulation results for several typical spectral emissivity models shows that the IPF algorithm is superior to the GP algorithm in both accuracy and efficiency. Rocket nozzle temperature experiments show that the true temperatures inverted by the IPF algorithm also agree well with the theoretical design temperature. The combination of the IPF algorithm with MWP is thus expected to provide a direct data processing approach that clears the unknown-emissivity obstacle for MWP.
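To make the constrained-optimization idea concrete, here is a sketch that inverts multi-wavelength radiances for temperature and per-wavelength emissivities under bound constraints. Bounded L-BFGS-B stands in for the paper's GP/IPF machinery (an interior penalty method effectively enforces the same bounds via penalty terms), and all numerical values are illustrative:

import numpy as np
from scipy.optimize import minimize

C1 = 1.191e8    # first radiation constant, W um^4 m^-2 sr^-1
C2 = 1.4388e4   # second radiation constant, um K

def planck(lam_um, T):
    """Spectral radiance of a blackbody at wavelength lam_um (microns)."""
    return C1 / (lam_um**5 * (np.exp(C2 / (lam_um * T)) - 1.0))

def invert_mwp(lam_um, L_meas, T0=2000.0):
    """Unknowns: temperature T plus one emissivity per wavelength,
    constrained to (0, 1]. The bounds are what make the otherwise
    underdetermined problem tractable without an emissivity model."""
    n = len(lam_um)

    def cost(p):
        T, eps = p[0], p[1:]
        return np.sum((eps * planck(lam_um, T) - L_meas) ** 2)

    x0 = np.concatenate([[T0], np.full(n, 0.5)])
    bounds = [(300.0, 4000.0)] + [(1e-3, 1.0)] * n
    res = minimize(cost, x0, bounds=bounds, method="L-BFGS-B")
    return res.x[0], res.x[1:]   # temperature, emissivities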
TES Level 1 Algorithms: Interferogram Processing, Geolocation, Radiometric, and Spectral Calibration
NASA Technical Reports Server (NTRS)
Worden, Helen; Beer, Reinhard; Bowman, Kevin W.; Fisher, Brendan; Luo, Mingzhao; Rider, David; Sarkissian, Edwin; Tremblay, Denis; Zong, Jia
2006-01-01
The Tropospheric Emission Spectrometer (TES) on the Earth Observing System (EOS) Aura satellite measures the infrared radiance emitted by the Earth's surface and atmosphere using Fourier transform spectrometry. The measured interferograms are converted into geolocated, calibrated radiance spectra by the L1 (Level 1) processing, and are the inputs to L2 (Level 2) retrievals of atmospheric parameters, such as vertical profiles of trace gas abundance. We describe the algorithmic components of TES Level 1 processing, giving examples of the intermediate results and diagnostics that are necessary for creating TES L1 products. An assessment of noise-equivalent spectral radiance levels and current systematic errors is provided. As an initial validation of our spectral radiances, TES data are compared to the Atmospheric Infrared Sounder (AIRS) (on EOS Aqua), after accounting for spectral resolution differences by applying the AIRS spectral response function to the TES spectra. For the TES L1 nadir data products currently available, the agreement with AIRS is 1 K or better.
NASA Astrophysics Data System (ADS)
Ma, Suodong; Pan, Qiao; Shen, Weimin
2016-09-01
As one kind of light source simulation device, spectrally tunable light sources can generate outputs with specific spectral shapes and radiant intensities according to different application requirements, and are in urgent demand in many fields of the national economy and the defense industry. Compared with LED-based spectrally tunable light sources, one based on a DMD-convex grating Offner configuration has the advantages of high spectral resolution, strong digital controllability, and high spectrum synthesis accuracy. As a key link in achieving target spectrum outputs with this type of light source, the spectrum synthesis algorithm based on spectrum matching is therefore very important. An improved spectrum synthesis algorithm based on linear least-squares initialization and Levenberg-Marquardt iterative optimization is proposed in this paper on the basis of an in-depth study of the spectrum matching principle. The effectiveness of the proposed method is verified by a series of simulations and experiments.
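A sketch of the two-stage solve: a linear least-squares fit of channel weights for initialization, refined by Levenberg-Marquardt. The nonnegativity of the initial fit is an assumption motivated by physically nonnegative channel intensities, not a detail taken from the paper:

import numpy as np
from scipy.optimize import nnls, least_squares

def synthesize_spectrum(basis, target):
    """Solve for channel weights w so that basis @ w matches the target
    spectrum. basis: (n_wavelengths, n_channels) matrix of measured
    per-channel output spectra; names are illustrative."""
    w0, _ = nnls(basis, target)                        # LLS initialization
    res = least_squares(lambda w: basis @ w - target,  # LM refinement
                        w0, method="lm")
    return res.x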
Spectral CT Reconstruction with Image Sparsity and Spectral Mean
Zhang, Yi; Xi, Yan; Yang, Qingsong; Cong, Wenxiang; Zhou, Jiliu
2017-01-01
Photon-counting detectors can acquire x-ray intensity data in different energy bins. The signal-to-noise ratio of the resultant raw data in each energy bin is generally low due to the narrow bin width and quantum noise. To address this problem, here we propose an image reconstruction approach for spectral CT that simultaneously reconstructs x-ray attenuation coefficients in all the energy bins. Because the measured spectral data are highly correlated among the x-ray energy bins, the intra-image sparsity and inter-image similarity are important prior knowledge for image reconstruction. Inspired by this observation, the total variation (TV) and spectral mean (SM) measures are combined to improve the quality of reconstructed images. For this purpose, a linear mapping function is used to minimize image differences between energy bins. The split Bregman technique is applied to perform image reconstruction. Our numerical and experimental results show that the proposed algorithms outperform competing iterative algorithms in this context. PMID:29034267
Onboard Science and Applications Algorithm for Hyperspectral Data Reduction
NASA Technical Reports Server (NTRS)
Chien, Steve A.; Davies, Ashley G.; Silverman, Dorothy; Mandl, Daniel
2012-01-01
An onboard processing mission concept is under development for a possible Direct Broadcast capability for the HyspIRI mission, a Hyperspectral remote sensing mission under consideration for launch in the next decade. The concept would intelligently spectrally and spatially subsample the data as well as generate science products onboard to enable return of key rapid response science and applications information despite limited downlink bandwidth. This rapid data delivery concept focuses on wildfires and volcanoes as primary applications, but also has applications to vegetation, coastal flooding, dust, and snow/ice applications. Operationally, the HyspIRI team would define a set of spatial regions of interest where specific algorithms would be executed. For example, known coastal areas would have certain products or bands downlinked, ocean areas might have other bands downlinked, and during fire seasons other areas would be processed for active fire detections. Ground operations would automatically generate the mission plans specifying the highest priority tasks executable within onboard computation, setup, and data downlink constraints. The spectral bands of the TIR (thermal infrared) instrument can accurately detect the thermal signature of fires and send down alerts, as well as the thermal and VSWIR (visible to short-wave infrared) data corresponding to the active fires. Active volcanism also produces a distinctive thermal signature that can be detected onboard to enable spatial subsampling. Onboard algorithms and ground-based algorithms suitable for onboard deployment are mature. On HyspIRI, the algorithm would perform a table-driven temperature inversion from several spectral TIR bands, and then trigger downlink of the entire spectrum for each of the hot pixels identified. Ocean and coastal applications include sea surface temperature (using a small spectral subset of TIR data, but requiring considerable ancillary data), and ocean color applications to track biological activity such as harmful algal blooms. Measuring surface water extent to track flooding is another rapid response product leveraging VSWIR spectral information.
NASA Technical Reports Server (NTRS)
DeBaca, Richard C.; Sarkissian, Edwin; Madatyan, Mariyetta; Shepard, Douglas; Gluck, Scott; Apolinski, Mark; McDuffie, James; Tremblay, Dennis
2006-01-01
TES L1B Subsystem is a computer program that performs several functions for the Tropospheric Emission Spectrometer (TES). The term "L1B" (an abbreviation of "level 1B") refers to data, specific to TES, on radiometrically calibrated spectral radiances and their corresponding noise equivalent spectral radiances (NESRs), plus ancillary geolocation, quality, and engineering data. The functions performed by TES L1B Subsystem include shear analysis, monitoring of signal levels, detection of ice build-up, and phase correction and radiometric and spectral calibration of TES target data. Also, the program computes NESRs for target spectra, writes scientific TES level-1B data to hierarchical-data-format (HDF) files for public distribution, computes brightness temperatures, and quantifies interpixel signal variability for the purpose of first-order cloud and heterogeneous land screening by the level-2 software summarized in the immediately following article. This program uses an in-house-developed algorithm, called "NUSRT," to correct instrument line-shape factors.
Neural net classification of REM sleep based on spectral measures as compared to nonlinear measures.
Grözinger, M; Fell, J; Röschke, J
2001-11-01
In various studies, the implementation of nonlinear and nonconventional measures has significantly improved EEG (electroencephalogram) analyses as compared to using conventional parameters alone. A neural network algorithm well proven in our laboratory for the automatic recognition of rapid eye movement (REM) sleep was investigated in this regard. Originally based on a broad range of spectral power inputs, we additionally supplied the nonlinear measures of the largest Lyapunov exponent and correlation dimension as well as the nonconventional stochastic measures of spectral entropy and entropy of amplitudes. No improvement in the detection of REM sleep could be achieved by the inclusion of the new measures. The accuracy of the classification was significantly worse, however, when supplied with these variables alone. In view of results demonstrating the efficiency of nonconventional measures in EEG analysis, the benefit appears to depend on the nature of the problem.
Makkai, Géza; Buzády, Andrea; Erostyák, János
2010-01-01
Determination of the concentrations of spectrally overlapping compounds presents special difficulties. Several methods are available to calculate the constituents' concentrations in moderately complex mixtures, and a method that can provide information about spectrally hidden components in mixtures is very useful. Two methods powerful in resolving spectral components are compared in this paper. The first method tested is Derivative Matrix Isopotential Synchronous Fluorimetry (DMISF). It is based on derivative analysis of MISF spectra, which are constructed using isopotential trajectories in the Excitation-Emission Matrix (EEM) of the background solution. For the DMISF method, a mathematical routine fitting the 3D data of EEMs was developed. The other method tested uses a classical Least Squares Fitting (LSF) algorithm, wherein Rayleigh- and Raman-scattering bands may lead to complications. Both methods give excellent sensitivity, and each has advantages over the other. Detection limits of DMISF and LSF have been determined at very different concentration and noise levels.
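The LSF side of the comparison reduces to ordinary linear least squares once the scattering bands are masked out; a minimal sketch, with illustrative names:

import numpy as np

def lsf_concentrations(component_spectra, mixture_spectrum):
    """Classical least-squares fitting: model the mixture spectrum as a
    linear combination of known component spectra and solve for the
    concentrations. Rayleigh/Raman scattering bands should be masked
    from both inputs beforehand, as noted above.

    component_spectra : (n_wavelengths, n_components) matrix of
                        unit-concentration emission spectra
    mixture_spectrum  : (n_wavelengths,) measured mixture spectrum
    """
    coef, *_ = np.linalg.lstsq(component_spectra, mixture_spectrum,
                               rcond=None)
    return coef   # estimated concentrations of each component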
NASA Astrophysics Data System (ADS)
Schmalz, M.; Ritter, G.; Key, R.
Accurate and computationally efficient spectral signature classification is a crucial step in the nonimaging detection and recognition of spaceborne objects. In classical hyperspectral recognition applications using linear mixing models, signature classification accuracy depends on accurate spectral endmember discrimination [1]. If the endmembers cannot be classified correctly, then the signatures cannot be classified correctly, and object recognition from hyperspectral data will be inaccurate. In practice, the number of endmembers accurately classified often depends linearly on the number of inputs. This can lead to potentially severe classification errors in the presence of noise or densely interleaved signatures. In this paper, we present a comparison of emerging technologies for nonimaging spectral signature classification based on a highly accurate, efficient search engine called Tabular Nearest Neighbor Encoding (TNE) [3,4] and a neural network technology called Morphological Neural Networks (MNNs) [5]. Based on prior results, TNE can optimize its classifier performance to track input nonergodicities, as well as yield measures of confidence or caution for evaluation of classification results. Unlike neural networks, TNE does not have a hidden intermediate data structure (e.g., the neural net weight matrix). Instead, TNE generates and exploits a user-accessible data structure called the agreement map (AM), which can be manipulated by Boolean logic operations to effect accurate classifier refinement algorithms. The open architecture and programmability of TNE's agreement map processing allow a TNE programmer or user to determine classification accuracy, as well as characterize in detail the signatures for which TNE did not obtain classification matches, and why such mismatches occurred. In this study, we compare TNE- and MNN-based endmember classification, using performance metrics such as probability of correct classification (Pd) and rate of false detections (Rfa). As proof of principle, we analyze classification of multiple closely spaced signatures from a NASA database of space material signatures. Additional analysis pertains to computational complexity and noise sensitivity, which are superior to those of Bayesian techniques based on classical neural networks. [1] Winter, M.E., "Fast autonomous spectral end-member determination in hyperspectral data," in Proceedings of the 13th International Conference on Applied Geologic Remote Sensing, Vancouver, B.C., Canada, pp. 337-44 (1999). [2] Keshava, N., "A survey of spectral unmixing algorithms," Lincoln Laboratory Journal 14:55-78 (2003). [3] Key, G., M.S. Schmalz, F.M. Caimi, and G.X. Ritter, "Performance analysis of tabular nearest neighbor encoding algorithm for joint compression and ATR," in Proceedings SPIE 3814:115-126 (1999). [4] Schmalz, M.S. and G. Key, "Algorithms for hyperspectral signature classification in unresolved object detection using tabular nearest neighbor encoding," in Proceedings of the 2007 AMOS Conference, Maui, HI (2007). [5] Ritter, G.X., G. Urcid, and M.S. Schmalz, "Autonomous single-pass endmember approximation using lattice auto-associative memories," Neurocomputing (Elsevier), accepted (June 2008).